diff --git a/.huskies/work/1_backlog/510_bug_stale_1_backlog_filesystem_shadows_get_re_promoted_by_rate_limit_retry_timers_yanking_successfully_merged_stories_back_into_current.md b/.huskies/work/1_backlog/510_bug_stale_1_backlog_filesystem_shadows_get_re_promoted_by_rate_limit_retry_timers_yanking_successfully_merged_stories_back_into_current.md deleted file mode 100644 index 485c6a8e..00000000 --- a/.huskies/work/1_backlog/510_bug_stale_1_backlog_filesystem_shadows_get_re_promoted_by_rate_limit_retry_timers_yanking_successfully_merged_stories_back_into_current.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -name: "Stale 1_backlog filesystem shadows get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current" ---- - -# Bug 510: Stale 1_backlog filesystem shadows get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current - -## Description - -After a story successfully completes the entire pipeline — coder runs, gates pass, mergemaster squashes the feature branch to master, lifecycle moves the story from `4_merge/` to `5_done/` — a stale filesystem shadow of the story's markdown file remains in `.huskies/work/1_backlog/`. This shadow is a leftover from the 491/492 migration: story state moved to the database as the source of truth, but the lifecycle move logic in `lifecycle.rs` is still operating on the filesystem and doesn't fully clean up after successful pipeline completions. - -When a rate-limit retry timer subsequently fires for that story (rate limits get scheduled by story 496's auto-retry whenever an agent is hard-blocked, and bug 501 means those timers aren't cancelled on successful completion either), the timer fire path calls `move_story_to_current()`, which uses the **filesystem-only** `move_item` helper. That helper finds the stale `1_backlog/` shadow and "moves" it to `2_current/` — even though the story is correctly in `5_done` in the database. 
- -Net effect: a fully-merged, archived-to-done story suddenly reappears in `current` with a fresh coder spawned on it. The matrix bot sends `Done → Current` notifications. The agent burns tokens working on a story whose work has already shipped to master. The user sees the story flapping and assumes the merge didn't actually happen. - -**Observed live on 2026-04-09 against story 503:** - -``` -18:31:32 [lifecycle] Moved '503_…' from work/4_merge/ to work/5_done/ -18:31:32 [bot] Sending stage notification: 🎉 #503 … — Merge → Done -18:32:21 [timer] Timer fired for story 503_… -18:32:21 [lifecycle] Moved '503_…' from work/1_backlog/ to work/2_current/ ← stale shadow! -18:32:21 [auto-assign] Assigning 'coder-1' to '503_…' in 2_current/ -``` - -The merge to master persisted (commit `41515e3b` is on master). Only the *pipeline state* got corrupted by the stale shadow being re-promoted. - -This is **distinct from bug 501** (which is about manual `stop_agent` not cancelling timers) but compounds it: 501 is about user-initiated stops, this is about successful pipeline completions. Both share a root cause — the rate-limit retry timer system has no notion of "this story has moved on, cancel any pending retries" — but the *consequences* of this bug are worse because the timer fires successfully and re-creates work that shouldn't exist. - -Also distinct from bug 502 (mergemaster stage-mismatch) which has been fixed. - -The deeper architectural problem this exposes: **`lifecycle.rs::move_item` and `move_story_to_current` are still on the legacy filesystem path** while the rest of the pipeline (491/492) has moved to DB-as-source-of-truth. The filesystem shadows in `.huskies/work/N_stage/` are supposed to be a *materialized rendering* of the DB state, not a parallel source of truth — but `move_item` treats them as authoritative. - -## How to Reproduce - -1. 
Take any story through the full pipeline successfully — coder runs, gates pass, mergemaster squashes to master, story moves to `5_done`. -2. While the story was in flight, ensure at least one coder run hit a hard rate limit (so a retry timer was scheduled). Bug 501 means that timer survives the successful completion. -3. Verify post-completion state: - - `SELECT stage FROM pipeline_items WHERE id = 'N_story_X';` returns `5_done` ✓ - - `ls .huskies/work/1_backlog/N_story_X.md` shows the file STILL EXISTS (the stale shadow) - - `cat .huskies/timers.json` shows a pending entry for `N_story_X` with a future `scheduled_at` -4. Wait for the timer to fire (default ~5 minutes after the last rate-limit hit). - -## Actual Result - -When the timer fires: -- The `[timer] Timer fired` log line appears for the already-done story -- `move_story_to_current` is called and finds the stale `1_backlog/N_story_X.md` shadow -- Lifecycle log: `[lifecycle] Moved 'N_…' from work/1_backlog/ to work/2_current/` -- Auto-assign sees the story in `2_current/` and spawns a coder -- Matrix bot sends `Done → Current` (and then later `Current → Current` etc.) stage notifications, spamming the room -- The new coder works on a story whose work is already shipped on master, burning tokens -- The story is now visible in BOTH `5_done` (via DB) AND `2_current` (via filesystem shadow), depending on which view the consumer reads -- The actual master commit is unaffected — the merge that already landed is still there. Only the *pipeline state* is corrupted. - -## Expected Result - -Successful pipeline completions must fully clean up the story's filesystem shadows. After `move_story_to_done` runs, `.huskies/work/1_backlog/N_story_X.md` (and any other stage shadow) for that story must not exist. - -Additionally — and this is the more general fix — the rate-limit retry timer system must cancel any pending timers for a story when that story successfully completes the pipeline. 
This is a sibling fix to bug 501 (which is about cancelling on manual stop): both manual stop and successful completion should mean "no more retries". - -The deepest fix is to migrate `lifecycle.rs::move_item` off the filesystem path and onto the DB path so the shadow files can be torn down entirely (or made strictly read-only renderings). That's a larger change that probably wants its own story, not a bug fix. - -## Acceptance Criteria - -- [ ] After a story moves to 5_done via the normal pipeline path (mergemaster success), the filesystem shadow at .huskies/work/1_backlog/N_story_X.md is removed (and any other stage shadows are also removed) -- [ ] When a story moves to 5_done, any pending rate-limit retry timer for that story is cancelled (the entry is removed from timers.json before the file is persisted) -- [ ] Regression test: simulate the full repro sequence — run a story through the pipeline with a mid-flight rate limit, complete the merge, fast-forward to the timer fire, assert (a) the story stays in 5_done, (b) no agent is spawned, (c) no Done→Current notification fires -- [ ] No regression in bug 501's fix for manual-stop timer cancellation -- [ ] Filesystem shadow cleanup is symmetric — also runs on delete_story, move_story to backlog, etc., not just the done path -- [ ] The matrix bot does not spam Done→Current notifications for stories whose work has actually completed diff --git a/crates/bft-json-crdt/src/json_crdt.rs b/crates/bft-json-crdt/src/json_crdt.rs index fcdb289f..e51f6166 100644 --- a/crates/bft-json-crdt/src/json_crdt.rs +++ b/crates/bft-json-crdt/src/json_crdt.rs @@ -264,11 +264,7 @@ impl BaseCrdt { // Bounded queue overflow: evict the oldest op from the largest // pending bucket before adding the new one. See CAUSAL_QUEUE_MAX. 
if self.queue_len >= CAUSAL_QUEUE_MAX { - if let Some(bucket) = self - .message_q - .values_mut() - .max_by_key(|v| v.len()) - { + if let Some(bucket) = self.message_q.values_mut().max_by_key(|v| v.len()) { if !bucket.is_empty() { bucket.remove(0); self.queue_len = self.queue_len.saturating_sub(1); diff --git a/crates/bft-json-crdt/src/lww_crdt.rs b/crates/bft-json-crdt/src/lww_crdt.rs index 744ef3c8..e8a39860 100644 --- a/crates/bft-json-crdt/src/lww_crdt.rs +++ b/crates/bft-json-crdt/src/lww_crdt.rs @@ -1,5 +1,5 @@ use crate::debug::DebugView; -use crate::json_crdt::{CrdtNode, OpState, JsonValue}; +use crate::json_crdt::{CrdtNode, JsonValue, OpState}; use crate::op::{join_path, print_path, Op, PathSegment, SequenceNumber}; use std::cmp::{max, Ordering}; use std::fmt::Debug; diff --git a/script/test b/script/test index fb69e4a2..a1c86185 100755 --- a/script/test +++ b/script/test @@ -16,7 +16,7 @@ fi echo "=== Checking Rust formatting ===" if cargo fmt --version &>/dev/null; then - cargo fmt --manifest-path "$PROJECT_ROOT/Cargo.toml" --check + cargo fmt --manifest-path "$PROJECT_ROOT/Cargo.toml" --all --check else echo "Skipping Rust formatting check (rustfmt not installed)" fi diff --git a/server/examples/pipeline_state_sketch_bare.rs b/server/examples/pipeline_state_sketch_bare.rs index 2dd241fa..dbfc0e47 100644 --- a/server/examples/pipeline_state_sketch_bare.rs +++ b/server/examples/pipeline_state_sketch_bare.rs @@ -226,10 +226,7 @@ pub enum PipelineEvent { #[derive(Debug, Clone, PartialEq, Eq)] pub enum TransitionError { /// The current stage doesn't accept this event. 
- InvalidTransition { - from_stage: String, - event: String, - }, + InvalidTransition { from_stage: String, event: String }, } // ── The transition function ────────────────────────────────────────────────── @@ -260,11 +257,23 @@ pub fn transition(state: Stage, event: PipelineEvent) -> Result Ok(Coding), (Coding, GatesStarted) => Ok(Qa), - (Coding, QaSkipped { feature_branch, commits_ahead }) => Ok(Merge { + ( + Coding, + QaSkipped { + feature_branch, + commits_ahead, + }, + ) => Ok(Merge { feature_branch, commits_ahead, }), - (Qa, GatesPassed { feature_branch, commits_ahead }) => Ok(Merge { + ( + Qa, + GatesPassed { + feature_branch, + commits_ahead, + }, + ) => Ok(Merge { feature_branch, commits_ahead, }), @@ -414,7 +423,9 @@ pub fn execution_transition( }), (Running { agent, .. }, HitRateLimit { resume_at }) - | (Pending { agent, .. }, HitRateLimit { resume_at }) => Ok(RateLimited { agent, resume_at }), + | (Pending { agent, .. }, HitRateLimit { resume_at }) => { + Ok(RateLimited { agent, resume_at }) + } (RateLimited { agent, .. }, SpawnedSuccessfully) => Ok(Running { agent, @@ -747,10 +758,7 @@ mod tests { assert!(matches!(e, ExecutionState::Running { .. })); let e = execution_transition(e, ExecutionEvent::Exited { exit_code: 0 }).unwrap(); - assert!(matches!( - e, - ExecutionState::Completed { exit_code: 0, .. } - )); + assert!(matches!(e, ExecutionState::Completed { exit_code: 0, .. })); } #[test] @@ -800,22 +808,20 @@ fn main() { // Helper to apply a transition + fire the bus. 
let mut current_stage = Stage::Backlog; - let step = |bus: &EventBus, - stage: &mut Stage, - event: PipelineEvent| - -> Result<(), TransitionError> { - let before = stage.clone(); - let after = transition(stage.clone(), event.clone())?; - bus.fire(TransitionFired { - story_id: story_id.clone(), - before, - after: after.clone(), - event, - at: Utc::now(), - }); - *stage = after; - Ok(()) - }; + let step = + |bus: &EventBus, stage: &mut Stage, event: PipelineEvent| -> Result<(), TransitionError> { + let before = stage.clone(); + let after = transition(stage.clone(), event.clone())?; + bus.fire(TransitionFired { + story_id: story_id.clone(), + before, + after: after.clone(), + event, + at: Utc::now(), + }); + *stage = after; + Ok(()) + }; println!("Initial: {current_stage:?}\n"); diff --git a/server/examples/pipeline_state_sketch_statig.rs b/server/examples/pipeline_state_sketch_statig.rs index 24f94041..34c2c66e 100644 --- a/server/examples/pipeline_state_sketch_statig.rs +++ b/server/examples/pipeline_state_sketch_statig.rs @@ -167,10 +167,9 @@ impl PipelineMachine { // transitions forward and doesn't read them — but they're available // to inspect via the State::Merge variant generated by the macro. 
match event { - PipelineEvent::MergeSucceeded { merge_commit } => Transition(State::done( - Utc::now(), - merge_commit.clone(), - )), + PipelineEvent::MergeSucceeded { merge_commit } => { + Transition(State::done(Utc::now(), merge_commit.clone())) + } PipelineEvent::MergeFailedFinal { reason } => Transition(State::archived( Utc::now(), ArchiveReason::MergeFailed { @@ -205,9 +204,7 @@ impl PipelineMachine { reason: reason.clone(), }, )), - PipelineEvent::Abandon => { - Transition(State::archived(now, ArchiveReason::Abandoned)) - } + PipelineEvent::Abandon => Transition(State::archived(now, ArchiveReason::Abandoned)), PipelineEvent::Supersede { by } => Transition(State::archived( now, ArchiveReason::Superseded { by: by.clone() }, @@ -230,12 +227,8 @@ impl PipelineMachine { let _ = merged_at; // currently unused; available for queries let _ = merge_commit; match event { - PipelineEvent::Accepted => { - Transition(State::archived(now, ArchiveReason::Completed)) - } - PipelineEvent::Abandon => { - Transition(State::archived(now, ArchiveReason::Abandoned)) - } + PipelineEvent::Accepted => Transition(State::archived(now, ArchiveReason::Completed)), + PipelineEvent::Abandon => Transition(State::archived(now, ArchiveReason::Abandoned)), PipelineEvent::Supersede { by } => Transition(State::archived( now, ArchiveReason::Superseded { by: by.clone() }, @@ -294,10 +287,7 @@ pub mod execution { #[derive(Default)] pub struct ExecutionMachine; - #[state_machine( - initial = "State::idle()", - state(derive(Debug, Clone, PartialEq, Eq)) - )] + #[state_machine(initial = "State::idle()", state(derive(Debug, Clone, PartialEq, Eq)))] impl ExecutionMachine { // ── Idle: no agent on this node is working on this story ────────── @@ -327,11 +317,9 @@ pub mod execution { ExecutionEvent::HitRateLimit { resume_at } => { Transition(State::rate_limited(agent.clone(), *resume_at)) } - ExecutionEvent::Exited { exit_code } => Transition(State::completed( - agent.clone(), - *exit_code, - Utc::now(), 
- )), + ExecutionEvent::Exited { exit_code } => { + Transition(State::completed(agent.clone(), *exit_code, Utc::now())) + } _ => Super, } } @@ -358,11 +346,9 @@ pub mod execution { ExecutionEvent::HitRateLimit { resume_at } => { Transition(State::rate_limited(agent.clone(), *resume_at)) } - ExecutionEvent::Exited { exit_code } => Transition(State::completed( - agent.clone(), - *exit_code, - Utc::now(), - )), + ExecutionEvent::Exited { exit_code } => { + Transition(State::completed(agent.clone(), *exit_code, Utc::now())) + } _ => Super, } } @@ -380,11 +366,9 @@ pub mod execution { let now = Utc::now(); Transition(State::running(agent.clone(), now, now)) } - ExecutionEvent::Exited { exit_code } => Transition(State::completed( - agent.clone(), - *exit_code, - Utc::now(), - )), + ExecutionEvent::Exited { exit_code } => { + Transition(State::completed(agent.clone(), *exit_code, Utc::now())) + } _ => Super, } } @@ -411,9 +395,7 @@ pub mod execution { #[superstate] fn any(event: &ExecutionEvent) -> Response { match event { - ExecutionEvent::Stopped | ExecutionEvent::Reset => { - Transition(State::idle()) - } + ExecutionEvent::Stopped | ExecutionEvent::Reset => Transition(State::idle()), _ => Handled, } } @@ -677,7 +659,10 @@ mod tests { assert!(matches!(em.state(), ExecState::Running { .. })); em.handle(&ExecutionEvent::Exited { exit_code: 0 }); - assert!(matches!(em.state(), ExecState::Completed { exit_code: 0, .. })); + assert!(matches!( + em.state(), + ExecState::Completed { exit_code: 0, .. 
} + )); } #[test] @@ -781,5 +766,8 @@ fn main() { }); println!(" before Unblock: {:?}", sm2.state()); sm2.handle(&PipelineEvent::Unblock); // silently ignored — no transition - println!(" after Unblock: {:?} (no change — Unblock is a no-op from Done)", sm2.state()); + println!( + " after Unblock: {:?} (no change — Unblock is a no-op from Done)", + sm2.state() + ); } diff --git a/server/src/agent_log.rs b/server/src/agent_log.rs index 9c54c351..3cb12731 100644 --- a/server/src/agent_log.rs +++ b/server/src/agent_log.rs @@ -6,7 +6,6 @@ use std::fs::{self, File, OpenOptions}; use std::io::{BufRead, BufReader, Write}; use std::path::{Path, PathBuf}; - /// A single line in the agent log file (JSONL format). #[derive(Debug, Serialize, Deserialize)] pub struct LogEntry { @@ -72,10 +71,7 @@ impl AgentLogWriter { /// Return the log directory for a story. fn log_dir(project_root: &Path, story_id: &str) -> PathBuf { - project_root - .join(".huskies") - .join("logs") - .join(story_id) + project_root.join(".huskies").join("logs").join(story_id) } /// Return the path to a specific log file. 
@@ -102,8 +98,8 @@ pub fn read_log(path: &Path) -> Result, String> { if trimmed.is_empty() { continue; } - let entry: LogEntry = serde_json::from_str(trimmed) - .map_err(|e| format!("Failed to parse log entry: {e}"))?; + let entry: LogEntry = + serde_json::from_str(trimmed).map_err(|e| format!("Failed to parse log entry: {e}"))?; entries.push(entry); } @@ -197,10 +193,7 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O Some("done") => Some(format!("{pfx} DONE")), Some("status") => { // Skip trivial running/started noise - let status = event - .get("status") - .and_then(|v| v.as_str()) - .unwrap_or("?"); + let status = event.get("status").and_then(|v| v.as_str()).unwrap_or("?"); match status { "running" | "started" => None, _ => Some(format!("{pfx} STATUS: {status}")), @@ -211,10 +204,7 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O match data.get("type").and_then(|v| v.as_str()) { Some("assistant") => { let mut parts: Vec = Vec::new(); - if let Some(arr) = data - .pointer("/message/content") - .and_then(|v| v.as_array()) - { + if let Some(arr) = data.pointer("/message/content").and_then(|v| v.as_array()) { for item in arr { match item.get("type").and_then(|v| v.as_str()) { Some("text") => { @@ -228,15 +218,11 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O } } Some("tool_use") => { - let name = item - .get("name") - .and_then(|v| v.as_str()) - .unwrap_or("?"); + let name = + item.get("name").and_then(|v| v.as_str()).unwrap_or("?"); let input = item .get("input") - .map(|v| { - serde_json::to_string(v).unwrap_or_default() - }) + .map(|v| serde_json::to_string(v).unwrap_or_default()) .unwrap_or_default(); let display = if input.len() > 200 { format!("{}...", &input[..200]) @@ -257,14 +243,9 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O } Some("user") => { let mut parts: Vec = Vec::new(); - if let Some(arr) = data - 
.pointer("/message/content") - .and_then(|v| v.as_array()) - { + if let Some(arr) = data.pointer("/message/content").and_then(|v| v.as_array()) { for item in arr { - if item.get("type").and_then(|v| v.as_str()) - != Some("tool_result") - { + if item.get("type").and_then(|v| v.as_str()) != Some("tool_result") { continue; } let content_str = match item.get("content") { @@ -316,11 +297,7 @@ pub fn read_log_as_readable_lines(path: &Path) -> Result, String> { /// /// Scans `.huskies/logs/{story_id}/` for files matching `{agent_name}-*.log` /// and returns the one with the most recent modification time. -pub fn find_latest_log( - project_root: &Path, - story_id: &str, - agent_name: &str, -) -> Option { +pub fn find_latest_log(project_root: &Path, story_id: &str, agent_name: &str) -> Option { let dir = log_dir(project_root, story_id); if !dir.is_dir() { return None; @@ -362,8 +339,7 @@ mod tests { let tmp = tempdir().unwrap(); let root = tmp.path(); - let _writer = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-abc123").unwrap(); + let _writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-abc123").unwrap(); let expected_path = root .join(".huskies") @@ -378,8 +354,7 @@ mod tests { let tmp = tempdir().unwrap(); let root = tmp.path(); - let mut writer = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-001").unwrap(); + let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-001").unwrap(); let event = AgentEvent::Status { story_id: "42_story_foo".to_string(), @@ -426,8 +401,7 @@ mod tests { let tmp = tempdir().unwrap(); let root = tmp.path(); - let mut writer = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-002").unwrap(); + let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-002").unwrap(); let events = vec![ AgentEvent::Status { @@ -472,10 +446,8 @@ mod tests { let tmp = tempdir().unwrap(); let root = tmp.path(); - let mut writer1 = - AgentLogWriter::new(root, 
"42_story_foo", "coder-1", "sess-aaa").unwrap(); - let mut writer2 = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-bbb").unwrap(); + let mut writer1 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-aaa").unwrap(); + let mut writer2 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-bbb").unwrap(); writer1 .write_event(&AgentEvent::Output { @@ -496,7 +468,10 @@ mod tests { let path1 = log_file_path(root, "42_story_foo", "coder-1", "sess-aaa"); let path2 = log_file_path(root, "42_story_foo", "coder-1", "sess-bbb"); - assert_ne!(path1, path2, "Different sessions should use different files"); + assert_ne!( + path1, path2, + "Different sessions should use different files" + ); let entries1 = read_log(&path1).unwrap(); let entries2 = read_log(&path2).unwrap(); @@ -513,8 +488,7 @@ mod tests { let root = tmp.path(); // Create two log files with a small delay - let mut writer1 = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-old").unwrap(); + let mut writer1 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-old").unwrap(); writer1 .write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), @@ -527,8 +501,7 @@ mod tests { // Touch the second file to ensure it's newer std::thread::sleep(std::time::Duration::from_millis(50)); - let mut writer2 = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-new").unwrap(); + let mut writer2 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-new").unwrap(); writer2 .write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), @@ -568,8 +541,7 @@ mod tests { drop(w1); std::thread::sleep(std::time::Duration::from_millis(10)); - let mut w2 = - AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-bbb").unwrap(); + let mut w2 = AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-bbb").unwrap(); w2.write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), agent_name: "mergemaster".to_string(), @@ -601,8 
+573,7 @@ mod tests { .unwrap(); drop(w1); - let mut w2 = - AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-b").unwrap(); + let mut w2 = AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-b").unwrap(); w2.write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), agent_name: "mergemaster".to_string(), @@ -704,7 +675,10 @@ mod tests { } }); let result = format_log_entry_as_text(ts, &event).unwrap(); - assert!(result.contains("TOOL: Read"), "should show tool call: {result}"); + assert!( + result.contains("TOOL: Read"), + "should show tool call: {result}" + ); assert!(result.contains("file_path"), "should show input: {result}"); } @@ -728,7 +702,10 @@ mod tests { } }); let result = format_log_entry_as_text(ts, &event).unwrap(); - assert!(result.contains("Now I will read the file."), "should show text: {result}"); + assert!( + result.contains("Now I will read the file."), + "should show text: {result}" + ); } #[test] @@ -743,7 +720,10 @@ mod tests { "event": {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "chunk"}} } }); - assert!(format_log_entry_as_text(ts, &event).is_none(), "stream events should be skipped"); + assert!( + format_log_entry_as_text(ts, &event).is_none(), + "stream events should be skipped" + ); } #[test] @@ -771,7 +751,11 @@ mod tests { let path = log_file_path(root, "42_story_foo", "coder-1", "sess-readable"); let lines = read_log_as_readable_lines(&path).unwrap(); assert_eq!(lines.len(), 2, "Should produce 2 readable lines"); - assert!(lines[0].contains("Let me read the file"), "first line: {}", lines[0]); + assert!( + lines[0].contains("Let me read the file"), + "first line: {}", + lines[0] + ); assert!(lines[1].contains("DONE"), "second line: {}", lines[1]); } @@ -802,7 +786,10 @@ mod tests { }; // File should still exist and be readable - assert!(path.exists(), "Log file should persist after writer is dropped"); + assert!( + path.exists(), + "Log file should persist after writer is 
dropped" + ); let entries = read_log(&path).unwrap(); assert_eq!(entries.len(), 1); assert_eq!(entries[0].event["type"], "status"); diff --git a/server/src/agent_mode.rs b/server/src/agent_mode.rs index 81a99d17..cba3940e 100644 --- a/server/src/agent_mode.rs +++ b/server/src/agent_mode.rs @@ -51,14 +51,20 @@ pub async fn run( println!("\x1b[96;1m[agent-mode]\x1b[0m Starting headless build agent"); println!("\x1b[96;1m[agent-mode]\x1b[0m Rendezvous: {rendezvous_url}"); - println!("\x1b[96;1m[agent-mode]\x1b[0m Project: {}", project_root.display()); + println!( + "\x1b[96;1m[agent-mode]\x1b[0m Project: {}", + project_root.display() + ); // Validate project config. let config = ProjectConfig::load(&project_root).unwrap_or_else(|e| { eprintln!("error: invalid project config: {e}"); std::process::exit(1); }); - slog!("[agent-mode] Loaded config with {} agents", config.agent.len()); + slog!( + "[agent-mode] Loaded config with {} agents", + config.agent.len() + ); // Event bus for pipeline lifecycle events. let (watcher_tx, _) = broadcast::channel::(1024); @@ -79,9 +85,7 @@ pub async fn run( { let story_id = evt.story_id.clone(); tokio::task::spawn_blocking(move || { - if let Err(e) = - crate::worktree::prune_worktree_sync(&root, &story_id) - { + if let Err(e) = crate::worktree::prune_worktree_sync(&root, &story_id) { slog!("[agent-mode] worktree prune failed for {story_id}: {e}"); } }); @@ -113,9 +117,7 @@ pub async fn run( if let watcher::WatcherEvent::WorkItem { ref stage, .. } = event && matches!(stage.as_str(), "2_current" | "3_qa" | "4_merge") { - slog!( - "[agent-mode] CRDT transition in {stage}/; triggering auto-assign." 
- ); + slog!("[agent-mode] CRDT transition in {stage}/; triggering auto-assign."); auto_agents.auto_assign_available_work(&auto_root).await; } } diff --git a/server/src/agents/gates.rs b/server/src/agents/gates.rs index 990cc52f..074daeaf 100644 --- a/server/src/agents/gates.rs +++ b/server/src/agents/gates.rs @@ -36,9 +36,7 @@ pub(crate) fn worktree_has_committed_work(wt_path: &Path) -> bool { .current_dir(wt_path) .output(); match output { - Ok(out) if out.status.success() => { - !String::from_utf8_lossy(&out.stdout).trim().is_empty() - } + Ok(out) if out.status.success() => !String::from_utf8_lossy(&out.stdout).trim().is_empty(), _ => false, } } @@ -258,14 +256,21 @@ mod tests { let script_dir = path.join("script"); fs::create_dir_all(&script_dir).unwrap(); let script_test = script_dir.join("test"); - fs::write(&script_test, "#!/usr/bin/env bash\necho 'all tests passed'\nexit 0\n").unwrap(); + fs::write( + &script_test, + "#!/usr/bin/env bash\necho 'all tests passed'\nexit 0\n", + ) + .unwrap(); let mut perms = fs::metadata(&script_test).unwrap().permissions(); perms.set_mode(0o755); fs::set_permissions(&script_test, perms).unwrap(); let (passed, output) = run_project_tests(path).unwrap(); assert!(passed, "script/test exiting 0 should pass"); - assert!(output.contains("script/test"), "output should mention script/test"); + assert!( + output.contains("script/test"), + "output should mention script/test" + ); } #[cfg(unix)] @@ -286,7 +291,10 @@ mod tests { let (passed, output) = run_project_tests(path).unwrap(); assert!(!passed, "script/test exiting 1 should fail"); - assert!(output.contains("script/test"), "output should mention script/test"); + assert!( + output.contains("script/test"), + "output should mention script/test" + ); } // ── run_coverage_gate tests ─────────────────────────────────────────────── @@ -347,7 +355,10 @@ mod tests { let script = script_dir.join("test_coverage"); { let mut f = fs::File::create(&script).unwrap(); - 
f.write_all(b"#!/usr/bin/env bash\necho 'FAIL: Coverage 40% is below threshold 80%'\nexit 1\n").unwrap(); + f.write_all( + b"#!/usr/bin/env bash\necho 'FAIL: Coverage 40% is below threshold 80%'\nexit 1\n", + ) + .unwrap(); f.sync_all().unwrap(); } let mut perms = fs::metadata(&script).unwrap().permissions(); diff --git a/server/src/agents/lifecycle.rs b/server/src/agents/lifecycle.rs index a954f2b6..c133fdac 100644 --- a/server/src/agents/lifecycle.rs +++ b/server/src/agents/lifecycle.rs @@ -37,9 +37,7 @@ fn move_item<'a>( // Use the typed projection for compile-safe stage comparison. if let Ok(Some(typed_item)) = crate::pipeline_state::read_typed(story_id) { let current_dir = typed_item.stage.dir_name(); - if current_dir == target_dir - || extra_done_dirs.contains(¤t_dir) - { + if current_dir == target_dir || extra_done_dirs.contains(¤t_dir) { return Ok(None); // Idempotent: already there. } @@ -77,11 +75,7 @@ fn move_item<'a>( })) }; - crate::db::move_item_stage( - story_id, - target_dir, - transform.as_ref().map(|f| f.as_ref()), - ); + crate::db::move_item_stage(story_id, target_dir, transform.as_ref().map(|f| f.as_ref())); slog!("[lifecycle] Moved '{story_id}' from work/{src_dir}/ to work/{target_dir}/"); return Ok(Some(src_dir)); @@ -121,7 +115,16 @@ fn move_item<'a>( /// that has already advanced past the coding stage. /// Idempotent: if already in `2_current/`, returns Ok. If not found, logs and returns Ok. 
pub fn move_story_to_current(project_root: &Path, story_id: &str) -> Result<(), String> { - move_item(project_root, story_id, &["1_backlog"], "2_current", &[], true, &[]).map(|_| ()) + move_item( + project_root, + story_id, + &["1_backlog"], + "2_current", + &[], + true, + &[], + ) + .map(|_| ()) } /// Check whether a feature branch `feature/story-{story_id}` exists and has @@ -205,12 +208,25 @@ pub fn move_story_to_qa(project_root: &Path, story_id: &str) -> Result<(), Strin } /// Move a story from `work/3_qa/` back to `work/2_current/`, clearing `review_hold` and writing notes. -pub fn reject_story_from_qa(project_root: &Path, story_id: &str, notes: &str) -> Result<(), String> { - let moved = move_item(project_root, story_id, &["3_qa"], "2_current", &[], false, &["review_hold"])?; +pub fn reject_story_from_qa( + project_root: &Path, + story_id: &str, + notes: &str, +) -> Result<(), String> { + let moved = move_item( + project_root, + story_id, + &["3_qa"], + "2_current", + &[], + false, + &["review_hold"], + )?; if moved.is_some() && !notes.is_empty() { // Append rejection notes to the stored content. if let Some(content) = crate::db::read_content(story_id) { - let updated = crate::io::story_metadata::write_rejection_notes_to_content(&content, notes); + let updated = + crate::io::story_metadata::write_rejection_notes_to_content(&content, notes); crate::db::write_content(story_id, &updated); // Re-sync to DB. crate::db::write_item_with_content(story_id, "2_current", &updated); @@ -251,8 +267,16 @@ pub fn move_story_to_stage( let all_dirs: Vec<&str> = STAGES.iter().map(|(_, dir)| *dir).collect(); - match move_item(project_root, story_id, &all_dirs, target_dir, &[], false, &[]) - .map_err(|_| format!("Work item '{story_id}' not found in any pipeline stage."))? + match move_item( + project_root, + story_id, + &all_dirs, + target_dir, + &[], + false, + &[], + ) + .map_err(|_| format!("Work item '{story_id}' not found in any pipeline stage."))? 
{ Some(src_dir) => { let from_stage = STAGES diff --git a/server/src/agents/merge.rs b/server/src/agents/merge.rs index fc4757ae..e1f5419e 100644 --- a/server/src/agents/merge.rs +++ b/server/src/agents/merge.rs @@ -248,7 +248,9 @@ pub(crate) fn run_squash_merge( .output() .map_err(|e| format!("Failed to check merge diff: {e}"))?; let changed_files = String::from_utf8_lossy(&diff_check.stdout); - let has_code_changes = changed_files.lines().any(|f| !f.starts_with(".huskies/work/")); + let has_code_changes = changed_files + .lines() + .any(|f| !f.starts_with(".huskies/work/")); if !has_code_changes { all_output.push_str( "=== Merge commit contains only .huskies/ file moves, no code changes ===\n", @@ -423,7 +425,14 @@ pub(crate) fn run_squash_merge( // Exclude .huskies/work/ (pipeline file moves) but keep .huskies/project.toml // and other config files which are legitimate deliverables. let diff_stat = Command::new("git") - .args(["diff", "--stat", "HEAD~1..HEAD", "--", ".", ":(exclude).huskies/work"]) + .args([ + "diff", + "--stat", + "HEAD~1..HEAD", + "--", + ".", + ":(exclude).huskies/work", + ]) .current_dir(project_root) .output() .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string()) diff --git a/server/src/agents/pool/auto_assign/auto_assign.rs b/server/src/agents/pool/auto_assign/auto_assign.rs index b406722b..52431713 100644 --- a/server/src/agents/pool/auto_assign/auto_assign.rs +++ b/server/src/agents/pool/auto_assign/auto_assign.rs @@ -64,8 +64,7 @@ impl AgentPool { } // All deps met — promote from backlog to current. 
slog!("[auto-assign] Story '{story_id}' deps met; promoting from backlog to current."); - if let Err(e) = - crate::agents::lifecycle::move_story_to_current(project_root, story_id) + if let Err(e) = crate::agents::lifecycle::move_story_to_current(project_root, story_id) { slog!("[auto-assign] Failed to promote '{story_id}' to current: {e}"); } @@ -160,10 +159,12 @@ impl AgentPool { ); let _ = crate::io::story_metadata::write_blocked(&story_path); } - let _ = self.watcher_tx.send(crate::io::watcher::WatcherEvent::StoryBlocked { - story_id: story_id.to_string(), - reason: empty_diff_reason.to_string(), - }); + let _ = self + .watcher_tx + .send(crate::io::watcher::WatcherEvent::StoryBlocked { + story_id: story_id.to_string(), + reason: empty_diff_reason.to_string(), + }); continue; } @@ -570,9 +571,12 @@ mod tests { pool.auto_assign_available_work(root).await; let agents = pool.agents.lock().unwrap(); - let has_pending = agents - .values() - .any(|a| matches!(a.status, crate::agents::AgentStatus::Pending | crate::agents::AgentStatus::Running)); + let has_pending = agents.values().any(|a| { + matches!( + a.status, + crate::agents::AgentStatus::Pending | crate::agents::AgentStatus::Running + ) + }); assert!( has_pending, "story with all deps done should be auto-assigned" diff --git a/server/src/agents/pool/auto_assign/reconcile.rs b/server/src/agents/pool/auto_assign/reconcile.rs index ec7f4284..89c61e12 100644 --- a/server/src/agents/pool/auto_assign/reconcile.rs +++ b/server/src/agents/pool/auto_assign/reconcile.rs @@ -161,17 +161,19 @@ impl AgentPool { match qa_mode { crate::io::story_metadata::QaMode::Server => { - if let Err(e) = - crate::agents::move_story_to_merge(project_root, story_id) - { - eprintln!("[startup:reconcile] Failed to move '{story_id}' to 4_merge/: {e}"); + if let Err(e) = crate::agents::move_story_to_merge(project_root, story_id) { + eprintln!( + "[startup:reconcile] Failed to move '{story_id}' to 4_merge/: {e}" + ); let _ = 
progress_tx.send(ReconciliationEvent { story_id: story_id.clone(), status: "failed".to_string(), message: format!("Failed to advance to merge: {e}"), }); } else { - eprintln!("[startup:reconcile] Moved '{story_id}' → 4_merge/ (qa: server)."); + eprintln!( + "[startup:reconcile] Moved '{story_id}' → 4_merge/ (qa: server)." + ); let _ = progress_tx.send(ReconciliationEvent { story_id: story_id.clone(), status: "advanced".to_string(), @@ -180,10 +182,10 @@ impl AgentPool { } } crate::io::story_metadata::QaMode::Agent => { - if let Err(e) = - crate::agents::move_story_to_qa(project_root, story_id) - { - eprintln!("[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}"); + if let Err(e) = crate::agents::move_story_to_qa(project_root, story_id) { + eprintln!( + "[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}" + ); let _ = progress_tx.send(ReconciliationEvent { story_id: story_id.clone(), status: "failed".to_string(), @@ -199,10 +201,10 @@ impl AgentPool { } } crate::io::story_metadata::QaMode::Human => { - if let Err(e) = - crate::agents::move_story_to_qa(project_root, story_id) - { - eprintln!("[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}"); + if let Err(e) = crate::agents::move_story_to_qa(project_root, story_id) { + eprintln!( + "[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}" + ); let _ = progress_tx.send(ReconciliationEvent { story_id: story_id.clone(), status: "failed".to_string(), @@ -219,7 +221,9 @@ impl AgentPool { "[startup:reconcile] Failed to set review_hold on '{story_id}': {e}" ); } - eprintln!("[startup:reconcile] Moved '{story_id}' → 3_qa/ (qa: human — holding for review)."); + eprintln!( + "[startup:reconcile] Moved '{story_id}' → 3_qa/ (qa: human — holding for review)." 
+ ); let _ = progress_tx.send(ReconciliationEvent { story_id: story_id.clone(), status: "review_hold".to_string(), @@ -284,9 +288,7 @@ impl AgentPool { let story_path = project_root .join(".huskies/work/3_qa") .join(format!("{story_id}.md")); - if let Err(e) = - crate::io::story_metadata::write_review_hold(&story_path) - { + if let Err(e) = crate::io::story_metadata::write_review_hold(&story_path) { eprintln!( "[startup:reconcile] Failed to set review_hold on '{story_id}': {e}" ); diff --git a/server/src/agents/pool/auto_assign/scan.rs b/server/src/agents/pool/auto_assign/scan.rs index ce821b17..efd414c4 100644 --- a/server/src/agents/pool/auto_assign/scan.rs +++ b/server/src/agents/pool/auto_assign/scan.rs @@ -31,7 +31,9 @@ pub(super) fn scan_stage_items(project_root: &Path, stage_dir: &str) -> Vec bool { +pub(super) fn has_unmet_dependencies(project_root: &Path, stage_dir: &str, story_id: &str) -> bool { // Prefer CRDT-based check. let crdt_deps = crate::crdt_state::check_unmet_deps_crdt(story_id); if !crdt_deps.is_empty() { return true; } // If the CRDT had the item and returned empty deps, it means all are met. - if crate::pipeline_state::read_typed(story_id).ok().flatten().is_some() { + if crate::pipeline_state::read_typed(story_id) + .ok() + .flatten() + .is_some() + { return false; } // Fallback: filesystem check (CRDT not initialised or item not yet in CRDT). @@ -82,7 +82,11 @@ pub(super) fn check_archived_dependencies( story_id: &str, ) -> Vec { // Prefer CRDT-based check when the item is known to CRDT. - if crate::pipeline_state::read_typed(story_id).ok().flatten().is_some() { + if crate::pipeline_state::read_typed(story_id) + .ok() + .flatten() + .is_some() + { return crate::crdt_state::check_archived_deps_crdt(story_id); } // Fallback: filesystem. 
@@ -146,7 +150,11 @@ mod tests { "---\nname: Blocked\ndepends_on: [999]\n---\n", ) .unwrap(); - assert!(has_unmet_dependencies(tmp.path(), "2_current", "10_story_blocked")); + assert!(has_unmet_dependencies( + tmp.path(), + "2_current", + "10_story_blocked" + )); } #[test] @@ -162,7 +170,11 @@ mod tests { "---\nname: Ok\ndepends_on: [999]\n---\n", ) .unwrap(); - assert!(!has_unmet_dependencies(tmp.path(), "2_current", "10_story_ok")); + assert!(!has_unmet_dependencies( + tmp.path(), + "2_current", + "10_story_ok" + )); } #[test] @@ -171,7 +183,11 @@ mod tests { let current = tmp.path().join(".huskies/work/2_current"); std::fs::create_dir_all(&current).unwrap(); std::fs::write(current.join("5_story_free.md"), "---\nname: Free\n---\n").unwrap(); - assert!(!has_unmet_dependencies(tmp.path(), "2_current", "5_story_free")); + assert!(!has_unmet_dependencies( + tmp.path(), + "2_current", + "5_story_free" + )); } // ── Bug 503: archived-dep visibility ───────────────────────────────────── @@ -184,7 +200,11 @@ mod tests { let archived = tmp.path().join(".huskies/work/6_archived"); std::fs::create_dir_all(&backlog).unwrap(); std::fs::create_dir_all(&archived).unwrap(); - std::fs::write(archived.join("500_spike_crdt.md"), "---\nname: CRDT Spike\n---\n").unwrap(); + std::fs::write( + archived.join("500_spike_crdt.md"), + "---\nname: CRDT Spike\n---\n", + ) + .unwrap(); std::fs::write( backlog.join("503_story_dependent.md"), "---\nname: Dependent\ndepends_on: [500]\n---\n", diff --git a/server/src/agents/pool/auto_assign/watchdog.rs b/server/src/agents/pool/auto_assign/watchdog.rs index 2fa40c95..c87bde0a 100644 --- a/server/src/agents/pool/auto_assign/watchdog.rs +++ b/server/src/agents/pool/auto_assign/watchdog.rs @@ -84,8 +84,8 @@ impl AgentPool { #[cfg(test)] mod tests { - use super::*; use super::super::super::{AgentPool, composite_key}; + use super::*; // ── check_orphaned_agents return value tests (bug 161) ────────────────── diff --git a/server/src/agents/pool/mod.rs
b/server/src/agents/pool/mod.rs index e7d80c08..7243f987 100644 --- a/server/src/agents/pool/mod.rs +++ b/server/src/agents/pool/mod.rs @@ -1,12 +1,12 @@ //! Agent pool — manages the set of active agents across all pipeline stages. mod auto_assign; mod pipeline; -mod start; -mod stop; -mod wait; mod process; mod query; +mod start; +mod stop; mod types; +mod wait; mod worktree; #[cfg(test)] @@ -68,10 +68,15 @@ impl AgentPool { Err(broadcast::error::RecvError::Lagged(_)) => continue, }; let (story_id, agent_name) = match &event { - WatcherEvent::RateLimitWarning { story_id, agent_name } - | WatcherEvent::RateLimitHardBlock { story_id, agent_name, .. } => { - (story_id.clone(), agent_name.clone()) + WatcherEvent::RateLimitWarning { + story_id, + agent_name, } + | WatcherEvent::RateLimitHardBlock { + story_id, + agent_name, + .. + } => (story_id.clone(), agent_name.clone()), _ => continue, }; let key = composite_key(&story_id, &agent_name); diff --git a/server/src/agents/pool/pipeline/advance.rs b/server/src/agents/pool/pipeline/advance.rs index 06f36562..8c75007b 100644 --- a/server/src/agents/pool/pipeline/advance.rs +++ b/server/src/agents/pool/pipeline/advance.rs @@ -1,18 +1,15 @@ //! Pipeline advance — moves stories forward through pipeline stages after agent completion. use crate::config::ProjectConfig; +use crate::io::watcher::WatcherEvent; use crate::slog; use crate::slog_error; use crate::slog_warn; -use crate::io::watcher::WatcherEvent; use std::collections::HashMap; use std::path::{Path, PathBuf}; use std::sync::{Arc, Mutex}; use tokio::sync::broadcast; -use super::super::super::{ - CompletionReport, PipelineStage, - agent_config_stage, pipeline_stage, -}; +use super::super::super::{CompletionReport, PipelineStage, agent_config_stage, pipeline_stage}; use super::super::{AgentPool, StoryAgent}; impl AgentPool { @@ -66,14 +63,16 @@ impl AgentPool { "[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \ qa: server — moving directly to merge." 
); - if let Err(e) = - crate::agents::lifecycle::move_story_to_merge(&project_root, story_id) - { + if let Err(e) = crate::agents::lifecycle::move_story_to_merge( + &project_root, + story_id, + ) { slog_error!( "[pipeline] Failed to move '{story_id}' to 4_merge/: {e}" ); } else { - self.start_mergemaster_or_block(&project_root, story_id).await; + self.start_mergemaster_or_block(&project_root, story_id) + .await; } } crate::io::story_metadata::QaMode::Agent => { @@ -81,13 +80,17 @@ impl AgentPool { "[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \ qa: agent — moving to QA." ); - if let Err(e) = crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) { + if let Err(e) = + crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) + { slog_error!("[pipeline] Failed to move '{story_id}' to 3_qa/: {e}"); } else if let Err(e) = self .start_agent(&project_root, story_id, Some("qa"), None, None) .await { - slog_error!("[pipeline] Failed to start qa agent for '{story_id}': {e}"); + slog_error!( + "[pipeline] Failed to start qa agent for '{story_id}': {e}" + ); } } crate::io::story_metadata::QaMode::Human => { @@ -95,7 +98,9 @@ impl AgentPool { "[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \ qa: human — holding for human review." ); - if let Err(e) = crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) { + if let Err(e) = + crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) + { slog_error!("[pipeline] Failed to move '{story_id}' to 3_qa/: {e}"); } else { write_review_hold_to_store(story_id); @@ -104,7 +109,8 @@ impl AgentPool { } } else { // Increment retry count and check if blocked. - if let Some(reason) = should_block_story(story_id, config.max_retries, "coder") { + if let Some(reason) = should_block_story(story_id, config.max_retries, "coder") + { // Story has exceeded retry limit — do not restart. 
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked { story_id: story_id.to_string(), @@ -144,13 +150,14 @@ impl AgentPool { .clone() .unwrap_or_else(|| project_root.clone()); let cp = coverage_path.clone(); - let coverage_result = - tokio::task::spawn_blocking(move || crate::agents::gates::run_coverage_gate(&cp)) - .await - .unwrap_or_else(|e| { - slog_warn!("[pipeline] Coverage gate task panicked: {e}"); - Ok((false, format!("Coverage gate task panicked: {e}"))) - }); + let coverage_result = tokio::task::spawn_blocking(move || { + crate::agents::gates::run_coverage_gate(&cp) + }) + .await + .unwrap_or_else(|e| { + slog_warn!("[pipeline] Coverage gate task panicked: {e}"); + Ok((false, format!("Coverage gate task panicked: {e}"))) + }); let (coverage_passed, coverage_output) = match coverage_result { Ok(pair) => pair, Err(e) => (false, e), @@ -184,17 +191,21 @@ impl AgentPool { "[pipeline] QA passed gates and coverage for '{story_id}'. \ Moving directly to merge." ); - if let Err(e) = - crate::agents::lifecycle::move_story_to_merge(&project_root, story_id) - { + if let Err(e) = crate::agents::lifecycle::move_story_to_merge( + &project_root, + story_id, + ) { slog_error!( "[pipeline] Failed to move '{story_id}' to 4_merge/: {e}" ); } else { - self.start_mergemaster_or_block(&project_root, story_id).await; + self.start_mergemaster_or_block(&project_root, story_id) + .await; } } - } else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa-coverage") { + } else if let Some(reason) = + should_block_story(story_id, config.max_retries, "qa-coverage") + { // Story has exceeded retry limit — do not restart. 
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked { story_id: story_id.to_string(), @@ -217,7 +228,8 @@ impl AgentPool { slog_error!("[pipeline] Failed to restart qa for '{story_id}': {e}"); } } - } else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa") { + } else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa") + { // Story has exceeded retry limit — do not restart. let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked { story_id: story_id.to_string(), @@ -272,13 +284,14 @@ impl AgentPool { "[pipeline] Mergemaster completed for '{story_id}'. Running post-merge tests on master." ); let root = project_root.clone(); - let test_result = - tokio::task::spawn_blocking(move || crate::agents::gates::run_project_tests(&root)) - .await - .unwrap_or_else(|e| { - slog_warn!("[pipeline] Post-merge test task panicked: {e}"); - Ok((false, format!("Test task panicked: {e}"))) - }); + let test_result = tokio::task::spawn_blocking(move || { + crate::agents::gates::run_project_tests(&root) + }) + .await + .unwrap_or_else(|e| { + slog_warn!("[pipeline] Post-merge test task panicked: {e}"); + Ok((false, format!("Test task panicked: {e}"))) + }); let (passed, output) = match test_result { Ok(pair) => pair, Err(e) => (false, e), @@ -309,7 +322,9 @@ impl AgentPool { slog!( "[pipeline] Story '{story_id}' done. Worktree preserved for inspection." ); - } else if let Some(reason) = should_block_story(story_id, config.max_retries, "mergemaster") { + } else if let Some(reason) = + should_block_story(story_id, config.max_retries, "mergemaster") + { // Story has exceeded retry limit — do not restart. 
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked { story_id: story_id.to_string(), @@ -564,7 +579,10 @@ mod tests { ) .unwrap(); crate::db::ensure_content_store(); - crate::db::write_content("9909_story_agent_qa", "---\nname: Test\nqa: agent\n---\ntest"); + crate::db::write_content( + "9909_story_agent_qa", + "---\nname: Test\nqa: agent\n---\ntest", + ); let pool = AgentPool::new_test(3001); pool.run_pipeline_advance( @@ -758,10 +776,26 @@ stage = "qa" let root = tmp.path(); // Init a bare git repo on master with one empty commit. - Command::new("git").args(["init"]).current_dir(root).output().unwrap(); - Command::new("git").args(["config", "user.email", "test@test.com"]).current_dir(root).output().unwrap(); - Command::new("git").args(["config", "user.name", "Test"]).current_dir(root).output().unwrap(); - Command::new("git").args(["commit", "--allow-empty", "-m", "init"]).current_dir(root).output().unwrap(); + Command::new("git") + .args(["init"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["config", "user.email", "test@test.com"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["config", "user.name", "Test"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["commit", "--allow-empty", "-m", "init"]) + .current_dir(root) + .output() + .unwrap(); // Create a feature branch that points at master HEAD (zero commits ahead). // This replicates the incident where the worktree was reset to master. 
@@ -775,7 +809,11 @@ stage = "qa" let current = root.join(".huskies/work/2_current"); fs::create_dir_all(&current).unwrap(); fs::create_dir_all(root.join(".huskies/work/4_merge")).unwrap(); - fs::write(current.join("9919_story_no_commits.md"), "---\nname: Test\n---\n").unwrap(); + fs::write( + current.join("9919_story_no_commits.md"), + "---\nname: Test\n---\n", + ) + .unwrap(); crate::db::ensure_content_store(); crate::db::write_content("9919_story_no_commits", "---\nname: Test\n---\n"); @@ -835,8 +873,8 @@ stage = "qa" #[tokio::test] async fn pipeline_advance_picks_up_waiting_qa_stories_after_completion() { - use std::fs; use super::super::super::auto_assign::is_agent_free; + use std::fs; let tmp = tempfile::tempdir().unwrap(); let root = tmp.path(); @@ -908,8 +946,7 @@ stage = "qa" // After pipeline advance, auto_assign should have started QA on story 293. let agents = pool.agents.lock().unwrap(); let qa_on_293 = agents.values().any(|a| { - a.agent_name == "qa" - && matches!(a.status, AgentStatus::Pending | AgentStatus::Running) + a.agent_name == "qa" && matches!(a.status, AgentStatus::Pending | AgentStatus::Running) }); assert!( qa_on_293, @@ -940,10 +977,26 @@ stage = "qa" let root = tmp.path(); // Init a git repo so post-merge tests would pass if they ran.
- Command::new("git").args(["init"]).current_dir(root).output().unwrap(); - Command::new("git").args(["config", "user.email", "test@test.com"]).current_dir(root).output().unwrap(); - Command::new("git").args(["config", "user.name", "Test"]).current_dir(root).output().unwrap(); - Command::new("git").args(["commit", "--allow-empty", "-m", "init"]).current_dir(root).output().unwrap(); + Command::new("git") + .args(["init"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["config", "user.email", "test@test.com"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["config", "user.name", "Test"]) + .current_dir(root) + .output() + .unwrap(); + Command::new("git") + .args(["commit", "--allow-empty", "-m", "init"]) + .current_dir(root) + .output() + .unwrap(); // Set up pipeline dirs. fs::create_dir_all(root.join(".huskies/work/5_done")).unwrap(); diff --git a/server/src/agents/pool/pipeline/completion.rs b/server/src/agents/pool/pipeline/completion.rs index 5dde2064..2beedcd0 100644 --- a/server/src/agents/pool/pipeline/completion.rs +++ b/server/src/agents/pool/pipeline/completion.rs @@ -1,11 +1,13 @@ //! Agent completion handling — processes exit results and triggers pipeline advancement. -use crate::slog; use crate::io::watcher::WatcherEvent; +use crate::slog; use std::collections::HashMap; use std::sync::{Arc, Mutex}; use tokio::sync::broadcast; -use super::super::super::{AgentEvent, AgentStatus, CompletionReport, PipelineStage, pipeline_stage}; +use super::super::super::{ + AgentEvent, AgentStatus, CompletionReport, PipelineStage, pipeline_stage, +}; use super::super::{AgentPool, StoryAgent, composite_key}; use super::advance::spawn_pipeline_advance; @@ -207,7 +209,10 @@ pub(in crate::agents::pool) async fn run_server_owned_completion( // hold the build lock while gates try to run. 
if let Some(wt_path) = worktree_path.as_ref() && let Ok(output) = std::process::Command::new("pgrep") - .args(["-f", &format!("--manifest-path {}/Cargo.toml", wt_path.display())]) + .args([ + "-f", + &format!("--manifest-path {}/Cargo.toml", wt_path.display()), + ]) .output() { let pids = String::from_utf8_lossy(&output.stdout); @@ -216,7 +221,9 @@ pub(in crate::agents::pool) async fn run_server_owned_completion( crate::slog!( "[agents] Killing stale cargo process (pid {pid}) for '{story_id}' before running gates" ); - unsafe { libc::kill(pid, libc::SIGKILL); } + unsafe { + libc::kill(pid, libc::SIGKILL); + } } } } @@ -311,8 +318,8 @@ pub(in crate::agents::pool) async fn run_server_owned_completion( #[cfg(test)] mod tests { - use super::*; use super::super::super::AgentPool; + use super::*; use crate::agents::{AgentEvent, AgentStatus, CompletionReport}; use std::path::PathBuf; use std::process::Command; diff --git a/server/src/agents/pool/pipeline/merge.rs b/server/src/agents/pool/pipeline/merge.rs index 85fa8353..ac1e068a 100644 --- a/server/src/agents/pool/pipeline/merge.rs +++ b/server/src/agents/pool/pipeline/merge.rs @@ -85,10 +85,11 @@ impl AgentPool { let sid = story_id.to_string(); let br = branch.clone(); - let merge_result = - tokio::task::spawn_blocking(move || crate::agents::merge::run_squash_merge(&root, &br, &sid)) - .await - .map_err(|e| format!("Merge task panicked: {e}"))??; + let merge_result = tokio::task::spawn_blocking(move || { + crate::agents::merge::run_squash_merge(&root, &br, &sid) + }) + .await + .map_err(|e| format!("Merge task panicked: {e}"))??; if !merge_result.success { return Ok(crate::agents::merge::MergeReport { @@ -185,8 +186,8 @@ impl AgentPool { #[cfg(test)] mod tests { - use super::*; use super::super::super::AgentPool; + use super::*; use crate::agents::merge::{MergeJob, MergeJobStatus}; use std::process::Command; diff --git a/server/src/agents/pool/process.rs b/server/src/agents/pool/process.rs index cd8b4bf5..d36064b6 
100644 --- a/server/src/agents/pool/process.rs +++ b/server/src/agents/pool/process.rs @@ -34,7 +34,11 @@ impl AgentPool { /// Test helper: inject a child killer into the registry. #[cfg(test)] - pub fn inject_child_killer(&self, key: &str, killer: Box) { + pub fn inject_child_killer( + &self, + key: &str, + killer: Box, + ) { let mut killers = self.child_killers.lock().unwrap(); killers.insert(key.to_string(), killer); } diff --git a/server/src/agents/pool/query.rs b/server/src/agents/pool/query.rs index fed99d81..8025ba4d 100644 --- a/server/src/agents/pool/query.rs +++ b/server/src/agents/pool/query.rs @@ -4,8 +4,8 @@ use std::path::PathBuf; use tokio::sync::broadcast; use super::super::{AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage}; -use super::types::{agent_info_from_entry, composite_key}; use super::AgentPool; +use super::types::{agent_info_from_entry, composite_key}; impl AgentPool { /// Return the names of configured agents for `stage` that are not currently diff --git a/server/src/agents/pool/start.rs b/server/src/agents/pool/start.rs index 284d7c1c..bb815bb3 100644 --- a/server/src/agents/pool/start.rs +++ b/server/src/agents/pool/start.rs @@ -6,14 +6,15 @@ use std::path::Path; use std::sync::{Arc, Mutex}; use tokio::sync::broadcast; +use super::super::runtime::{ + AgentRuntime, ClaudeCodeRuntime, GeminiRuntime, OpenAiRuntime, RuntimeContext, +}; use super::super::{ - AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage, - pipeline_stage, + AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage, pipeline_stage, }; use super::types::{PendingGuard, StoryAgent, composite_key}; -use super::{AgentPool, auto_assign}; use super::worktree::find_active_story_stage; -use super::super::runtime::{AgentRuntime, ClaudeCodeRuntime, GeminiRuntime, OpenAiRuntime, RuntimeContext}; +use super::{AgentPool, auto_assign}; impl AgentPool { /// Start an agent for a story: load config, create worktree, spawn agent. 
@@ -102,7 +103,9 @@ impl AgentPool { // the auto_assign path (bug 379). let front_matter_agent: Option = if agent_name.is_none() { crate::db::read_content(story_id).and_then(|contents| { - crate::io::story_metadata::parse_front_matter(&contents).ok()?.agent + crate::io::story_metadata::parse_front_matter(&contents) + .ok()? + .agent }) } else { None @@ -446,7 +449,10 @@ impl AgentPool { let run_result = match runtime_name { "claude-code" => { - let runtime = ClaudeCodeRuntime::new(child_killers_clone.clone(), watcher_tx_clone.clone()); + let runtime = ClaudeCodeRuntime::new( + child_killers_clone.clone(), + watcher_tx_clone.clone(), + ); let ctx = RuntimeContext { story_id: sid.clone(), agent_name: aname.clone(), @@ -514,7 +520,10 @@ impl AgentPool { .find_agent(&aname) .and_then(|a| a.model.clone()); let record = crate::agents::token_usage::build_record( - &sid, &aname, model, usage.clone(), + &sid, + &aname, + model, + usage.clone(), ); if let Err(e) = crate::agents::token_usage::append_record(pr, &record) { slog_error!( @@ -568,15 +577,13 @@ impl AgentPool { // re-dispatches a new mergemaster if the story still needs // merging. This avoids an async call to start_agent inside // a tokio::spawn (which would require Send). - let _ = watcher_tx_clone.send( - crate::io::watcher::WatcherEvent::WorkItem { - stage: "4_merge".to_string(), - item_id: sid.clone(), - action: "reassign".to_string(), - commit_msg: String::new(), - from_stage: None, - }, - ); + let _ = watcher_tx_clone.send(crate::io::watcher::WatcherEvent::WorkItem { + stage: "4_merge".to_string(), + item_id: sid.clone(), + action: "reassign".to_string(), + commit_msg: String::new(), + from_stage: None, + }); } else { // Server-owned completion: run acceptance gates automatically // when the agent process exits normally. 
@@ -712,7 +719,9 @@ stage = "coder" pool.inject_test_agent("story-1", "coder-1", AgentStatus::Running); pool.inject_test_agent("story-2", "coder-2", AgentStatus::Pending); - let result = pool.start_agent(tmp.path(), "story-3", None, None, None).await; + let result = pool + .start_agent(tmp.path(), "story-3", None, None, None) + .await; assert!(result.is_err()); let err = result.unwrap_err(); assert!( @@ -744,7 +753,9 @@ stage = "coder" let pool = AgentPool::new_test(3001); pool.inject_test_agent("story-1", "coder-1", AgentStatus::Running); - let result = pool.start_agent(tmp.path(), "story-3", None, None, None).await; + let result = pool + .start_agent(tmp.path(), "story-3", None, None, None) + .await; assert!(result.is_err()); let err = result.unwrap_err(); @@ -782,7 +793,9 @@ stage = "coder" let pool = AgentPool::new_test(3001); - let result = pool.start_agent(tmp.path(), "story-5", None, None, None).await; + let result = pool + .start_agent(tmp.path(), "story-5", None, None, None) + .await; match result { Ok(_) => {} Err(e) => { @@ -843,7 +856,9 @@ stage = "coder" let pool = AgentPool::new_test(3001); pool.inject_test_agent("story-a", "qa", AgentStatus::Running); - let result = pool.start_agent(root, "story-b", Some("qa"), None, None).await; + let result = pool + .start_agent(root, "story-b", Some("qa"), None, None) + .await; assert!( result.is_err(), @@ -870,7 +885,9 @@ stage = "coder" let pool = AgentPool::new_test(3001); pool.inject_test_agent("story-a", "qa", AgentStatus::Completed); - let result = pool.start_agent(root, "story-b", Some("qa"), None, None).await; + let result = pool + .start_agent(root, "story-b", Some("qa"), None, None) + .await; if let Err(ref e) = result { assert!( @@ -962,7 +979,9 @@ stage = "coder" let pool = AgentPool::new_test(3099); pool.inject_test_agent("story-x", "qa", AgentStatus::Running); - let result = pool.start_agent(root, "story-y", Some("qa"), None, None).await; + let result = pool + .start_agent(root, "story-y", 
Some("qa"), None, None) + .await; assert!(result.is_err()); let err = result.unwrap_err(); @@ -1247,11 +1266,7 @@ stage = "coder" ) .unwrap(); crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "310_story_foo", - "2_current", - "---\nname: Foo\n---\n", - ); + crate::db::write_item_with_content("310_story_foo", "2_current", "---\nname: Foo\n---\n"); let pool = AgentPool::new_test(3099); let result = pool @@ -1323,11 +1338,7 @@ stage = "coder" ) .unwrap(); crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "55_story_baz", - "4_merge", - "---\nname: Baz\n---\n", - ); + crate::db::write_item_with_content("55_story_baz", "4_merge", "---\nname: Baz\n---\n"); let pool = AgentPool::new_test(3099); let result = pool @@ -1459,7 +1470,13 @@ stage = "coder" let pool = AgentPool::new_test(3098); let result = pool - .start_agent(root, "502_story_split_brain", Some("mergemaster"), None, None) + .start_agent( + root, + "502_story_split_brain", + Some("mergemaster"), + None, + None, + ) .await; // Stage check must not reject mergemaster. @@ -1475,11 +1492,15 @@ stage = "coder" // Before the fix, line 53 of start.rs would have demoted it to // 2_current/ via move_story_to_current finding the 1_backlog shadow. 
assert!( - sk_dir.join("work/4_merge/502_story_split_brain.md").exists(), + sk_dir + .join("work/4_merge/502_story_split_brain.md") + .exists(), "story must still be in 4_merge/ after start_agent(mergemaster, ...)" ); assert!( - !sk_dir.join("work/2_current/502_story_split_brain.md").exists(), + !sk_dir + .join("work/2_current/502_story_split_brain.md") + .exists(), "story must NOT have been demoted to 2_current/ — that's bug 502" ); } @@ -1564,11 +1585,7 @@ stage = "coder" ) .unwrap(); let story_content = "---\nname: Test Story\nagent: coder-opus\n---\n# Story 368\n"; - std::fs::write( - backlog.join("368_story_test.md"), - story_content, - ) - .unwrap(); + std::fs::write(backlog.join("368_story_test.md"), story_content).unwrap(); // Also write to the filesystem current dir and content store so that // start_agent reads the correct front matter even when another test has // left a stale entry for "368_story_test" in the global CRDT. @@ -1583,7 +1600,10 @@ stage = "coder" let result = pool .start_agent(tmp.path(), "368_story_test", None, None, None) .await; - assert!(result.is_err(), "expected error when preferred agent is busy"); + assert!( + result.is_err(), + "expected error when preferred agent is busy" + ); let err = result.unwrap_err(); assert!( err.contains("coder-opus"), diff --git a/server/src/agents/pool/stop.rs b/server/src/agents/pool/stop.rs index 564389b0..ddb8d4f8 100644 --- a/server/src/agents/pool/stop.rs +++ b/server/src/agents/pool/stop.rs @@ -4,8 +4,8 @@ use crate::slog_error; use std::path::Path; use super::super::{AgentEvent, AgentStatus}; -use super::types::composite_key; use super::AgentPool; +use super::types::composite_key; impl AgentPool { /// Stop a running agent. Worktree is preserved for inspection. 
diff --git a/server/src/agents/pool/test_helpers.rs b/server/src/agents/pool/test_helpers.rs index 6589d7ae..c2dc8574 100644 --- a/server/src/agents/pool/test_helpers.rs +++ b/server/src/agents/pool/test_helpers.rs @@ -5,8 +5,8 @@ use std::sync::{Arc, Mutex}; use tokio::sync::broadcast; use super::super::{AgentEvent, AgentStatus, CompletionReport}; -use super::types::{StoryAgent, composite_key}; use super::AgentPool; +use super::types::{StoryAgent, composite_key}; impl AgentPool { /// Test helper: inject a pre-built agent entry so unit tests can exercise diff --git a/server/src/agents/pool/wait.rs b/server/src/agents/pool/wait.rs index 7a569f76..52d899a0 100644 --- a/server/src/agents/pool/wait.rs +++ b/server/src/agents/pool/wait.rs @@ -1,7 +1,7 @@ //! Agent wait — blocks until an agent reaches a terminal state with optional timeout. use super::super::{AgentEvent, AgentInfo, AgentStatus}; -use super::types::{agent_info_from_entry, composite_key}; use super::AgentPool; +use super::types::{agent_info_from_entry, composite_key}; use tokio::sync::broadcast; diff --git a/server/src/agents/pool/worktree.rs b/server/src/agents/pool/worktree.rs index ea4b24d4..9133689b 100644 --- a/server/src/agents/pool/worktree.rs +++ b/server/src/agents/pool/worktree.rs @@ -23,7 +23,10 @@ impl AgentPool { /// Return the active pipeline stage directory name for `story_id`, or `None` if the /// story is not in any active stage (`2_current/`, `3_qa/`, `4_merge/`). 
-pub(super) fn find_active_story_stage(_project_root: &Path, story_id: &str) -> Option<&'static str> { +pub(super) fn find_active_story_stage( + _project_root: &Path, + story_id: &str, +) -> Option<&'static str> { if let Ok(Some(item)) = crate::pipeline_state::read_typed(story_id) && item.stage.is_active() { @@ -39,11 +42,7 @@ mod tests { #[test] fn find_active_story_stage_detects_current() { crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "10_story_test", - "2_current", - "---\nname: Test\n---\n", - ); + crate::db::write_item_with_content("10_story_test", "2_current", "---\nname: Test\n---\n"); let tmp = tempfile::tempdir().unwrap(); assert_eq!( find_active_story_stage(tmp.path(), "10_story_test"), @@ -54,23 +53,18 @@ mod tests { #[test] fn find_active_story_stage_detects_qa() { crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "11_story_test", - "3_qa", - "---\nname: Test\n---\n", - ); + crate::db::write_item_with_content("11_story_test", "3_qa", "---\nname: Test\n---\n"); let tmp = tempfile::tempdir().unwrap(); - assert_eq!(find_active_story_stage(tmp.path(), "11_story_test"), Some("3_qa")); + assert_eq!( + find_active_story_stage(tmp.path(), "11_story_test"), + Some("3_qa") + ); } #[test] fn find_active_story_stage_detects_merge() { crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "12_story_test", - "4_merge", - "---\nname: Test\n---\n", - ); + crate::db::write_item_with_content("12_story_test", "4_merge", "---\nname: Test\n---\n"); let tmp = tempfile::tempdir().unwrap(); assert_eq!( find_active_story_stage(tmp.path(), "12_story_test"), diff --git a/server/src/agents/pty.rs b/server/src/agents/pty.rs index 43dcac69..1107f402 100644 --- a/server/src/agents/pty.rs +++ b/server/src/agents/pty.rs @@ -237,10 +237,23 @@ fn run_agent_pty_blocking( story_id.replace(['_', '.'], "-") ); let session_count = std::fs::read_dir(&session_dir) - .map(|d| d.filter(|e| e.as_ref().map(|e| 
e.path().extension().is_some_and(|ext| ext == "jsonl")).unwrap_or(false)).count()) + .map(|d| { + d.filter(|e| { + e.as_ref() + .map(|e| e.path().extension().is_some_and(|ext| ext == "jsonl")) + .unwrap_or(false) + }) + .count() + }) .unwrap_or(0); let session_bytes: u64 = std::fs::read_dir(&session_dir) - .map(|d| d.filter_map(|e| e.ok()).filter(|e| e.path().extension().is_some_and(|ext| ext == "jsonl")).filter_map(|e| e.metadata().ok()).map(|m| m.len()).sum()) + .map(|d| { + d.filter_map(|e| e.ok()) + .filter(|e| e.path().extension().is_some_and(|ext| ext == "jsonl")) + .filter_map(|e| e.metadata().ok()) + .map(|m| m.len()) + .sum() + }) .unwrap_or(0); slog!( @@ -373,12 +386,7 @@ fn run_agent_pty_blocking( "stream_event" => { if let Some(event) = json.get("event") { handle_agent_stream_event( - event, - story_id, - agent_name, - tx, - event_log, - log_writer, + event, story_id, agent_name, tx, event_log, log_writer, ); } } @@ -409,8 +417,7 @@ fn run_agent_pty_blocking( t } None => { - let default = chrono::Utc::now() - + chrono::Duration::minutes(5); + let default = chrono::Utc::now() + chrono::Duration::minutes(5); slog!( "[agent:{story_id}:{agent_name}] API rate limit hard block \ (status={status}); no reset_at in rate_limit_info, \ @@ -469,14 +476,10 @@ fn run_agent_pty_blocking( let wait_result = child.wait(); match &wait_result { Ok(status) => { - slog!( - "[agent:{story_id}:{agent_name}] Child exited: {status:?}" - ); + slog!("[agent:{story_id}:{agent_name}] Child exited: {status:?}"); } Err(e) => { - slog!( - "[agent:{story_id}:{agent_name}] Child wait error: {e}" - ); + slog!("[agent:{story_id}:{agent_name}] Child wait error: {e}"); } } @@ -709,8 +712,7 @@ mod tests { let tmp = tempfile::tempdir().unwrap(); let root = tmp.path(); - let log_writer = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-emit").unwrap(); + let log_writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-emit").unwrap(); let log_mutex = 
Mutex::new(log_writer); let (tx, _rx) = broadcast::channel::(64); diff --git a/server/src/agents/runtime/gemini.rs b/server/src/agents/runtime/gemini.rs index b0f6c2a5..e1c5377e 100644 --- a/server/src/agents/runtime/gemini.rs +++ b/server/src/agents/runtime/gemini.rs @@ -4,7 +4,7 @@ use std::sync::{Arc, Mutex}; use reqwest::Client; use serde::{Deserialize, Serialize}; -use serde_json::{json, Value}; +use serde_json::{Value, json}; use tokio::sync::broadcast; use crate::agent_log::AgentLogWriter; @@ -135,14 +135,15 @@ impl AgentRuntime for GeminiRuntime { }); } - slog!("[gemini] Turn {turn} for {}:{}", ctx.story_id, ctx.agent_name); - - let request_body = build_generate_content_request( - &system_instruction, - &contents, - &gemini_tools, + slog!( + "[gemini] Turn {turn} for {}:{}", + ctx.story_id, + ctx.agent_name ); + let request_body = + build_generate_content_request(&system_instruction, &contents, &gemini_tools); + let url = format!( "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}" ); @@ -201,8 +202,7 @@ impl AgentRuntime for GeminiRuntime { text_parts.push(text.to_string()); } if let Some(fc) = part.get("functionCall") - && let (Some(name), Some(args)) = - (fc["name"].as_str(), fc.get("args")) + && let (Some(name), Some(args)) = (fc["name"].as_str(), fc.get("args")) { function_calls.push(GeminiFunctionCall { name: name.to_string(), @@ -263,18 +263,14 @@ impl AgentRuntime for GeminiRuntime { text: format!("\n[Tool call: {}]\n", fc.name), }); - let tool_result = - call_mcp_tool(&client, &mcp_base, &fc.name, &fc.args).await; + let tool_result = call_mcp_tool(&client, &mcp_base, &fc.name, &fc.args).await; let response_value = match &tool_result { Ok(result) => { emit(AgentEvent::Output { story_id: ctx.story_id.clone(), agent_name: ctx.agent_name.clone(), - text: format!( - "[Tool result: {} chars]\n", - result.len() - ), + text: format!("[Tool result: {} chars]\n", result.len()), }); json!({ "result": result }) } @@ 
-453,7 +449,10 @@ async fn fetch_and_convert_mcp_tools( }); } - slog!("[gemini] Loaded {} MCP tools as function declarations", declarations.len()); + slog!( + "[gemini] Loaded {} MCP tools as function declarations", + declarations.len() + ); Ok(declarations) } @@ -560,10 +559,7 @@ async fn call_mcp_tool( // MCP tools/call returns { result: { content: [{ type: "text", text: "..." }] } } let content = &body["result"]["content"]; if let Some(arr) = content.as_array() { - let texts: Vec<&str> = arr - .iter() - .filter_map(|c| c["text"].as_str()) - .collect(); + let texts: Vec<&str> = arr.iter().filter_map(|c| c["text"].as_str()).collect(); if !texts.is_empty() { return Ok(texts.join("\n")); } @@ -747,7 +743,10 @@ mod tests { let body = build_generate_content_request(&system, &contents, &tools); assert!(body["tools"][0]["functionDeclarations"].is_array()); - assert_eq!(body["tools"][0]["functionDeclarations"][0]["name"], "my_tool"); + assert_eq!( + body["tools"][0]["functionDeclarations"][0]["name"], + "my_tool" + ); } #[test] diff --git a/server/src/agents/runtime/mod.rs b/server/src/agents/runtime/mod.rs index d5f2b551..9ae0013b 100644 --- a/server/src/agents/runtime/mod.rs +++ b/server/src/agents/runtime/mod.rs @@ -151,8 +151,8 @@ mod tests { #[test] fn claude_code_runtime_get_status_returns_idle() { - use std::collections::HashMap; use crate::io::watcher::WatcherEvent; + use std::collections::HashMap; let killers = Arc::new(Mutex::new(HashMap::new())); let (watcher_tx, _) = broadcast::channel::(16); let runtime = ClaudeCodeRuntime::new(killers, watcher_tx); @@ -161,8 +161,8 @@ mod tests { #[test] fn claude_code_runtime_stream_events_empty() { - use std::collections::HashMap; use crate::io::watcher::WatcherEvent; + use std::collections::HashMap; let killers = Arc::new(Mutex::new(HashMap::new())); let (watcher_tx, _) = broadcast::channel::(16); let runtime = ClaudeCodeRuntime::new(killers, watcher_tx); diff --git a/server/src/agents/runtime/openai.rs 
b/server/src/agents/runtime/openai.rs index 4b6f3949..73640d38 100644 --- a/server/src/agents/runtime/openai.rs +++ b/server/src/agents/runtime/openai.rs @@ -3,7 +3,7 @@ use std::sync::atomic::{AtomicBool, Ordering}; use std::sync::{Arc, Mutex}; use reqwest::Client; -use serde_json::{json, Value}; +use serde_json::{Value, json}; use tokio::sync::broadcast; use crate::agent_log::AgentLogWriter; @@ -471,10 +471,7 @@ async fn call_mcp_tool( // MCP tools/call returns { result: { content: [{ type: "text", text: "..." }] } } let content = &body["result"]["content"]; if let Some(arr) = content.as_array() { - let texts: Vec<&str> = arr - .iter() - .filter_map(|c| c["text"].as_str()) - .collect(); + let texts: Vec<&str> = arr.iter().filter_map(|c| c["text"].as_str()).collect(); if !texts.is_empty() { return Ok(texts.join("\n")); } diff --git a/server/src/chat/commands/ambient.rs b/server/src/chat/commands/ambient.rs index ff52cf42..6b163af9 100644 --- a/server/src/chat/commands/ambient.rs +++ b/server/src/chat/commands/ambient.rs @@ -69,7 +69,10 @@ mod tests { // "timmy ambient on" — bot name mentioned but not @-prefixed, so // is_addressed is false; strip_bot_mention still strips "timmy ". let result = try_handle_command(&dispatch, "timmy ambient on"); - assert!(result.is_some(), "ambient on should fire even when is_addressed=false"); + assert!( + result.is_some(), + "ambient on should fire even when is_addressed=false" + ); assert!( ambient_rooms.lock().unwrap().contains(&room_id), "room should be in ambient_rooms after ambient on" @@ -92,7 +95,10 @@ mod tests { }; // Bare "ambient off" in an ambient room (is_addressed=false). 
let result = try_handle_command(&dispatch, "ambient off"); - assert!(result.is_some(), "bare ambient off should be handled without LLM"); + assert!( + result.is_some(), + "bare ambient off should be handled without LLM" + ); let output = result.unwrap(); assert!( output.contains("Ambient mode off"), @@ -161,7 +167,11 @@ mod tests { #[test] fn ambient_invalid_args_returns_usage() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy ambient"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy ambient", + ); let output = result.unwrap(); assert!( output.contains("Usage"), diff --git a/server/src/chat/commands/backlog.rs b/server/src/chat/commands/backlog.rs index 75f10547..ebd89b38 100644 --- a/server/src/chat/commands/backlog.rs +++ b/server/src/chat/commands/backlog.rs @@ -1,8 +1,8 @@ //! Handler for the `backlog` command — shows only Stage::Backlog items. -use crate::pipeline_state::{PipelineItem, Stage}; use super::CommandContext; use super::status::{story_short_label, unmet_deps_from_items}; +use crate::pipeline_state::{PipelineItem, Stage}; pub(super) fn handle_backlog(_ctx: &CommandContext) -> Option<String> { Some(build_backlog_output()) } @@ -94,16 +94,29 @@ mod tests { make_item("30_story_in_qa", "In QA", Stage::Qa), ]; let output = build_backlog_from_items(&items); - assert!(output.contains("In Backlog"), "should show backlog item: {output}"); - assert!(!output.contains("In Progress"), "should not show coding items: {output}"); - assert!(!output.contains("In QA"), "should not show QA items: {output}"); + assert!( + output.contains("In Backlog"), + "should show backlog item: {output}" + ); + assert!( + !output.contains("In Progress"), + "should not show coding items: {output}" + ); + assert!( + !output.contains("In QA"), + "should not show QA items: {output}" + ); } // -- AC: shows number, type, name ----------------------------------------- #[test] fn
backlog_shows_number_type_and_name() { - let items = vec![make_item("42_story_my_feature", "My Feature", Stage::Backlog)]; + let items = vec![make_item( + "42_story_my_feature", + "My Feature", + Stage::Backlog, + )]; let output = build_backlog_from_items(&items); assert!( output.contains("42 [story] — My Feature"), @@ -116,7 +129,12 @@ mod tests { #[test] fn backlog_shows_waiting_on_for_unmet_deps() { let items = vec![ - make_item_with_deps("10_story_waiting", "Waiting Story", Stage::Backlog, vec![999]), + make_item_with_deps( + "10_story_waiting", + "Waiting Story", + Stage::Backlog, + vec![999], + ), make_item("999_story_dep", "Dep Story", Stage::Backlog), ]; let output = build_backlog_from_items(&items); @@ -150,16 +168,17 @@ mod tests { fn backlog_no_waiting_on_when_no_deps() { let items = vec![make_item("5_story_nodeps", "No Deps", Stage::Backlog)]; let output = build_backlog_from_items(&items); - assert!(!output.contains("waiting on"), "no dep suffix when no deps: {output}"); + assert!( + !output.contains("waiting on"), + "no dep suffix when no deps: {output}" + ); } // -- AC: command is registered in the registry ---------------------------- #[test] fn backlog_command_in_registry() { - let found = super::super::commands() - .iter() - .any(|c| c.name == "backlog"); + let found = super::super::commands().iter().any(|c| c.name == "backlog"); assert!(found, "backlog must be registered in commands()"); } @@ -171,7 +190,10 @@ mod tests { "@timmy help", ); let output = result.unwrap_or_default(); - assert!(output.contains("backlog"), "backlog should appear in help output: {output}"); + assert!( + output.contains("backlog"), + "backlog should appear in help output: {output}" + ); } #[test] @@ -181,7 +203,10 @@ mod tests { "@timmy:homeserver.local", "@timmy backlog", ); - assert!(result.is_some(), "backlog command should match and return Some"); + assert!( + result.is_some(), + "backlog command should match and return Some" + ); } #[test] @@ -192,7 +217,10 @@ mod 
tests { "@timmy backlog", ); let output = result.unwrap_or_default(); - assert!(output.contains("Backlog"), "backlog output should contain Backlog header: {output}"); + assert!( + output.contains("Backlog"), + "backlog output should contain Backlog header: {output}" + ); } // -- empty backlog -------------------------------------------------------- @@ -201,6 +229,9 @@ mod tests { fn backlog_shows_none_when_empty() { let items = vec![make_item("1_story_done", "Done", Stage::Coding)]; let output = build_backlog_from_items(&items); - assert!(output.contains("*(none)*"), "should show none when no backlog items: {output}"); + assert!( + output.contains("*(none)*"), + "should show none when no backlog items: {output}" + ); } } diff --git a/server/src/chat/commands/cost.rs b/server/src/chat/commands/cost.rs index 1eee04b5..4d5311e1 100644 --- a/server/src/chat/commands/cost.rs +++ b/server/src/chat/commands/cost.rs @@ -2,8 +2,8 @@ use std::collections::HashMap; -use super::status::story_short_label; use super::CommandContext; +use super::status::story_short_label; /// Show token spend: 24h total, top 5 stories, agent-type breakdown, and /// all-time total. 
@@ -102,7 +102,10 @@ mod tests { use crate::agents::AgentPool; use std::sync::Arc; - fn write_token_records(root: &std::path::Path, records: &[crate::agents::token_usage::TokenUsageRecord]) { + fn write_token_records( + root: &std::path::Path, + records: &[crate::agents::token_usage::TokenUsageRecord], + ) { for r in records { crate::agents::token_usage::append_record(root, r).unwrap(); } @@ -118,7 +121,12 @@ mod tests { } } - fn make_record(story_id: &str, agent_name: &str, cost: f64, hours_ago: i64) -> crate::agents::token_usage::TokenUsageRecord { + fn make_record( + story_id: &str, + agent_name: &str, + cost: f64, + hours_ago: i64, + ) -> crate::agents::token_usage::TokenUsageRecord { let ts = (chrono::Utc::now() - chrono::Duration::hours(hours_ago)).to_rfc3339(); crate::agents::token_usage::TokenUsageRecord { story_id: story_id.to_string(), @@ -157,55 +165,89 @@ mod tests { #[test] fn cost_command_appears_in_help() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy help", + ); let output = result.unwrap(); - assert!(output.contains("cost"), "help should list cost command: {output}"); + assert!( + output.contains("cost"), + "help should list cost command: {output}" + ); } #[test] fn cost_command_no_records() { let tmp = tempfile::TempDir::new().unwrap(); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("No usage records found"), "should show empty message: {output}"); + assert!( + output.contains("No usage records found"), + "should show empty message: {output}" + ); } #[test] fn cost_command_shows_24h_total() { let tmp = tempfile::TempDir::new().unwrap(); - write_token_records(tmp.path(), &[ - make_record("42_story_foo", "coder-1", 1.50, 2), - make_record("42_story_foo", "coder-1", 0.50, 5), - ]); + write_token_records( + tmp.path(), + &[ + make_record("42_story_foo", 
"coder-1", 1.50, 2), + make_record("42_story_foo", "coder-1", 0.50, 5), + ], + ); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("**Last 24h:** $2.00"), "should show 24h total: {output}"); + assert!( + output.contains("**Last 24h:** $2.00"), + "should show 24h total: {output}" + ); } #[test] fn cost_command_excludes_old_from_24h() { let tmp = tempfile::TempDir::new().unwrap(); - write_token_records(tmp.path(), &[ - make_record("42_story_foo", "coder-1", 1.00, 2), // within 24h - make_record("43_story_bar", "coder-1", 5.00, 48), // older - ]); + write_token_records( + tmp.path(), + &[ + make_record("42_story_foo", "coder-1", 1.00, 2), // within 24h + make_record("43_story_bar", "coder-1", 5.00, 48), // older + ], + ); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("**Last 24h:** $1.00"), "should only count recent: {output}"); - assert!(output.contains("**All-time:** $6.00"), "all-time should include everything: {output}"); + assert!( + output.contains("**Last 24h:** $1.00"), + "should only count recent: {output}" + ); + assert!( + output.contains("**All-time:** $6.00"), + "all-time should include everything: {output}" + ); } #[test] fn cost_command_shows_top_stories() { let tmp = tempfile::TempDir::new().unwrap(); - write_token_records(tmp.path(), &[ - make_record("42_story_foo", "coder-1", 3.00, 1), - make_record("43_story_bar", "coder-1", 1.00, 1), - make_record("42_story_foo", "qa-1", 2.00, 1), - ]); + write_token_records( + tmp.path(), + &[ + make_record("42_story_foo", "coder-1", 3.00, 1), + make_record("43_story_bar", "coder-1", 1.00, 1), + make_record("42_story_foo", "qa-1", 2.00, 1), + ], + ); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("Top Stories"), "should have top stories section: {output}"); + assert!( + output.contains("Top Stories"), + "should have top stories section: {output}" + ); // Story 42 ($5.00) should appear before story 43 ($1.00) let 
pos_42 = output.find("42").unwrap(); let pos_43 = output.find("43").unwrap(); - assert!(pos_42 < pos_43, "story 42 should appear before 43 (sorted by cost): {output}"); + assert!( + pos_42 < pos_43, + "story 42 should appear before 43 (sorted by cost): {output}" + ); } #[test] @@ -213,45 +255,75 @@ mod tests { let tmp = tempfile::TempDir::new().unwrap(); let mut records = Vec::new(); for i in 1..=7 { - records.push(make_record(&format!("{i}_story_s{i}"), "coder-1", i as f64, 1)); + records.push(make_record( + &format!("{i}_story_s{i}"), + "coder-1", + i as f64, + 1, + )); } write_token_records(tmp.path(), &records); let output = cost_cmd_with_root(tmp.path()).unwrap(); // The top 5 most expensive are stories 7,6,5,4,3. Stories 1 and 2 should be excluded. let top_section = output.split("**By Agent Type").next().unwrap(); - assert!(!top_section.contains("• 1 —"), "story 1 should not be in top 5: {output}"); - assert!(!top_section.contains("• 2 —"), "story 2 should not be in top 5: {output}"); + assert!( + !top_section.contains("• 1 —"), + "story 1 should not be in top 5: {output}" + ); + assert!( + !top_section.contains("• 2 —"), + "story 2 should not be in top 5: {output}" + ); } #[test] fn cost_command_shows_agent_type_breakdown() { let tmp = tempfile::TempDir::new().unwrap(); - write_token_records(tmp.path(), &[ - make_record("42_story_foo", "coder-1", 2.00, 1), - make_record("42_story_foo", "qa-1", 1.50, 1), - make_record("42_story_foo", "mergemaster", 0.50, 1), - ]); + write_token_records( + tmp.path(), + &[ + make_record("42_story_foo", "coder-1", 2.00, 1), + make_record("42_story_foo", "qa-1", 1.50, 1), + make_record("42_story_foo", "mergemaster", 0.50, 1), + ], + ); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("By Agent Type"), "should have agent type section: {output}"); + assert!( + output.contains("By Agent Type"), + "should have agent type section: {output}" + ); assert!(output.contains("coder"), "should show coder type: 
{output}"); assert!(output.contains("qa"), "should show qa type: {output}"); - assert!(output.contains("mergemaster"), "should show mergemaster type: {output}"); + assert!( + output.contains("mergemaster"), + "should show mergemaster type: {output}" + ); } #[test] fn cost_command_shows_all_time_total() { let tmp = tempfile::TempDir::new().unwrap(); - write_token_records(tmp.path(), &[ - make_record("42_story_foo", "coder-1", 1.00, 2), - make_record("43_story_bar", "coder-1", 9.00, 100), - ]); + write_token_records( + tmp.path(), + &[ + make_record("42_story_foo", "coder-1", 1.00, 2), + make_record("43_story_bar", "coder-1", 9.00, 100), + ], + ); let output = cost_cmd_with_root(tmp.path()).unwrap(); - assert!(output.contains("**All-time:** $10.00"), "should show all-time total: {output}"); + assert!( + output.contains("**All-time:** $10.00"), + "should show all-time total: {output}" + ); } #[test] fn cost_command_case_insensitive() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy COST"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy COST", + ); assert!(result.is_some(), "COST should match case-insensitively"); } diff --git a/server/src/chat/commands/coverage.rs b/server/src/chat/commands/coverage.rs index 8cb57130..ab36079a 100644 --- a/server/src/chat/commands/coverage.rs +++ b/server/src/chat/commands/coverage.rs @@ -59,12 +59,16 @@ fn read_cached_coverage(project_root: &std::path::Path) -> String { fn read_coverage_report(path: &std::path::Path) -> String { let content = match std::fs::read_to_string(path) { Ok(c) => c, - Err(e) => return format!("**Coverage (cached)**\n\nError reading `.coverage_report.json`: {e}"), + Err(e) => { + return format!("**Coverage (cached)**\n\nError reading `.coverage_report.json`: {e}"); + } }; let report: CoverageReport = match serde_json::from_str(&content) { Ok(r) => r, - Err(e) => return format!("**Coverage 
(cached)**\n\nFailed to parse `.coverage_report.json`: {e}"), + Err(e) => { + return format!("**Coverage (cached)**\n\nFailed to parse `.coverage_report.json`: {e}"); + } }; format_coverage_report(&report) @@ -81,13 +85,22 @@ fn format_coverage_report(report: &CoverageReport) -> String { // Top 5 lowest-covered files (already sorted ascending in the JSON, but sort // defensively here so the display is correct even if the file was hand-edited). let mut sorted: Vec<&FileCoverage> = report.files.iter().collect(); - sorted.sort_by(|a, b| a.coverage.partial_cmp(&b.coverage).unwrap_or(std::cmp::Ordering::Equal)); + sorted.sort_by(|a, b| { + a.coverage + .partial_cmp(&b.coverage) + .unwrap_or(std::cmp::Ordering::Equal) + }); let targets: Vec<&FileCoverage> = sorted.into_iter().take(5).collect(); if !targets.is_empty() { out.push_str("\n**Top 5 files needing coverage:**\n"); for (i, file) in targets.iter().enumerate() { - out.push_str(&format!("{}. {} — {:.1}%\n", i + 1, file.path, file.coverage)); + out.push_str(&format!( + "{}. {} — {:.1}%\n", + i + 1, + file.path, + file.coverage + )); } } @@ -162,8 +175,13 @@ fn run_coverage(project_root: &std::path::Path) -> String { // Replace the "cached" label with "fresh". result = result.replacen("Coverage (cached)", "Coverage (fresh)", 1); // Replace the cached hint with a pass/fail indicator. 
- let pass_indicator = if out.status.success() { "PASS" } else { "FAIL: coverage below threshold" }; - result = result.replacen("*Run `coverage run` for fresh results.*", pass_indicator, 1); + let pass_indicator = if out.status.success() { + "PASS" + } else { + "FAIL: coverage below threshold" + }; + result = + result.replacen("*Run `coverage run` for fresh results.*", pass_indicator, 1); return result; } @@ -322,9 +340,18 @@ mod tests { let output = handle_coverage(&ctx).unwrap(); assert!(output.contains("72.5"), "should include overall: {output}"); - assert!(output.contains("60.0"), "should include threshold: {output}"); - assert!(output.contains("15.0"), "should include lowest-covered file pct: {output}"); - assert!(output.contains("server/src/low.rs"), "should include lowest-covered file path: {output}"); + assert!( + output.contains("60.0"), + "should include threshold: {output}" + ); + assert!( + output.contains("15.0"), + "should include lowest-covered file pct: {output}" + ); + assert!( + output.contains("server/src/low.rs"), + "should include lowest-covered file path: {output}" + ); } #[test] @@ -348,9 +375,18 @@ mod tests { let output = handle_coverage(&ctx).unwrap(); assert!(output.contains("a.rs"), "should show lowest file: {output}"); - assert!(output.contains("e.rs"), "should show 5th lowest file: {output}"); - assert!(!output.contains("f.rs"), "should not show 6th file: {output}"); - assert!(!output.contains("g.rs"), "should not show 7th file: {output}"); + assert!( + output.contains("e.rs"), + "should show 5th lowest file: {output}" + ); + assert!( + !output.contains("f.rs"), + "should not show 6th file: {output}" + ); + assert!( + !output.contains("g.rs"), + "should not show 7th file: {output}" + ); } #[test] @@ -466,15 +502,24 @@ mod tests { overall: 66.25, threshold: 60.0, files: vec![ - FileCoverage { path: "a.rs".to_string(), coverage: 10.0 }, - FileCoverage { path: "b.rs".to_string(), coverage: 80.0 }, + FileCoverage { + path: 
"a.rs".to_string(), + coverage: 10.0, + }, + FileCoverage { + path: "b.rs".to_string(), + coverage: 80.0, + }, ], }; let result = format_coverage_report(&report); assert!(result.contains("66.2"), "should show overall: {result}"); assert!(result.contains("60.0"), "should show threshold: {result}"); assert!(result.contains("a.rs"), "should show lowest file: {result}"); - assert!(result.contains("10.0"), "should show lowest file pct: {result}"); + assert!( + result.contains("10.0"), + "should show lowest file pct: {result}" + ); } #[test] @@ -490,9 +535,18 @@ Frontend line coverage: 70.0%\n\ PASS: Coverage 66.25% meets threshold 60.00%\n\ "; let result = parse_coverage_output(sample, true); - assert!(result.contains("62.5"), "should include Rust coverage: {result}"); - assert!(result.contains("70.0"), "should include Frontend coverage: {result}"); - assert!(result.contains("66.25"), "should include Overall coverage: {result}"); + assert!( + result.contains("62.5"), + "should include Rust coverage: {result}" + ); + assert!( + result.contains("70.0"), + "should include Frontend coverage: {result}" + ); + assert!( + result.contains("66.25"), + "should include Overall coverage: {result}" + ); assert!(result.contains("PASS"), "should indicate PASS: {result}"); } diff --git a/server/src/chat/commands/depends.rs b/server/src/chat/commands/depends.rs index 6af13d7f..d59bb9db 100644 --- a/server/src/chat/commands/depends.rs +++ b/server/src/chat/commands/depends.rs @@ -128,14 +128,20 @@ mod tests { "@timmy help", ); let output = result.unwrap(); - assert!(output.contains("depends"), "help should list depends command: {output}"); + assert!( + output.contains("depends"), + "help should list depends command: {output}" + ); } #[test] fn depends_no_args_returns_usage() { let tmp = tempfile::TempDir::new().unwrap(); let output = depends_cmd_with_root(tmp.path(), "").unwrap(); - assert!(output.contains("Usage"), "no args should show usage: {output}"); + assert!( + 
output.contains("Usage"), + "no args should show usage: {output}" + ); } #[test] @@ -188,10 +194,9 @@ mod tests { output.contains("477") && output.contains("478"), "response should mention dep numbers: {output}" ); - let contents = std::fs::read_to_string( - tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"), - ) - .unwrap(); + let contents = + std::fs::read_to_string(tmp.path().join(".huskies/work/1_backlog/42_story_foo.md")) + .unwrap(); assert!( contents.contains("depends_on: [477, 478]"), "file should have depends_on set: {contents}" @@ -212,10 +217,9 @@ mod tests { output.contains("Cleared"), "should confirm clearing deps: {output}" ); - let contents = std::fs::read_to_string( - tmp.path().join(".huskies/work/2_current/10_story_bar.md"), - ) - .unwrap(); + let contents = + std::fs::read_to_string(tmp.path().join(".huskies/work/2_current/10_story_bar.md")) + .unwrap(); assert!( !contents.contains("depends_on"), "file should have depends_on cleared: {contents}" diff --git a/server/src/chat/commands/git.rs b/server/src/chat/commands/git.rs index f8b6d534..018767c9 100644 --- a/server/src/chat/commands/git.rs +++ b/server/src/chat/commands/git.rs @@ -100,9 +100,16 @@ mod tests { #[test] fn git_command_appears_in_help() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy help", + ); let output = result.unwrap(); - assert!(output.contains("git"), "help should list git command: {output}"); + assert!( + output.contains("git"), + "help should list git command: {output}" + ); } #[test] @@ -197,7 +204,11 @@ mod tests { #[test] fn git_command_case_insensitive() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy GIT"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy GIT", + ); assert!(result.is_some(), 
"GIT should match case-insensitively"); } } diff --git a/server/src/chat/commands/help.rs b/server/src/chat/commands/help.rs index f31ba816..77b10f29 100644 --- a/server/src/chat/commands/help.rs +++ b/server/src/chat/commands/help.rs @@ -1,6 +1,6 @@ //! Handler for the `help` command. -use super::{commands, CommandContext}; +use super::{CommandContext, commands}; pub(super) fn handle_help(ctx: &CommandContext) -> Option { let mut output = format!("**{} Commands**\n\n", ctx.bot_name); @@ -14,7 +14,7 @@ pub(super) fn handle_help(ctx: &CommandContext) -> Option { #[cfg(test)] mod tests { - use super::super::tests::{try_cmd_addressed, commands}; + use super::super::tests::{commands, try_cmd_addressed}; #[test] fn help_command_matches() { @@ -74,7 +74,10 @@ mod tests { fn help_output_includes_status() { let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); let output = result.unwrap(); - assert!(output.contains("status"), "help should list status command: {output}"); + assert!( + output.contains("status"), + "help should list status command: {output}" + ); } #[test] @@ -86,7 +89,9 @@ mod tests { .iter() .map(|c| { let marker = format!("**{}**", c.name); - let pos = output.find(&marker).expect("command must appear in help as **name**"); + let pos = output + .find(&marker) + .expect("command must appear in help as **name**"); (pos, c.name) }) .collect(); @@ -94,20 +99,29 @@ mod tests { let names_in_order: Vec<&str> = positions.iter().map(|(_, n)| *n).collect(); let mut sorted = names_in_order.clone(); sorted.sort(); - assert_eq!(names_in_order, sorted, "commands must appear in alphabetical order"); + assert_eq!( + names_in_order, sorted, + "commands must appear in alphabetical order" + ); } #[test] fn help_output_includes_ambient() { let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); let output = result.unwrap(); - assert!(output.contains("ambient"), "help should list ambient command: {output}"); + assert!( + 
output.contains("ambient"), + "help should list ambient command: {output}" + ); } #[test] fn help_output_includes_htop() { let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); let output = result.unwrap(); - assert!(output.contains("htop"), "help should list htop command: {output}"); + assert!( + output.contains("htop"), + "help should list htop command: {output}" + ); } } diff --git a/server/src/chat/commands/loc.rs b/server/src/chat/commands/loc.rs index 8eec1ff9..ee274010 100644 --- a/server/src/chat/commands/loc.rs +++ b/server/src/chat/commands/loc.rs @@ -152,11 +152,53 @@ fn loc_top_n(project_root: &std::path::Path, top_n: usize) -> String { fn is_source_extension(ext: &str) -> bool { matches!( ext, - "rs" | "ts" | "tsx" | "js" | "jsx" | "py" | "go" | "java" | "c" | "cpp" | "h" - | "hpp" | "cs" | "rb" | "swift" | "kt" | "scala" | "hs" | "ml" | "ex" | "exs" - | "clj" | "lua" | "sh" | "bash" | "zsh" | "fish" | "ps1" | "toml" | "yaml" - | "yml" | "json" | "md" | "html" | "css" | "scss" | "less" | "sql" | "graphql" - | "proto" | "tf" | "hcl" | "nix" | "r" | "jl" | "dart" | "vue" | "svelte" + "rs" | "ts" + | "tsx" + | "js" + | "jsx" + | "py" + | "go" + | "java" + | "c" + | "cpp" + | "h" + | "hpp" + | "cs" + | "rb" + | "swift" + | "kt" + | "scala" + | "hs" + | "ml" + | "ex" + | "exs" + | "clj" + | "lua" + | "sh" + | "bash" + | "zsh" + | "fish" + | "ps1" + | "toml" + | "yaml" + | "yml" + | "json" + | "md" + | "html" + | "css" + | "scss" + | "less" + | "sql" + | "graphql" + | "proto" + | "tf" + | "hcl" + | "nix" + | "r" + | "jl" + | "dart" + | "vue" + | "svelte" ) } @@ -202,7 +244,10 @@ mod tests { "@timmy help", ); let output = result.unwrap(); - assert!(output.contains("loc"), "help should list loc command: {output}"); + assert!( + output.contains("loc"), + "help should list loc command: {output}" + ); } #[test] @@ -220,7 +265,10 @@ mod tests { ); // At most 10 entries (numbered lines "1." 
through "10.") let count = output.lines().filter(|l| l.contains(". `")).count(); - assert!(count <= 10, "default should return at most 10 files, got {count}"); + assert!( + count <= 10, + "default should return at most 10 files, got {count}" + ); } #[test] @@ -233,7 +281,10 @@ mod tests { let ctx = make_ctx(&agents, &ambient_rooms, repo_root, "5"); let output = handle_loc(&ctx).unwrap(); let count = output.lines().filter(|l| l.contains(". `")).count(); - assert!(count <= 5, "loc 5 should return at most 5 files, got {count}"); + assert!( + count <= 5, + "loc 5 should return at most 5 files, got {count}" + ); } #[test] @@ -246,7 +297,10 @@ mod tests { let ctx = make_ctx(&agents, &ambient_rooms, repo_root, "20"); let output = handle_loc(&ctx).unwrap(); let count = output.lines().filter(|l| l.contains(". `")).count(); - assert!(count <= 20, "loc 20 should return at most 20 files, got {count}"); + assert!( + count <= 20, + "loc 20 should return at most 20 files, got {count}" + ); } #[test] diff --git a/server/src/chat/commands/overview.rs b/server/src/chat/commands/overview.rs index 5d427abb..f16a7911 100644 --- a/server/src/chat/commands/overview.rs +++ b/server/src/chat/commands/overview.rs @@ -110,7 +110,9 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> { // Try content store first. for id in crate::db::all_content_ids() { let file_num = id.split('_').next().unwrap_or(""); - if file_num == num_str && let Some(c) = crate::db::read_content(&id) { + if file_num == num_str + && let Some(c) = crate::db::read_content(&id) + { return crate::io::story_metadata::parse_front_matter(&c) .ok() .and_then(|m| m.name); @@ -119,7 +121,12 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> { // Fallback: filesystem scan. 
let stages = [ - "1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived", + "1_backlog", + "2_current", + "3_qa", + "4_merge", + "5_done", + "6_archived", ]; for stage in &stages { let dir = root.join(".huskies").join("work").join(stage); diff --git a/server/src/chat/commands/run_tests.rs b/server/src/chat/commands/run_tests.rs index fb723e59..4a33fc35 100644 --- a/server/src/chat/commands/run_tests.rs +++ b/server/src/chat/commands/run_tests.rs @@ -86,9 +86,7 @@ pub(super) fn handle_test(ctx: &CommandContext) -> Option<String> { let mut result = format!("**Test: {status}**\n\n"); if tests_passed > 0 || tests_failed > 0 { - result.push_str(&format!( - "{tests_passed} passed, {tests_failed} failed\n\n" - )); + result.push_str(&format!("{tests_passed} passed, {tests_failed} failed\n\n")); } result.push_str(&format!("```\n{truncated}\n```")); @@ -128,7 +126,11 @@ fn parse_test_counts(output: &str) -> (u64, u64) { fn extract_count(line: &str, label: &str) -> Option<u64> { let pos = line.find(label)?; let before = line[..pos].trim_end(); - let num_str: String = before.chars().rev().take_while(|c| c.is_ascii_digit()).collect(); + let num_str: String = before + .chars() + .rev() + .take_while(|c| c.is_ascii_digit()) + .collect(); if num_str.is_empty() { return None; } @@ -250,10 +252,7 @@ mod tests { #[test] fn test_command_works_via_dispatch() { let dir = tempfile::tempdir().unwrap(); - write_script( - dir.path(), - "#!/usr/bin/env bash\necho 'ok'\nexit 0\n", - ); + write_script(dir.path(), "#!/usr/bin/env bash\necho 'ok'\nexit 0\n"); let agents = test_agents(); let ambient = test_ambient(); let room_id = "!test:example.com".to_string(); @@ -317,8 +316,14 @@ mod tests { let ambient = test_ambient(); let ctx = make_ctx(&agents, &ambient, dir.path(), ""); let output = handle_test(&ctx).unwrap(); - assert!(output.contains("PASS"), "no-arg should use project root: {output}"); - assert!(output.contains('7'), "should show count from project root script: {output}"); + assert!( + 
output.contains("PASS"), + "no-arg should use project root: {output}" + ); + assert!( + output.contains('7'), + "should show count from project root script: {output}" + ); } #[test] @@ -329,8 +334,14 @@ mod tests { let ambient = test_ambient(); let ctx = make_ctx(&agents, &ambient, dir.path(), "541"); let output = handle_test(&ctx).unwrap(); - assert!(output.contains("PASS"), "should run tests in worktree: {output}"); - assert!(output.contains('2'), "should show count from worktree script: {output}"); + assert!( + output.contains("PASS"), + "should run tests in worktree: {output}" + ); + assert!( + output.contains('2'), + "should show count from worktree script: {output}" + ); } #[test] @@ -382,6 +393,9 @@ mod tests { "run_tests with story number must respond via dispatch" ); let output = result.unwrap(); - assert!(output.contains("PASS"), "should PASS for valid worktree: {output}"); + assert!( + output.contains("PASS"), + "should PASS for valid worktree: {output}" + ); } } diff --git a/server/src/chat/commands/setup.rs b/server/src/chat/commands/setup.rs index 31f89080..99d28087 100644 --- a/server/src/chat/commands/setup.rs +++ b/server/src/chat/commands/setup.rs @@ -12,7 +12,7 @@ use super::CommandContext; use crate::http::mcp::wizard_tools::{ generation_hint, is_script_step, step_output_path, write_if_missing, }; -use crate::io::wizard::{format_wizard_state, StepStatus, WizardState}; +use crate::io::wizard::{StepStatus, WizardState, format_wizard_state}; pub(super) fn handle_setup(ctx: &CommandContext) -> Option<String> { let sub = ctx.args.trim().to_ascii_lowercase(); @@ -84,17 +84,16 @@ fn wizard_confirm_reply(ctx: &CommandContext) -> String { let content = state.steps[idx].content.clone(); // Write content to disk (only if a file path exists and the file is absent). 
- let write_msg = - if let (Some(c), Some(ref path)) = (&content, step_output_path(root, step)) { - let executable = is_script_step(step); - match write_if_missing(path, c, executable) { - Ok(true) => format!(" File written: `{}`.", path.display()), - Ok(false) => format!(" File `{}` already exists — skipped.", path.display()), - Err(e) => return format!("Error: {e}"), - } - } else { - String::new() - }; + let write_msg = if let (Some(c), Some(ref path)) = (&content, step_output_path(root, step)) { + let executable = is_script_step(step); + match write_if_missing(path, c, executable) { + Ok(true) => format!(" File written: `{}`.", path.display()), + Ok(false) => format!(" File `{}` already exists — skipped.", path.display()), + Err(e) => return format!("Error: {e}"), + } + } else { + String::new() + }; if let Err(e) = state.confirm_step(step) { return format!("Cannot confirm step: {e}"); @@ -140,10 +139,7 @@ fn wizard_skip_reply(ctx: &CommandContext) -> String { } if state.completed { - format!( - "Step '{}' skipped. Setup wizard complete!", - step.label() - ) + format!("Step '{}' skipped. 
Setup wizard complete!", step.label()) } else { let next = &state.steps[state.current_step_index()]; format!( diff --git a/server/src/chat/commands/show.rs b/server/src/chat/commands/show.rs index ac328c22..a3d6e402 100644 --- a/server/src/chat/commands/show.rs +++ b/server/src/chat/commands/show.rs @@ -78,9 +78,16 @@ mod tests { #[test] fn show_command_appears_in_help() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy help", + ); let output = result.unwrap(); - assert!(output.contains("show"), "help should list show command: {output}"); + assert!( + output.contains("show"), + "help should list show command: {output}" + ); } #[test] @@ -167,7 +174,11 @@ mod tests { #[test] fn show_command_case_insensitive() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy SHOW 1"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy SHOW 1", + ); assert!(result.is_some(), "SHOW should match case-insensitively"); } } diff --git a/server/src/chat/commands/status.rs b/server/src/chat/commands/status.rs index 4a2cb92a..f3d00e48 100644 --- a/server/src/chat/commands/status.rs +++ b/server/src/chat/commands/status.rs @@ -119,14 +119,13 @@ fn build_status_from_items( .collect(); // Read token usage once for all stories to avoid repeated file I/O. 
- let cost_by_story: HashMap = - crate::agents::token_usage::read_all(project_root) - .unwrap_or_default() - .into_iter() - .fold(HashMap::new(), |mut map, r| { - *map.entry(r.story_id).or_insert(0.0) += r.usage.total_cost_usd; - map - }); + let cost_by_story: HashMap = crate::agents::token_usage::read_all(project_root) + .unwrap_or_default() + .into_iter() + .fold(HashMap::new(), |mut map, r| { + *map.entry(r.story_id).or_insert(0.0) += r.usage.total_cost_usd; + map + }); let config = ProjectConfig::load(project_root).ok(); @@ -165,10 +164,8 @@ fn build_status_from_items( } // Blocked items: Archived { reason: Blocked } shown with 🔴 indicator. - let mut blocked_items: Vec<&PipelineItem> = items - .iter() - .filter(|i| i.stage.is_blocked()) - .collect(); + let mut blocked_items: Vec<&PipelineItem> = + items.iter().filter(|i| i.stage.is_blocked()).collect(); blocked_items.sort_by(|a, b| a.story_id.0.cmp(&b.story_id.0)); if !blocked_items.is_empty() { out.push_str(&format!("**Blocked** ({})\n", blocked_items.len())); @@ -294,13 +291,21 @@ mod tests { #[test] fn status_command_matches() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy status"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy status", + ); assert!(result.is_some(), "status command should match"); } #[test] fn status_command_returns_pipeline_text() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy status"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + "@timmy:homeserver.local", + "@timmy status", + ); let output = result.unwrap(); assert!( output.contains("Pipeline Status"), @@ -310,7 +315,11 @@ mod tests { #[test] fn status_command_case_insensitive() { - let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy STATUS"); + let result = super::super::tests::try_cmd_addressed( + "Timmy", + 
"@timmy:homeserver.local", + "@timmy STATUS", + ); assert!(result.is_some(), "STATUS should match case-insensitively"); } @@ -318,7 +327,10 @@ mod tests { #[test] fn short_label_extracts_number_and_name() { - let label = story_short_label("293_story_register_all_bot_commands", Some("Register all bot commands")); + let label = story_short_label( + "293_story_register_all_bot_commands", + Some("Register all bot commands"), + ); assert_eq!(label, "293 [story] — Register all bot commands"); } @@ -336,7 +348,10 @@ mod tests { #[test] fn short_label_does_not_include_underscore_slug() { - let label = story_short_label("293_story_register_all_bot_commands_in_the_command_registry", Some("Register all bot commands")); + let label = story_short_label( + "293_story_register_all_bot_commands_in_the_command_registry", + Some("Register all bot commands"), + ); assert!( !label.contains("story_register"), "label should not contain the slug portion: {label}" @@ -345,19 +360,28 @@ mod tests { #[test] fn short_label_shows_bug_type() { - let label = story_short_label("375_bug_default_project_toml", Some("Default project.toml issue")); + let label = story_short_label( + "375_bug_default_project_toml", + Some("Default project.toml issue"), + ); assert_eq!(label, "375 [bug] — Default project.toml issue"); } #[test] fn short_label_shows_spike_type() { - let label = story_short_label("61_spike_filesystem_watcher_architecture", Some("Filesystem watcher architecture")); + let label = story_short_label( + "61_spike_filesystem_watcher_architecture", + Some("Filesystem watcher architecture"), + ); assert_eq!(label, "61 [spike] — Filesystem watcher architecture"); } #[test] fn short_label_shows_refactor_type() { - let label = story_short_label("260_refactor_upgrade_libsqlite3_sys", Some("Upgrade libsqlite3-sys")); + let label = story_short_label( + "260_refactor_upgrade_libsqlite3_sys", + Some("Upgrade libsqlite3-sys"), + ); assert_eq!(label, "260 [refactor] — Upgrade libsqlite3-sys"); } @@ 
-506,7 +530,12 @@ mod tests { // Story 10 depends on story 999, which is NOT in all_items (treated as met) // OR present in backlog (unmet). Let's add dep 999 in Backlog stage (unmet). let items = vec![ - make_item_with_deps("10_story_waiting", "Waiting Story", Stage::Coding, vec![999]), + make_item_with_deps( + "10_story_waiting", + "Waiting Story", + Stage::Coding, + vec![999], + ), make_item("999_story_dep", "Dep Story", Stage::Backlog), ]; @@ -526,11 +555,20 @@ mod tests { // Dep 999 is in Done stage — met. let items = vec![ - make_item_with_deps("10_story_unblocked", "Unblocked Story", Stage::Coding, vec![999]), - make_item("999_story_dep", "Dep Story", Stage::Done { - merged_at: Utc::now(), - merge_commit: crate::pipeline_state::GitSha("abc123".to_string()), - }), + make_item_with_deps( + "10_story_unblocked", + "Unblocked Story", + Stage::Coding, + vec![999], + ), + make_item( + "999_story_dep", + "Dep Story", + Stage::Done { + merged_at: Utc::now(), + merge_commit: crate::pipeline_state::GitSha("abc123".to_string()), + }, + ), ]; let agents = AgentPool::new_test(3000); @@ -678,8 +716,12 @@ mod tests { // Must appear under Done, not Backlog. 
let done_pos = output.find("**Done**").expect("Done section must exist"); - let backlog_pos = output.find("**Backlog**").expect("Backlog section must exist"); - let story_pos = output.find("503 [story]").expect("story must appear in output"); + let backlog_pos = output + .find("**Backlog**") + .expect("Backlog section must exist"); + let story_pos = output + .find("503 [story]") + .expect("story must appear in output"); assert!( story_pos > done_pos, diff --git a/server/src/chat/commands/triage.rs b/server/src/chat/commands/triage.rs index 4b12f2f8..69adc065 100644 --- a/server/src/chat/commands/triage.rs +++ b/server/src/chat/commands/triage.rs @@ -33,17 +33,13 @@ pub(super) fn handle_triage(ctx: &CommandContext) -> Option<String> { match find_story_by_number(num_str) { Some((story_id, item)) => Some(build_triage_dump(ctx, &story_id, &item, num_str)), - None => Some(format!( - "Story **{num_str}** not found in the pipeline." - )), + None => Some(format!("Story **{num_str}** not found in the pipeline.")), } } /// Find a pipeline item whose numeric prefix matches `num_str` by querying the /// CRDT state. Returns `(story_id, PipelineItem)` for the first match. 
-fn find_story_by_number( - num_str: &str, -) -> Option<(String, crate::pipeline_state::PipelineItem)> { +fn find_story_by_number(num_str: &str) -> Option<(String, crate::pipeline_state::PipelineItem)> { let items = crate::pipeline_state::read_all_typed(); for item in items { let file_num = item @@ -74,7 +70,10 @@ fn build_triage_dump( }; let meta = crate::io::story_metadata::parse_front_matter(&contents).ok(); - let name = meta.as_ref().and_then(|m| m.name.as_deref()).unwrap_or("(unnamed)"); + let name = meta + .as_ref() + .and_then(|m| m.name.as_deref()) + .unwrap_or("(unnamed)"); let mut out = String::new(); @@ -147,10 +146,7 @@ fn build_triage_dump( out.push_str(&format!("**Branch:** `{branch}`\n\n")); // ---- git diff --stat ---- - let diff_stat = run_git( - &wt_path, - &["diff", "--stat", "master...HEAD"], - ); + let diff_stat = run_git(&wt_path, &["diff", "--stat", "master...HEAD"]); if !diff_stat.is_empty() { out.push_str("**Diff stat (vs master):**\n```\n"); out.push_str(&diff_stat); @@ -162,12 +158,7 @@ fn build_triage_dump( // ---- Last 5 commits on feature branch ---- let log = run_git( &wt_path, - &[ - "log", - "master..HEAD", - "--pretty=format:%h %s", - "-5", - ], + &["log", "master..HEAD", "--pretty=format:%h %s", "-5"], ); if !log.is_empty() { out.push_str("**Recent commits (branch only):**\n```\n"); @@ -192,10 +183,15 @@ fn parse_acceptance_criteria(contents: &str) -> Vec<(bool, String)> { .lines() .filter_map(|line| { let trimmed = line.trim(); - if let Some(text) = trimmed.strip_prefix("- [x] ").or_else(|| trimmed.strip_prefix("- [X] ")) { + if let Some(text) = trimmed + .strip_prefix("- [x] ") + .or_else(|| trimmed.strip_prefix("- [X] ")) + { Some((true, text.to_string())) } else { - trimmed.strip_prefix("- [ ] ").map(|text| (false, text.to_string())) + trimmed + .strip_prefix("- [ ] ") + .map(|text| (false, text.to_string())) } }) .collect() @@ -248,7 +244,10 @@ mod tests { #[test] fn whatsup_command_is_not_registered() { let found = 
super::super::commands().iter().any(|c| c.name == "whatsup"); - assert!(!found, "whatsup command must not be in the registry (renamed to status)"); + assert!( + !found, + "whatsup command must not be in the registry (renamed to status)" + ); } #[test] @@ -340,7 +339,10 @@ mod tests { "---\nname: Backlog Item\n---\n", ); let output = status_triage_cmd(tmp.path(), "9901").unwrap(); - assert!(output.contains("9901"), "should show story number: {output}"); + assert!( + output.contains("9901"), + "should show story number: {output}" + ); assert!( output.contains("Backlog Item"), "should show story name: {output}" @@ -361,7 +363,10 @@ mod tests { "---\nname: QA Item\n---\n", ); let output = status_triage_cmd(tmp.path(), "9902").unwrap(); - assert!(output.contains("9902"), "should show story number: {output}"); + assert!( + output.contains("9902"), + "should show story number: {output}" + ); assert!( output.contains("QA Item"), "should show story name: {output}" @@ -439,7 +444,10 @@ mod tests { output.contains("depends_on") || output.contains("#477"), "should show depends_on field: {output}" ); - assert!(output.contains("478"), "should list all dependency numbers: {output}"); + assert!( + output.contains("478"), + "should list all dependency numbers: {output}" + ); } #[test] @@ -459,7 +467,6 @@ mod tests { ); } - // -- parse_acceptance_criteria ----------------------------------------- #[test] @@ -479,5 +486,4 @@ mod tests { let result = parse_acceptance_criteria(input); assert!(result.is_empty()); } - } diff --git a/server/src/chat/commands/unblock.rs b/server/src/chat/commands/unblock.rs index 8082652a..a07d9588 100644 --- a/server/src/chat/commands/unblock.rs +++ b/server/src/chat/commands/unblock.rs @@ -5,7 +5,10 @@ //! and returns a confirmation. 
use super::CommandContext; -use crate::io::story_metadata::{clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter, set_front_matter_field}; +use crate::io::story_metadata::{ + clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter, + set_front_matter_field, +}; use std::path::Path; /// Handle the `unblock` command. @@ -37,9 +40,7 @@ pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> Stri match crate::chat::lookup::find_story_by_number(project_root, story_number) { Some(found) => found, None => { - return format!( - "No story, bug, or spike with number **{story_number}** found." - ); + return format!("No story, bug, or spike with number **{story_number}** found."); } }; @@ -71,9 +72,7 @@ fn unblock_by_story_id(story_id: &str) -> String { let has_merge_failure = meta.merge_failure.is_some(); if !has_blocked && !has_merge_failure { - return format!( - "**{story_name}** ({story_id}) is not blocked. Nothing to unblock." - ); + return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock."); } let mut updated = contents; @@ -94,9 +93,16 @@ fn unblock_by_story_id(story_id: &str) -> String { crate::db::write_item_with_content(story_id, &stage, &updated); let mut cleared = Vec::new(); - if has_blocked { cleared.push("blocked"); } - if has_merge_failure { cleared.push("merge_failure"); } - format!("Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", cleared.join(", ")) + if has_blocked { + cleared.push("blocked"); + } + if has_merge_failure { + cleared.push("merge_failure"); + } + format!( + "Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", + cleared.join(", ") + ) } /// Core unblock logic: reset blocked state for a known story file path. 
@@ -121,9 +127,7 @@ pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String { let has_merge_failure = meta.merge_failure.is_some(); if !has_blocked && !has_merge_failure { - return format!( - "**{story_name}** ({story_id}) is not blocked. Nothing to unblock." - ); + return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock."); } // Clear the blocked flag if present. @@ -147,9 +151,16 @@ pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String { } let mut cleared = Vec::new(); - if has_blocked { cleared.push("blocked"); } - if has_merge_failure { cleared.push("merge_failure"); } - format!("Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", cleared.join(", ")) + if has_blocked { + cleared.push("blocked"); + } + if has_merge_failure { + cleared.push("merge_failure"); + } + format!( + "Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", + cleared.join(", ") + ) } // --------------------------------------------------------------------------- @@ -276,7 +287,8 @@ mod tests { let contents = crate::db::read_content("9903_story_stuck") .or_else(|| { std::fs::read_to_string( - tmp.path().join(".huskies/work/2_current/9903_story_stuck.md"), + tmp.path() + .join(".huskies/work/2_current/9903_story_stuck.md"), ) .ok() }) diff --git a/server/src/chat/commands/unreleased.rs b/server/src/chat/commands/unreleased.rs index 62b7820d..24ae1d18 100644 --- a/server/src/chat/commands/unreleased.rs +++ b/server/src/chat/commands/unreleased.rs @@ -17,9 +17,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option { if commits.is_empty() { let msg = match &tag { - Some(t) => format!( - "No unreleased stories since the last release tag **{t}**." 
- ), + Some(t) => format!("No unreleased stories since the last release tag **{t}**."), None => "No release tags found and no story merge commits on master.".to_string(), }; return Some(msg); } @@ -36,9 +34,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option<String> { if stories.is_empty() { let msg = match &tag { - Some(t) => format!( - "No unreleased stories since the last release tag **{t}**." - ), + Some(t) => format!("No unreleased stories since the last release tag **{t}**."), None => "No release tags found and no story merge commits on master.".to_string(), }; return Some(msg); } @@ -50,8 +46,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option<String> { None => "**Unreleased stories (no prior release tag):**\n\n".to_string(), }; for (num, slug) in &stories { - let name = find_story_name(root, &num.to_string()) - .unwrap_or_else(|| slug_to_name(slug)); + let name = find_story_name(root, &num.to_string()).unwrap_or_else(|| slug_to_name(slug)); out.push_str(&format!("- **{num}** — {name}\n")); } Some(out) @@ -79,10 +74,7 @@ fn find_last_release_tag(root: &std::path::Path) -> Option<String> { /// Return the subjects of all `huskies: merge …` commits reachable from HEAD /// but not from `since_tag` (or all commits when `since_tag` is `None`). -fn list_merge_commits_since( - root: &std::path::Path, - since_tag: Option<&str>, -) -> Vec<String> { +fn list_merge_commits_since(root: &std::path::Path, since_tag: Option<&str>) -> Vec<String> { use std::process::Command; let range = match since_tag { @@ -153,7 +145,9 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> { // Try content store first. 
for id in crate::db::all_content_ids() { let file_num = id.split('_').next().unwrap_or(""); - if file_num == num_str && let Some(c) = crate::db::read_content(&id) { + if file_num == num_str + && let Some(c) = crate::db::read_content(&id) + { return crate::io::story_metadata::parse_front_matter(&c) .ok() .and_then(|m| m.name); @@ -162,7 +156,12 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option { // Fallback: filesystem scan. const STAGES: &[&str] = &[ - "1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived", + "1_backlog", + "2_current", + "3_qa", + "4_merge", + "5_done", + "6_archived", ]; for stage in STAGES { let dir = root.join(".huskies").join("work").join(stage); @@ -225,7 +224,9 @@ mod tests { #[test] fn unreleased_command_is_registered() { - let found = super::super::commands().iter().any(|c| c.name == "unreleased"); + let found = super::super::commands() + .iter() + .any(|c| c.name == "unreleased"); assert!(found, "unreleased command must be in the registry"); } @@ -249,7 +250,10 @@ mod tests { let tmp = tempfile::TempDir::new().unwrap(); let output = unreleased_cmd_with_root(tmp.path()).unwrap(); // Should return some message (not panic), either about no tags or no commits. - assert!(!output.is_empty(), "should return a non-empty message: {output}"); + assert!( + !output.is_empty(), + "should return a non-empty message: {output}" + ); } #[test] @@ -261,7 +265,10 @@ mod tests { let output = unreleased_cmd_with_root(repo_root).unwrap(); // The response should mention "unreleased" or "no unreleased" — just make // sure it's non-empty and doesn't panic. 
- assert!(!output.is_empty(), "should return a non-empty message: {output}"); + assert!( + !output.is_empty(), + "should return a non-empty message: {output}" + ); } #[test] @@ -271,7 +278,10 @@ mod tests { "@timmy:homeserver.local", "@timmy UNRELEASED", ); - assert!(result.is_some(), "UNRELEASED should match case-insensitively"); + assert!( + result.is_some(), + "UNRELEASED should match case-insensitively" + ); } // -- parse_story_from_subject ------------------------------------------ diff --git a/server/src/chat/lookup.rs b/server/src/chat/lookup.rs index c74a316b..2da9a22f 100644 --- a/server/src/chat/lookup.rs +++ b/server/src/chat/lookup.rs @@ -80,7 +80,10 @@ mod tests { fn not_found_returns_none() { let tmp = tempfile::TempDir::new().unwrap(); let result = find_story_by_number(tmp.path(), "999"); - assert!(result.is_none(), "should return None when story is not found"); + assert!( + result.is_none(), + "should return None when story is not found" + ); } #[test] diff --git a/server/src/chat/mod.rs b/server/src/chat/mod.rs index 5a859752..6ffa63c1 100644 --- a/server/src/chat/mod.rs +++ b/server/src/chat/mod.rs @@ -6,11 +6,11 @@ pub mod commands; pub(crate) mod lookup; +#[cfg(test)] +pub(crate) mod test_helpers; pub mod timer; pub mod transport; pub mod util; -#[cfg(test)] -pub(crate) mod test_helpers; use async_trait::async_trait; @@ -96,8 +96,9 @@ mod tests { fn assert_transport() {} assert_transport::(); - let _: Arc = - Arc::new(crate::chat::transport::slack::SlackTransport::new("xoxb-test".to_string())); + let _: Arc = Arc::new( + crate::chat::transport::slack::SlackTransport::new("xoxb-test".to_string()), + ); } /// Verify that TwilioWhatsAppTransport satisfies the ChatTransport trait @@ -107,11 +108,12 @@ mod tests { fn assert_transport() {} assert_transport::(); - let _: Arc = - Arc::new(crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new( + let _: Arc = Arc::new( + crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new( 
"ACtest".to_string(), "authtoken".to_string(), "+14155551234".to_string(), - )); + ), + ); } } diff --git a/server/src/chat/timer.rs b/server/src/chat/timer.rs index 47692542..d62de925 100644 --- a/server/src/chat/timer.rs +++ b/server/src/chat/timer.rs @@ -161,10 +161,7 @@ pub(crate) async fn tick_once( } let remaining = store.list().len(); - crate::slog!( - "[timer] Tick: {} due, {remaining} remaining", - due.len() - ); + crate::slog!("[timer] Tick: {} due, {remaining} remaining", due.len()); for entry in due { crate::slog!("[timer] Timer fired for story {}", entry.story_id); @@ -287,9 +284,7 @@ pub fn spawn_rate_limit_auto_scheduler( } Ok(_) => {} Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => { - crate::slog!( - "[timer] Rate-limit auto-scheduler lagged, skipped {n} events" - ); + crate::slog!("[timer] Rate-limit auto-scheduler lagged, skipped {n} events"); } Err(tokio::sync::broadcast::error::RecvError::Closed) => { crate::slog!( @@ -398,44 +393,43 @@ pub async fn handle_timer_command( let story_id = match resolve_story_id(&story_number_or_id, project_root) { Some(id) => id, None => { - return format!( - "No story with number or ID **{story_number_or_id}** found." - ); + return format!("No story with number or ID **{story_number_or_id}** found."); } }; // The story must be in backlog or current. When the timer fires, // backlog stories are moved to current automatically. // Check CRDT state first, then fall back to filesystem. 
- let in_valid_stage = if let Ok(Some(item)) = crate::pipeline_state::read_typed(&story_id) { - use crate::pipeline_state::Stage; - matches!(item.stage, Stage::Backlog | Stage::Coding) - } else { - let work_dir = project_root.join(".huskies").join("work"); - work_dir.join("1_backlog").join(format!("{story_id}.md")).exists() - || work_dir.join("2_current").join(format!("{story_id}.md")).exists() - }; + let in_valid_stage = + if let Ok(Some(item)) = crate::pipeline_state::read_typed(&story_id) { + use crate::pipeline_state::Stage; + matches!(item.stage, Stage::Backlog | Stage::Coding) + } else { + let work_dir = project_root.join(".huskies").join("work"); + work_dir + .join("1_backlog") + .join(format!("{story_id}.md")) + .exists() + || work_dir + .join("2_current") + .join(format!("{story_id}.md")) + .exists() + }; if !in_valid_stage { - return format!( - "Story **{story_id}** is not in backlog or current." - ); + return format!("Story **{story_id}** is not in backlog or current."); } let scheduled_at = match next_occurrence_of_hhmm(&hhmm, tz_str) { Some(t) => t, None => { - return format!( - "Invalid time **{hhmm}**. Use `HH:MM` format (e.g. `14:30`)." - ); + return format!("Invalid time **{hhmm}**. Use `HH:MM` format (e.g. `14:30`)."); } }; match store.add(story_id.clone(), scheduled_at) { Ok(()) => { let (display_time, tz_label) = format_in_timezone(scheduled_at, tz_str); - format!( - "Timer set for **{story_id}** at **{display_time}** ({tz_label})." 
- ) + format!("Timer set for **{story_id}** at **{display_time}** ({tz_label}).") } Err(e) => format!("Failed to save timer: {e}"), } @@ -448,11 +442,7 @@ pub async fn handle_timer_command( let mut lines = vec!["**Pending timers:**".to_string()]; for t in &timers { let (display_time, _) = format_in_timezone(t.scheduled_at, tz_str); - lines.push(format!( - "- **{}** → {}", - t.story_id, - display_time - )); + lines.push(format!("- **{}** → {}", t.story_id, display_time)); } lines.join("\n") } @@ -465,13 +455,11 @@ pub async fn handle_timer_command( format!("No timer found for **{story_id}**.") } } - TimerCommand::BadArgs => { - "Usage:\n\ + TimerCommand::BadArgs => "Usage:\n\ - `timer ` — schedule deferred start\n\ - `timer list` — show pending timers\n\ - `timer cancel ` — remove a timer" - .to_string() - } + .to_string(), } } @@ -529,10 +517,7 @@ fn format_in_timezone(dt: DateTime, timezone: Option<&str>) -> (String, Str match timezone.and_then(|s| s.parse::().ok()) { Some(tz) => { let tz_time = dt.with_timezone(&tz); - ( - tz_time.format("%Y-%m-%d %H:%M").to_string(), - tz.to_string(), - ) + (tz_time.format("%Y-%m-%d %H:%M").to_string(), tz.to_string()) } None => { let local_time = dt.with_timezone(&Local); @@ -571,7 +556,12 @@ fn resolve_story_id(number_or_id: &str, project_root: &Path) -> Option { // --- DB-first lookup --- for id in crate::db::all_content_ids() { let file_num = id.split('_').next().unwrap_or(""); - if file_num == number_or_id && crate::pipeline_state::read_typed(&id).ok().flatten().is_some() { + if file_num == number_or_id + && crate::pipeline_state::read_typed(&id) + .ok() + .flatten() + .is_some() + { return Some(id); } } @@ -643,14 +633,20 @@ mod tests { #[test] fn next_occurrence_with_named_timezone_is_in_the_future() { let result = next_occurrence_of_hhmm("14:30", Some("Europe/London")).unwrap(); - assert!(result > Utc::now(), "next occurrence (Europe/London) must be in the future"); + assert!( + result > Utc::now(), + "next occurrence 
(Europe/London) must be in the future" + ); } #[test] fn next_occurrence_with_invalid_timezone_falls_back_to_local() { // An unrecognised timezone name falls back to chrono::Local (returns Some). let result = next_occurrence_of_hhmm("14:30", Some("Invalid/Zone")); - assert!(result.is_some(), "invalid timezone should fall back to local and return Some"); + assert!( + result.is_some(), + "invalid timezone should fall back to local and return Some" + ); } // ── extract_timer_command ─────────────────────────────────────────── @@ -679,11 +675,7 @@ mod tests { #[test] fn timer_cancel_story_id() { assert_eq!( - extract_timer_command( - "Timmy timer cancel 421_story_foo", - "Timmy", - "@bot:home" - ), + extract_timer_command("Timmy timer cancel 421_story_foo", "Timmy", "@bot:home"), Some(TimerCommand::Cancel { story_number_or_id: "421_story_foo".to_string() }) @@ -701,11 +693,7 @@ mod tests { #[test] fn timer_schedule_with_story_id() { assert_eq!( - extract_timer_command( - "Timmy timer 421_story_foo 14:30", - "Timmy", - "@bot:home" - ), + extract_timer_command("Timmy timer 421_story_foo 14:30", "Timmy", "@bot:home"), Some(TimerCommand::Schedule { story_number_or_id: "421_story_foo".to_string(), hhmm: "14:30".to_string(), @@ -727,11 +715,7 @@ mod tests { #[test] fn timer_schedule_missing_time_is_bad_args() { assert_eq!( - extract_timer_command( - "Timmy timer 421_story_foo", - "Timmy", - "@bot:home" - ), + extract_timer_command("Timmy timer 421_story_foo", "Timmy", "@bot:home"), Some(TimerCommand::BadArgs) ); } @@ -944,10 +928,7 @@ mod tests { dir.path(), ) .await; - assert!( - result.contains("No timer found"), - "unexpected: {result}" - ); + assert!(result.contains("No timer found"), "unexpected: {result}"); } #[tokio::test] @@ -1014,10 +995,7 @@ mod tests { dir.path(), ) .await; - assert!( - result.contains("Timer set for"), - "unexpected: {result}" - ); + assert!(result.contains("Timer set for"), "unexpected: {result}"); assert_eq!(store.list().len(), 1); } @@ -1111,7 
+1089,10 @@ mod tests { "story should be in the content store after timer fires" ); // Timer was consumed. - assert!(store.list().is_empty(), "fired timer should be removed from store"); + assert!( + store.list().is_empty(), + "fired timer should be removed from store" + ); } // ── AC4: tick_once integration test ───────────────────────────────── diff --git a/server/src/chat/transport/discord/commands.rs b/server/src/chat/transport/discord/commands.rs index 125beb7a..cc5c933d 100644 --- a/server/src/chat/transport/discord/commands.rs +++ b/server/src/chat/transport/discord/commands.rs @@ -6,9 +6,9 @@ use std::sync::{Arc, Mutex}; use tokio::sync::{Mutex as TokioMutex, oneshot}; use crate::agents::AgentPool; +use crate::chat::ChatTransport; use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; use crate::chat::util::is_permission_approval; -use crate::chat::ChatTransport; use crate::http::context::{PermissionDecision, PermissionForward}; use crate::slog; @@ -42,8 +42,7 @@ pub struct DiscordContext { /// Permission requests from the MCP `prompt_permission` tool arrive here. pub perm_rx: Arc>>, /// Pending permission replies keyed by channel ID. - pub pending_perm_replies: - Arc>>>, + pub pending_perm_replies: Arc>>>, /// Seconds before an unanswered permission prompt is auto-denied. 
pub permission_timeout_secs: u64, } @@ -135,16 +134,13 @@ pub(super) async fn handle_incoming_message( let total_ticks = (duration_secs as usize) / 2; for tick in 1..=total_ticks { tokio::time::sleep(interval).await; - let updated = - crate::chat::transport::matrix::htop::build_htop_message( - &agents, - (tick * 2) as u32, - duration_secs, - ); + let updated = crate::chat::transport::matrix::htop::build_htop_message( + &agents, + (tick * 2) as u32, + duration_secs, + ); let updated = markdown_to_discord(&updated); - if let Err(e) = - transport.edit_message(&ch, &msg_id, &updated, "").await - { + if let Err(e) = transport.edit_message(&ch, &msg_id, &updated, "").await { slog!("[discord] Failed to edit htop message: {e}"); break; } @@ -320,12 +316,7 @@ pub(super) async fn handle_incoming_message( } /// Forward a message to Claude Code and send the response back via Discord. -async fn handle_llm_message( - ctx: &DiscordContext, - channel: &str, - user: &str, - user_message: &str, -) { +async fn handle_llm_message(ctx: &DiscordContext, channel: &str, user: &str, user_message: &str) { use crate::chat::util::drain_complete_paragraphs; use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult}; use std::sync::atomic::{AtomicBool, Ordering}; @@ -334,9 +325,7 @@ async fn handle_llm_message( // Look up existing session ID for this channel. 
let resume_session_id: Option = { let guard = ctx.history.lock().await; - guard - .get(channel) - .and_then(|conv| conv.session_id.clone()) + guard.get(channel).and_then(|conv| conv.session_id.clone()) }; let bot_name = &ctx.bot_name; @@ -446,9 +435,7 @@ async fn handle_llm_message( let last_text = messages .iter() .rev() - .find(|m| { - m.role == crate::llm::types::Role::Assistant && !m.content.is_empty() - }) + .find(|m| m.role == crate::llm::types::Role::Assistant && !m.content.is_empty()) .map(|m| m.content.clone()) .unwrap_or_default(); if !last_text.is_empty() { diff --git a/server/src/chat/transport/discord/gateway.rs b/server/src/chat/transport/discord/gateway.rs index 0bab7227..3a3ad0dc 100644 --- a/server/src/chat/transport/discord/gateway.rs +++ b/server/src/chat/transport/discord/gateway.rs @@ -150,8 +150,7 @@ async fn run_gateway(ctx: Arc) -> Result<(), String> { .ok_or("Gateway closed before Hello")? .map_err(|e| format!("Gateway read error: {e}"))?; - let hello_payload: GatewayPayload = - parse_ws_message(&hello).ok_or("Failed to parse Hello")?; + let hello_payload: GatewayPayload = parse_ws_message(&hello).ok_or("Failed to parse Hello")?; if hello_payload.op != OP_HELLO { return Err(format!( @@ -164,8 +163,7 @@ async fn run_gateway(ctx: Arc) -> Result<(), String> { serde_json::from_value(hello_payload.d.ok_or("Hello missing data")?) 
.map_err(|e| format!("Failed to parse Hello data: {e}"))?; - let heartbeat_interval = - std::time::Duration::from_millis(hello_data.heartbeat_interval); + let heartbeat_interval = std::time::Duration::from_millis(hello_data.heartbeat_interval); slog!( "[discord] Heartbeat interval: {}ms", hello_data.heartbeat_interval @@ -258,19 +256,12 @@ async fn run_gateway(ctx: Arc) -> Result<(), String> { && let Ok(ready) = serde_json::from_value::(d) { bot_user_id = Some(ready.user.id.clone()); - slog!( - "[discord] READY — bot user ID: {}", - ready.user.id - ); + slog!("[discord] READY — bot user ID: {}", ready.user.id); } } "MESSAGE_CREATE" => { if let Some(d) = payload.d { - dispatch_message( - Arc::clone(&ctx), - d, - bot_user_id.clone(), - ); + dispatch_message(Arc::clone(&ctx), d, bot_user_id.clone()); } } _ => {} @@ -355,15 +346,11 @@ fn dispatch_message( // Check if the bot was mentioned, or if we respond to all messages in // configured channels (ambient mode). - let bot_mentioned = bot_user_id.as_ref().is_some_and(|bid| { - msg.mentions.iter().any(|m| m.id == *bid) - }); + let bot_mentioned = bot_user_id + .as_ref() + .is_some_and(|bid| msg.mentions.iter().any(|m| m.id == *bid)); - let in_ambient = ctx - .ambient_rooms - .lock() - .unwrap() - .contains(&msg.channel_id); + let in_ambient = ctx.ambient_rooms.lock().unwrap().contains(&msg.channel_id); if !bot_mentioned && !in_ambient { return; @@ -392,8 +379,7 @@ fn dispatch_message( msg.channel_id ); - commands::handle_incoming_message(&ctx, &msg.channel_id, &author.id, &content) - .await; + commands::handle_incoming_message(&ctx, &msg.channel_id, &author.id, &content).await; }); } @@ -417,8 +403,7 @@ mod tests { let json = r#"{"op": 10, "d": {"heartbeat_interval": 41250}}"#; let payload: GatewayPayload = serde_json::from_str(json).unwrap(); assert_eq!(payload.op, OP_HELLO); - let hello: HelloData = - serde_json::from_value(payload.d.unwrap()).unwrap(); + let hello: HelloData = 
serde_json::from_value(payload.d.unwrap()).unwrap(); assert_eq!(hello.heartbeat_interval, 41250); } diff --git a/server/src/chat/transport/discord/meta.rs b/server/src/chat/transport/discord/meta.rs index c72668d0..84939796 100644 --- a/server/src/chat/transport/discord/meta.rs +++ b/server/src/chat/transport/discord/meta.rs @@ -181,8 +181,7 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); let result = transport .send_message("123456", "hello", "
hello
") @@ -202,8 +201,7 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); let result = transport.send_message("bad", "hello", "").await; assert!(result.is_err()); @@ -220,8 +218,7 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); let result = transport .edit_message("123456", "999888777", "updated", "") @@ -240,12 +237,9 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); - let result = transport - .edit_message("123456", "bad", "updated", "") - .await; + let result = transport.edit_message("123456", "bad", "updated", "").await; assert!(result.is_err()); assert!(result.unwrap_err().contains("404")); } @@ -259,8 +253,7 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); assert!(transport.send_typing("123456", true).await.is_ok()); } @@ -281,8 +274,7 @@ mod tests { .create_async() .await; - let transport = - DiscordTransport::with_api_base("test-token".to_string(), server.url()); + let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url()); let result = transport.send_message("123456", "hello", "").await; assert!(result.is_err()); @@ -296,7 +288,6 @@ mod tests { fn assert_transport() {} assert_transport::(); - let _: Arc = - Arc::new(DiscordTransport::new("test-token".to_string())); + let _: Arc = Arc::new(DiscordTransport::new("test-token".to_string())); } } diff 
--git a/server/src/chat/transport/matrix/assign.rs b/server/src/chat/transport/matrix/assign.rs index fc8e25b2..f4008f51 100644 --- a/server/src/chat/transport/matrix/assign.rs +++ b/server/src/chat/transport/matrix/assign.rs @@ -17,10 +17,7 @@ use std::path::Path; #[derive(Debug, PartialEq)] pub enum AssignCommand { /// Assign the story with this number to the given model. - Assign { - story_number: String, - model: String, - }, + Assign { story_number: String, model: String }, /// The user typed `assign` but without valid arguments. BadArgs, } @@ -96,9 +93,7 @@ pub async fn handle_assign( match crate::chat::lookup::find_story_by_number(project_root, story_number) { Some(found) => found, None => { - return format!( - "No story, bug, or spike with number **{story_number}** found." - ); + return format!("No story, bug, or spike with number **{story_number}** found."); } }; @@ -282,11 +277,8 @@ mod tests { fn extract_assign_command_multibyte_prefix_no_panic() { // "xxxx⏺ assign 42 opus" — ⏺ (U+23FA) is 3 bytes, starting at byte 4. // "@timmy" has len 6 so text[..6] lands inside ⏺ — panics without the fix. - let cmd = extract_assign_command( - "xxxx\u{23FA} assign 42 opus", - "Timmy", - "@timmy:home.local", - ); + let cmd = + extract_assign_command("xxxx\u{23FA} assign 42 opus", "Timmy", "@timmy:home.local"); assert_eq!(cmd, None); } @@ -453,7 +445,8 @@ mod tests { ); // Should indicate a restart occurred (not just "will be used when starts") assert!( - response.to_lowercase().contains("stop") || response.to_lowercase().contains("reassign"), + response.to_lowercase().contains("stop") + || response.to_lowercase().contains("reassign"), "response should indicate stop/reassign: {response}" ); } diff --git a/server/src/chat/transport/matrix/bot/context.rs b/server/src/chat/transport/matrix/bot/context.rs index e3cc745c..52bae64d 100644 --- a/server/src/chat/transport/matrix/bot/context.rs +++ b/server/src/chat/transport/matrix/bot/context.rs @@ -1,7 +1,7 @@ //! 
Matrix bot context — shared state for the Matrix bot (rooms, history, permissions). use crate::agents::AgentPool; -use crate::chat::timer::TimerStore; use crate::chat::ChatTransport; +use crate::chat::timer::TimerStore; use crate::http::context::{PermissionDecision, PermissionForward}; use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId}; use std::collections::{HashMap, HashSet}; diff --git a/server/src/chat/transport/matrix/bot/format.rs b/server/src/chat/transport/matrix/bot/format.rs index a48c6c59..44fbf4ab 100644 --- a/server/src/chat/transport/matrix/bot/format.rs +++ b/server/src/chat/transport/matrix/bot/format.rs @@ -104,7 +104,10 @@ mod tests { #[test] fn startup_announcement_uses_configured_display_name_not_hardcoded() { assert_eq!(format_startup_announcement("HAL"), "HAL is online."); - assert_eq!(format_startup_announcement("Assistant"), "Assistant is online."); + assert_eq!( + format_startup_announcement("Assistant"), + "Assistant is online." + ); } #[test] diff --git a/server/src/chat/transport/matrix/bot/history.rs b/server/src/chat/transport/matrix/bot/history.rs index fa50de4a..034ef50d 100644 --- a/server/src/chat/transport/matrix/bot/history.rs +++ b/server/src/chat/transport/matrix/bot/history.rs @@ -71,11 +71,7 @@ pub fn load_history(project_root: &std::path::Path) -> HashMap() - .ok() - .map(|room_id| (room_id, v)) - }) + .filter_map(|(k, v)| k.parse::().ok().map(|room_id| (room_id, v))) .collect() } diff --git a/server/src/chat/transport/matrix/bot/mentions.rs b/server/src/chat/transport/matrix/bot/mentions.rs index 90db4704..83839b88 100644 --- a/server/src/chat/transport/matrix/bot/mentions.rs +++ b/server/src/chat/transport/matrix/bot/mentions.rs @@ -97,9 +97,7 @@ pub fn is_addressed_to_other(body: &str, bot_user_id: &OwnedUserId, bot_name: &s // Handles both "@localpart" and "@localpart:homeserver" forms. if let Some(rest) = lower.strip_prefix('@') { // Extract everything up to the first whitespace character. 
- let word_end = rest - .find(|c: char| c.is_whitespace()) - .unwrap_or(rest.len()); + let word_end = rest.find(|c: char| c.is_whitespace()).unwrap_or(rest.len()); let mention = &rest[..word_end]; // e.g. "sally" or "sally:example.com" // Strip the homeserver part to get just the localpart. diff --git a/server/src/chat/transport/matrix/bot/messages.rs b/server/src/chat/transport/matrix/bot/messages.rs index f715ae67..4c355f1c 100644 --- a/server/src/chat/transport/matrix/bot/messages.rs +++ b/server/src/chat/transport/matrix/bot/messages.rs @@ -82,9 +82,7 @@ pub(super) async fn on_room_message( // Always let "ambient on" through — it is the one command that must work // even when the bot is not mentioned and ambient mode is off, otherwise // there is no way to re-enable ambient mode without an @-mention. - let is_ambient_on = body - .to_ascii_lowercase() - .contains("ambient on"); + let is_ambient_on = body.to_ascii_lowercase().contains("ambient on"); if !is_addressed && !is_ambient && !is_ambient_on { slog!( @@ -97,7 +95,9 @@ pub(super) async fn on_room_message( // In ambient mode, ignore messages that are explicitly addressed to a // different entity (e.g. "sally: do X" or "@sally do X" when we are stu). // We still let through messages addressed to us and the "ambient on" command. - if is_ambient && !is_addressed && !is_ambient_on + if is_ambient + && !is_addressed + && !is_ambient_on && is_addressed_to_other(&body, &ctx.bot_user_id, &ctx.bot_name) { slog!( @@ -158,7 +158,10 @@ pub(super) async fn on_room_message( "Permission denied." 
}; let html = markdown_to_html(confirmation); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, confirmation, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, confirmation, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -182,9 +185,14 @@ pub(super) async fn on_room_message( ambient_rooms: &ctx.ambient_rooms, room_id: &room_id_str, }; - if let Some((response, response_html)) = super::super::commands::try_handle_command_with_html(&dispatch, &user_message) { + if let Some((response, response_html)) = + super::super::commands::try_handle_command_with_html(&dispatch, &user_message) + { slog!("[matrix-bot] Handled bot command from {sender}"); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &response_html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &response_html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -224,7 +232,10 @@ pub(super) async fn on_room_message( } }; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -272,9 +283,7 @@ pub(super) async fn on_room_message( ) { let response = match del_cmd { super::super::delete::DeleteCommand::Delete { story_number } => { - slog!( - "[matrix-bot] Handling delete command from {sender}: story {story_number}" - ); + slog!("[matrix-bot] Handling delete command from {sender}: story {story_number}"); super::super::delete::handle_delete( &ctx.bot_name, &story_number, @@ -288,7 +297,10 @@ pub(super) async fn on_room_message( } }; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, 
&response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -305,9 +317,7 @@ pub(super) async fn on_room_message( ) { let response = match rmtree_cmd { super::super::rmtree::RmtreeCommand::Rmtree { story_number } => { - slog!( - "[matrix-bot] Handling rmtree command from {sender}: story {story_number}" - ); + slog!("[matrix-bot] Handling rmtree command from {sender}: story {story_number}"); super::super::rmtree::handle_rmtree( &ctx.bot_name, &story_number, @@ -321,7 +331,10 @@ pub(super) async fn on_room_message( } }; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -361,7 +374,10 @@ pub(super) async fn on_room_message( } }; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -387,7 +403,10 @@ pub(super) async fn on_room_message( ) .await; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -408,19 +427,22 @@ pub(super) async fn on_room_message( // Acknowledge immediately — the rebuild may take a while or re-exec. 
let ack = "Rebuilding server… this may take a moment."; let ack_html = markdown_to_html(ack); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, ack, &ack_html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, ack, &ack_html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); } - let response = super::super::rebuild::handle_rebuild( - &ctx.bot_name, - &ctx.project_root, - &ctx.agents, - ) - .await; + let response = + super::super::rebuild::handle_rebuild(&ctx.bot_name, &ctx.project_root, &ctx.agents) + .await; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -443,7 +465,10 @@ pub(super) async fn on_room_message( ) .await; let html = markdown_to_html(&response); - if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await + if let Ok(msg_id) = ctx + .transport + .send_message(&room_id_str, &response, &html) + .await && let Ok(event_id) = msg_id.parse() { ctx.bot_sent_event_ids.lock().await.insert(event_id); @@ -470,9 +495,7 @@ pub(super) async fn handle_message( // flattening history into a text prefix. let resume_session_id: Option = { let guard = ctx.history.lock().await; - guard - .get(&room_id) - .and_then(|conv| conv.session_id.clone()) + guard.get(&room_id).and_then(|conv| conv.session_id.clone()) }; // The prompt is just the current message with sender attribution. 
@@ -501,7 +524,9 @@ pub(super) async fn handle_message( let post_task = tokio::spawn(async move { while let Some(chunk) = msg_rx.recv().await { let html = markdown_to_html(&chunk); - if let Ok(msg_id) = post_transport.send_message(&post_room_id, &chunk, &html).await + if let Ok(msg_id) = post_transport + .send_message(&post_room_id, &chunk, &html) + .await && let Ok(event_id) = msg_id.parse() { sent_ids_for_post.lock().await.insert(event_id); @@ -631,9 +656,7 @@ pub(super) async fn handle_message( Err(e) => { slog!("[matrix-bot] LLM error: {e}"); let err_msg = if let Some(url) = crate::llm::oauth::extract_login_url_from_error(&e) { - format!( - "Authentication required. [Click here to log in to Claude]({url})" - ) + format!("Authentication required. [Click here to log in to Claude]({url})") } else { format!("Error processing your request: {e}") }; @@ -654,7 +677,11 @@ pub(super) async fn handle_message( let conv = guard.entry(room_id).or_default(); // Store the session ID so the next turn uses --resume. - slog!("[matrix-bot] storing session_id: {:?} (was: {:?})", new_session_id, conv.session_id); + slog!( + "[matrix-bot] storing session_id: {:?} (was: {:?})", + new_session_id, + conv.session_id + ); if new_session_id.is_some() { conv.session_id = new_session_id; } @@ -713,7 +740,10 @@ mod tests { let err = "OAuth session expired or credentials missing. Please log in: http://localhost:3001/oauth/authorize"; let url = crate::llm::oauth::extract_login_url_from_error(err); assert!(url.is_some(), "should extract URL from OAuth error"); - let msg = format!("Authentication required. [Click here to log in to Claude]({})", url.unwrap()); + let msg = format!( + "Authentication required. 
[Click here to log in to Claude]({})", + url.unwrap() + ); assert!(msg.contains("http://localhost:3001/oauth/authorize")); assert!(msg.contains("[Click here to log in to Claude]")); } diff --git a/server/src/chat/transport/matrix/bot/run.rs b/server/src/chat/transport/matrix/bot/run.rs index 4de6711f..e64eec1f 100644 --- a/server/src/chat/transport/matrix/bot/run.rs +++ b/server/src/chat/transport/matrix/bot/run.rs @@ -1,12 +1,12 @@ //! Matrix bot run loop — connects to the homeserver and processes sync events. use crate::agents::AgentPool; use crate::slog; -use matrix_sdk::{Client, LoopCtrl, config::SyncSettings}; use matrix_sdk::ruma::OwnedRoomId; -use std::sync::atomic::{AtomicBool, AtomicU64, Ordering}; +use matrix_sdk::{Client, LoopCtrl, config::SyncSettings}; use std::collections::{HashMap, HashSet}; use std::path::PathBuf; use std::sync::Arc; +use std::sync::atomic::{AtomicBool, AtomicU64, Ordering}; use tokio::sync::Mutex as TokioMutex; use tokio::sync::{mpsc, watch}; @@ -73,7 +73,10 @@ pub async fn run_bot( .ok_or_else(|| "No user ID after login".to_string())? .to_owned(); - slog!("[matrix-bot] Logged in as {bot_user_id} (device: {})", login_response.device_id); + slog!( + "[matrix-bot] Logged in as {bot_user_id} (device: {})", + login_response.device_id + ); // Bootstrap cross-signing keys for E2EE verification support. // Pass the bot's password for UIA (User-Interactive Authentication) — @@ -81,9 +84,7 @@ pub async fn run_bot( { use matrix_sdk::ruma::api::client::uiaa; let password_auth = uiaa::AuthData::Password(uiaa::Password::new( - uiaa::UserIdentifier::UserIdOrLocalpart( - config.username.clone().unwrap_or_default(), - ), + uiaa::UserIdentifier::UserIdOrLocalpart(config.username.clone().unwrap_or_default()), config.password.clone().unwrap_or_default(), )); if let Err(e) = client @@ -171,11 +172,7 @@ pub async fn run_bot( ); // Restore persisted ambient rooms from config. 
- let persisted_ambient: HashSet<OwnedRoomId> = config - .ambient_rooms - .iter() - .cloned() - .collect(); + let persisted_ambient: HashSet<OwnedRoomId> = config.ambient_rooms.iter().cloned().collect(); if !persisted_ambient.is_empty() { slog!( "[matrix-bot] Restored ambient mode for {} room(s): {:?}", @@ -189,11 +186,13 @@ "whatsapp" => { if config.whatsapp_provider == "twilio" { slog!("[matrix-bot] Using WhatsApp/Twilio transport"); - Arc::new(crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new( - config.twilio_account_sid.clone().unwrap_or_default(), - config.twilio_auth_token.clone().unwrap_or_default(), - config.twilio_whatsapp_number.clone().unwrap_or_default(), - )) + Arc::new( + crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new( + config.twilio_account_sid.clone().unwrap_or_default(), + config.twilio_auth_token.clone().unwrap_or_default(), + config.twilio_whatsapp_number.clone().unwrap_or_default(), + ), + ) } else { slog!("[matrix-bot] Using WhatsApp/Meta transport"); Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new( @@ -208,7 +207,9 @@ } _ => { slog!("[matrix-bot] Using Matrix transport"); - Arc::new(super::super::transport_impl::MatrixTransport::new(client.clone())) + Arc::new(super::super::transport_impl::MatrixTransport::new( + client.clone(), + )) } }; @@ -222,10 +223,7 @@ project_root.join(".huskies").join("timers.json"), )); // Auto-schedule timers when an agent hits a hard rate limit. 
- crate::chat::timer::spawn_rate_limit_auto_scheduler( - Arc::clone(&timer_store), - watcher_rx_auto, - ); + crate::chat::timer::spawn_rate_limit_auto_scheduler(Arc::clone(&timer_store), watcher_rx_auto); let ctx = BotContext { bot_user_id, @@ -246,7 +244,9 @@ timer_store, }; - slog!("[matrix-bot] Cryptographic identity verification is always ON — commands from unencrypted rooms or unverified devices are rejected"); + slog!( "[matrix-bot] Cryptographic identity verification is always ON — commands from unencrypted rooms or unverified devices are rejected" ); // Register event handlers and inject shared context. client.add_event_handler_context(ctx); @@ -256,8 +256,7 @@ // Spawn the stage-transition notification listener before entering the // sync loop so it starts receiving watcher events immediately. - let notif_room_id_strings: Vec<String> = - notif_room_ids.iter().map(|r| r.to_string()).collect(); + let notif_room_id_strings: Vec<String> = notif_room_ids.iter().map(|r| r.to_string()).collect(); super::super::notifications::spawn_notification_listener( Arc::clone(&transport), move || notif_room_id_strings.clone(), @@ -269,8 +268,7 @@ // configured rooms when the server is about to stop (SIGINT/SIGTERM or rebuild). 
{ let shutdown_transport = Arc::clone(&transport); - let shutdown_rooms: Vec<String> = - announce_room_ids.iter().map(|r| r.to_string()).collect(); + let shutdown_rooms: Vec<String> = announce_room_ids.iter().map(|r| r.to_string()).collect(); let shutdown_bot_name = announce_bot_name.clone(); let mut rx = shutdown_rx; tokio::spawn(async move { @@ -400,8 +398,7 @@ mod tests { #[test] fn io_error_is_not_fatal() { let e: matrix_sdk::Error = - std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "connection refused") - .into(); + std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "connection refused").into(); assert!(!is_fatal_sync_error(&e)); } @@ -423,7 +420,11 @@ const MAX_BACKOFF_SECS: u64 = 300; let steps: Vec<u64> = std::iter::successors(Some(5u64), |&d| { let next = (d * 2).min(MAX_BACKOFF_SECS); - if next < MAX_BACKOFF_SECS { Some(next) } else { None } + if next < MAX_BACKOFF_SECS { + Some(next) + } else { + None + } }) .collect(); // First few steps: 5, 10, 20, 40, 80, 160 @@ -433,4 +434,3 @@ assert_eq!(steps[3], 40); } } - diff --git a/server/src/chat/transport/matrix/bot/verification.rs b/server/src/chat/transport/matrix/bot/verification.rs index aed7fcae..f59d4eec 100644 --- a/server/src/chat/transport/matrix/bot/verification.rs +++ b/server/src/chat/transport/matrix/bot/verification.rs @@ -84,8 +84,9 @@ } break; } - VerificationRequestState::Done - | VerificationRequestState::Cancelled(_) => break, + VerificationRequestState::Done | VerificationRequestState::Cancelled(_) => { + break; + } _ => {} } } @@ -100,10 +101,7 @@ /// Modern Element sends `m.key.verification.request` as an `m.room.message` /// event rather than a to-device event. We look for that message type and /// drive the same SAS flow as the to-device handler. 
-pub(super) async fn on_room_verification_request( - ev: OriginalSyncRoomMessageEvent, - client: Client, -) { +pub(super) async fn on_room_verification_request(ev: OriginalSyncRoomMessageEvent, client: Client) { // Only act on in-room verification request messages. if !matches!(ev.content.msgtype, MessageType::VerificationRequest(_)) { return; } @@ -152,8 +150,9 @@ } break; } - VerificationRequestState::Done - | VerificationRequestState::Cancelled(_) => break, + VerificationRequestState::Done | VerificationRequestState::Cancelled(_) => { + break; + } _ => {} } } diff --git a/server/src/chat/transport/matrix/config.rs b/server/src/chat/transport/matrix/config.rs index 3210ac30..af1dd65e 100644 --- a/server/src/chat/transport/matrix/config.rs +++ b/server/src/chat/transport/matrix/config.rs @@ -77,7 +77,6 @@ pub struct BotConfig { // ── WhatsApp Business API fields ───────────────────────────────── // These are only required when `transport = "whatsapp"`. - /// WhatsApp Business phone number ID from the Meta dashboard. #[serde(default)] pub whatsapp_phone_number_id: Option<String>, @@ -105,7 +104,6 @@ pub struct BotConfig { // ── Twilio WhatsApp fields ───────────────────────────────────────── // Only required when `transport = "whatsapp"` and `whatsapp_provider = "twilio"`. - /// Twilio Account SID (starts with `AC`). #[serde(default)] pub twilio_account_sid: Option<String>, @@ -126,7 +124,6 @@ pub struct BotConfig { // ── Slack Bot API fields ───────────────────────────────────────── // These are only required when `transport = "slack"`. - /// Slack Bot User OAuth Token (starts with `xoxb-`). #[serde(default)] pub slack_bot_token: Option<String>, @@ -139,7 +136,6 @@ pub struct BotConfig { // ── Discord Bot API fields ────────────────────────────────────── // These are only required when `transport = "discord"`. - /// Discord bot token from the Discord Developer Portal. 
#[serde(default)] pub discord_bot_token: Option, @@ -189,21 +185,33 @@ impl BotConfig { if config.transport == "whatsapp" { if config.whatsapp_provider == "twilio" { // Validate Twilio-specific fields. - if config.twilio_account_sid.as_ref().is_none_or(|s| s.is_empty()) { + if config + .twilio_account_sid + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: whatsapp_provider=\"twilio\" requires \ twilio_account_sid" ); return None; } - if config.twilio_auth_token.as_ref().is_none_or(|s| s.is_empty()) { + if config + .twilio_auth_token + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: whatsapp_provider=\"twilio\" requires \ twilio_auth_token" ); return None; } - if config.twilio_whatsapp_number.as_ref().is_none_or(|s| s.is_empty()) { + if config + .twilio_whatsapp_number + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: whatsapp_provider=\"twilio\" requires \ twilio_whatsapp_number" @@ -212,21 +220,33 @@ impl BotConfig { } } else { // Validate Meta (default) WhatsApp fields. 
- if config.whatsapp_phone_number_id.as_ref().is_none_or(|s| s.is_empty()) { + if config + .whatsapp_phone_number_id + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: transport=\"whatsapp\" requires \ whatsapp_phone_number_id" ); return None; } - if config.whatsapp_access_token.as_ref().is_none_or(|s| s.is_empty()) { + if config + .whatsapp_access_token + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: transport=\"whatsapp\" requires \ whatsapp_access_token" ); return None; } - if config.whatsapp_verify_token.as_ref().is_none_or(|s| s.is_empty()) { + if config + .whatsapp_verify_token + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: transport=\"whatsapp\" requires \ whatsapp_verify_token" @@ -243,7 +263,11 @@ impl BotConfig { ); return None; } - if config.slack_signing_secret.as_ref().is_none_or(|s| s.is_empty()) { + if config + .slack_signing_secret + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: transport=\"slack\" requires \ slack_signing_secret" @@ -259,7 +283,11 @@ impl BotConfig { } } else if config.transport == "discord" { // Validate Discord-specific fields. - if config.discord_bot_token.as_ref().is_none_or(|s| s.is_empty()) { + if config + .discord_bot_token + .as_ref() + .is_none_or(|s| s.is_empty()) + { eprintln!( "[bot] bot.toml: transport=\"discord\" requires \ discord_bot_token" @@ -276,21 +304,15 @@ impl BotConfig { } else { // Default transport is Matrix — validate Matrix-specific fields. 
if config.homeserver.as_ref().is_none_or(|s| s.is_empty()) { - eprintln!( - "[bot] bot.toml: transport=\"matrix\" requires homeserver" - ); + eprintln!("[bot] bot.toml: transport=\"matrix\" requires homeserver"); return None; } if config.username.as_ref().is_none_or(|s| s.is_empty()) { - eprintln!( - "[bot] bot.toml: transport=\"matrix\" requires username" - ); + eprintln!("[bot] bot.toml: transport=\"matrix\" requires username"); return None; } if config.password.as_ref().is_none_or(|s| s.is_empty()) { - eprintln!( - "[bot] bot.toml: transport=\"matrix\" requires password" - ); + eprintln!("[bot] bot.toml: transport=\"matrix\" requires password"); return None; } if config.room_ids.is_empty() { @@ -402,7 +424,10 @@ enabled = true let result = BotConfig::load(tmp.path()); assert!(result.is_some()); let config = result.unwrap(); - assert_eq!(config.homeserver.as_deref(), Some("https://matrix.example.com")); + assert_eq!( + config.homeserver.as_deref(), + Some("https://matrix.example.com") + ); assert_eq!(config.username.as_deref(), Some("@bot:example.com")); assert_eq!( config.effective_room_ids(), @@ -761,18 +786,9 @@ whatsapp_verify_token = "my-verify" .unwrap(); let config = BotConfig::load(tmp.path()).unwrap(); assert_eq!(config.transport, "whatsapp"); - assert_eq!( - config.whatsapp_phone_number_id.as_deref(), - Some("123456") - ); - assert_eq!( - config.whatsapp_access_token.as_deref(), - Some("EAAtoken") - ); - assert_eq!( - config.whatsapp_verify_token.as_deref(), - Some("my-verify") - ); + assert_eq!(config.whatsapp_phone_number_id.as_deref(), Some("123456")); + assert_eq!(config.whatsapp_access_token.as_deref(), Some("EAAtoken")); + assert_eq!(config.whatsapp_verify_token.as_deref(), Some("my-verify")); } #[test] @@ -1106,14 +1122,8 @@ discord_channel_ids = ["123456789012345678"] .unwrap(); let config = BotConfig::load(tmp.path()).unwrap(); assert_eq!(config.transport, "discord"); - assert_eq!( - config.discord_bot_token.as_deref(), - Some("Bot.Token.Here") 
- ); - assert_eq!( - config.discord_channel_ids, - vec!["123456789012345678"] - ); + assert_eq!(config.discord_bot_token.as_deref(), Some("Bot.Token.Here")); + assert_eq!(config.discord_channel_ids, vec!["123456789012345678"]); } #[test] @@ -1176,9 +1186,6 @@ discord_allowed_users = ["111222333", "444555666"] "#, ) .unwrap(); - assert_eq!( - config.discord_allowed_users, - vec!["111222333", "444555666"] - ); + assert_eq!(config.discord_allowed_users, vec!["111222333", "444555666"]); } } diff --git a/server/src/chat/transport/matrix/delete.rs b/server/src/chat/transport/matrix/delete.rs index 01ec9696..8aa81286 100644 --- a/server/src/chat/transport/matrix/delete.rs +++ b/server/src/chat/transport/matrix/delete.rs @@ -65,9 +65,7 @@ pub async fn handle_delete( match crate::chat::lookup::find_story_by_number(project_root, story_number) { Some(found) => found, None => { - return format!( - "No story, bug, or spike with number **{story_number}** found." - ); + return format!("No story, bug, or spike with number **{story_number}** found."); } }; diff --git a/server/src/chat/transport/matrix/htop.rs b/server/src/chat/transport/matrix/htop.rs index 3d3f131f..1382bb3b 100644 --- a/server/src/chat/transport/matrix/htop.rs +++ b/server/src/chat/transport/matrix/htop.rs @@ -13,9 +13,9 @@ use std::time::Duration; use tokio::sync::{Mutex as TokioMutex, watch}; use crate::agents::{AgentPool, AgentStatus}; +use crate::chat::ChatTransport; use crate::chat::util::strip_bot_mention; use crate::slog; -use crate::chat::ChatTransport; use super::bot::markdown_to_html; @@ -51,7 +51,11 @@ pub type HtopSessions = Arc>>; /// - `htop stop` → `Stop` /// - `htop 10m` → `Start { duration_secs: 600 }` /// - `htop 120` → `Start { duration_secs: 120 }` (bare seconds) -pub fn extract_htop_command(message: &str, bot_name: &str, bot_user_id: &str) -> Option { +pub fn extract_htop_command( + message: &str, + bot_name: &str, + bot_user_id: &str, +) -> Option { let stripped = strip_bot_mention(message, 
bot_name, bot_user_id); let trimmed = stripped.trim(); @@ -261,7 +265,10 @@ pub async fn run_htop_loop( let text = build_htop_message(&agents, tick as u32, duration_secs); let html = markdown_to_html(&text); - if let Err(e) = transport.edit_message(&room_id, &initial_message_id, &text, &html).await { + if let Err(e) = transport + .edit_message(&room_id, &initial_message_id, &text, &html) + .await + { slog!("[htop] Failed to update message: {e}"); return; } @@ -274,7 +281,10 @@ pub async fn run_htop_loop( async fn send_stopped_message(transport: &dyn ChatTransport, room_id: &str, message_id: &str) { let text = "**htop** — monitoring stopped."; let html = markdown_to_html(text); - if let Err(e) = transport.edit_message(room_id, message_id, text, &html).await { + if let Err(e) = transport + .edit_message(room_id, message_id, text, &html) + .await + { slog!("[htop] Failed to send stop message: {e}"); } } @@ -302,7 +312,10 @@ pub async fn handle_htop_start( // Send the initial message. let initial_text = build_htop_message(&agents, 0, duration_secs); let initial_html = markdown_to_html(&initial_text); - let message_id = match transport.send_message(room_id, &initial_text, &initial_html).await { + let message_id = match transport + .send_message(room_id, &initial_text, &initial_html) + .await + { Ok(id) => id, Err(e) => { slog!("[htop] Failed to send initial message: {e}"); diff --git a/server/src/chat/transport/matrix/mod.rs b/server/src/chat/transport/matrix/mod.rs index 7aeece38..786d3775 100644 --- a/server/src/chat/transport/matrix/mod.rs +++ b/server/src/chat/transport/matrix/mod.rs @@ -21,11 +21,11 @@ pub mod commands; pub(crate) mod config; pub mod delete; pub mod htop; +pub mod notifications; pub mod rebuild; pub mod reset; pub mod rmtree; pub mod start; -pub mod notifications; pub mod transport_impl; pub use bot::{ConversationEntry, ConversationRole, RoomConversation}; @@ -92,9 +92,16 @@ pub fn spawn_bot( let watcher_rx = watcher_tx.subscribe(); let 
watcher_rx_auto = watcher_tx.subscribe(); tokio::spawn(async move { - if let Err(e) = - bot::run_bot(config, root, watcher_rx, watcher_rx_auto, perm_rx, agents, shutdown_rx) - .await + if let Err(e) = bot::run_bot( + config, + root, + watcher_rx, + watcher_rx_auto, + perm_rx, + agents, + shutdown_rx, + ) + .await { crate::slog!("[matrix-bot] Fatal error: {e}"); } diff --git a/server/src/chat/transport/matrix/notifications.rs b/server/src/chat/transport/matrix/notifications.rs index 45b39f2f..c166bc04 100644 --- a/server/src/chat/transport/matrix/notifications.rs +++ b/server/src/chat/transport/matrix/notifications.rs @@ -3,11 +3,11 @@ //! Subscribes to [`WatcherEvent`] broadcasts and posts a notification to all //! configured Matrix rooms whenever a work item moves between pipeline stages. +use crate::chat::ChatTransport; use crate::config::ProjectConfig; use crate::io::story_metadata::parse_front_matter; use crate::io::watcher::WatcherEvent; use crate::slog; -use crate::chat::ChatTransport; use std::collections::HashMap; use std::path::{Path, PathBuf}; use std::sync::Arc; @@ -81,9 +81,7 @@ pub fn format_error_notification( let name = story_name.unwrap_or(item_id); let plain = format!("\u{274c} #{number} {name} \u{2014} {reason}"); - let html = format!( - "\u{274c} #{number} {name} \u{2014} {reason}" - ); + let html = format!("\u{274c} #{number} {name} \u{2014} {reason}"); (plain, html) } @@ -113,9 +111,8 @@ pub fn format_blocked_notification( let name = story_name.unwrap_or(item_id); let plain = format!("\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}"); - let html = format!( - "\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}" - ); + let html = + format!("\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}"); (plain, html) } @@ -126,7 +123,6 @@ const RATE_LIMIT_DEBOUNCE: Duration = Duration::from_secs(60); /// into a single notification (only the final stage is announced). 
const STAGE_TRANSITION_DEBOUNCE: Duration = Duration::from_millis(200); - /// Format a rate limit warning notification message. /// /// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`. @@ -138,9 +134,8 @@ pub fn format_rate_limit_notification( let number = extract_story_number(item_id).unwrap_or(item_id); let name = story_name.unwrap_or(item_id); - let plain = format!( - "\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit" - ); + let plain = + format!("\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit"); let html = format!( "\u{26a0}\u{fe0f} #{number} {name} \u{2014} \ {agent_name} hit an API rate limit" @@ -223,9 +218,7 @@ pub fn spawn_notification_listener( // and must be skipped — the old inferred_from_stage fallback // produced wrong notifications for stories that skipped stages // (e.g. "QA → Merge" when QA was never entered). - let from_display = from_stage - .as_deref() - .map(stage_display_name); + let from_display = from_stage.as_deref().map(stage_display_name); let Some(from_display) = from_display else { continue; // creation or unknown transition — skip }; @@ -246,33 +239,24 @@ pub fn spawn_notification_listener( e.2 = story_name.clone(); } }) - .or_insert_with(|| { - (from_display.to_string(), stage.clone(), story_name) - }); + .or_insert_with(|| (from_display.to_string(), stage.clone(), story_name)); // Start or extend the debounce window. 
- flush_deadline = - Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE); + flush_deadline = Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE); } Ok(WatcherEvent::MergeFailure { ref story_id, ref reason, }) => { - let story_name = - read_story_name(&project_root, "4_merge", story_id); - let (plain, html) = format_error_notification( - story_id, - story_name.as_deref(), - reason, - ); + let story_name = read_story_name(&project_root, "4_merge", story_id); + let (plain, html) = + format_error_notification(story_id, story_name.as_deref(), reason); slog!("[bot] Sending error notification: {plain}"); for room_id in &get_room_ids() { if let Err(e) = transport.send_message(room_id, &plain, &html).await { - slog!( - "[bot] Failed to send error notification to {room_id}: {e}" - ); + slog!("[bot] Failed to send error notification to {room_id}: {e}"); } } } @@ -303,11 +287,8 @@ pub fn spawn_notification_listener( rate_limit_last_notified.insert(debounce_key, now); let story_name = find_story_name_any_stage(&project_root, story_id); - let (plain, html) = format_rate_limit_notification( - story_id, - story_name.as_deref(), - agent_name, - ); + let (plain, html) = + format_rate_limit_notification(story_id, story_name.as_deref(), agent_name); slog!("[bot] Sending rate-limit notification: {plain}"); @@ -325,19 +306,14 @@ pub fn spawn_notification_listener( ref reason, }) => { let story_name = find_story_name_any_stage(&project_root, story_id); - let (plain, html) = format_blocked_notification( - story_id, - story_name.as_deref(), - reason, - ); + let (plain, html) = + format_blocked_notification(story_id, story_name.as_deref(), reason); slog!("[bot] Sending blocked notification: {plain}"); for room_id in &get_room_ids() { if let Err(e) = transport.send_message(room_id, &plain, &html).await { - slog!( - "[bot] Failed to send blocked notification to {room_id}: {e}" - ); + slog!("[bot] Failed to send blocked notification to {room_id}: {e}"); } } } @@ -362,14 
+338,10 @@ pub fn spawn_notification_listener( } Ok(_) => {} // Ignore other events Err(broadcast::error::RecvError::Lagged(n)) => { - slog!( - "[bot] Notification listener lagged, skipped {n} events" - ); + slog!("[bot] Notification listener lagged, skipped {n} events"); } Err(broadcast::error::RecvError::Closed) => { - slog!( - "[bot] Watcher channel closed, stopping notification listener" - ); + slog!("[bot] Watcher channel closed, stopping notification listener"); // Flush any coalesced transitions that haven't fired yet. for (item_id, (from_display, to_stage_key, story_name)) in pending_transitions.drain() @@ -383,12 +355,8 @@ pub fn spawn_notification_listener( ); slog!("[bot] Sending stage notification: {plain}"); for room_id in &get_room_ids() { - if let Err(e) = - transport.send_message(room_id, &plain, &html).await - { - slog!( - "[bot] Failed to send notification to {room_id}: {e}" - ); + if let Err(e) = transport.send_message(room_id, &plain, &html).await { + slog!("[bot] Failed to send notification to {room_id}: {e}"); } } } @@ -402,8 +370,8 @@ pub fn spawn_notification_listener( #[cfg(test)] mod tests { use super::*; - use async_trait::async_trait; use crate::chat::MessageId; + use async_trait::async_trait; // ── MockTransport ─────────────────────────────────────────────────────── @@ -417,18 +385,38 @@ mod tests { impl MockTransport { fn new() -> (Arc, CallLog) { let calls: CallLog = Arc::new(std::sync::Mutex::new(Vec::new())); - (Arc::new(Self { calls: Arc::clone(&calls) }), calls) + ( + Arc::new(Self { + calls: Arc::clone(&calls), + }), + calls, + ) } } #[async_trait] impl crate::chat::ChatTransport for MockTransport { - async fn send_message(&self, room_id: &str, plain: &str, html: &str) -> Result { - self.calls.lock().unwrap().push((room_id.to_string(), plain.to_string(), html.to_string())); + async fn send_message( + &self, + room_id: &str, + plain: &str, + html: &str, + ) -> Result { + self.calls.lock().unwrap().push(( + room_id.to_string(), + 
plain.to_string(), + html.to_string(), + )); Ok("mock-msg-id".to_string()) } - async fn edit_message(&self, _room_id: &str, _id: &str, _plain: &str, _html: &str) -> Result<(), String> { + async fn edit_message( + &self, + _room_id: &str, + _id: &str, + _plain: &str, + _html: &str, + ) -> Result<(), String> { Ok(()) } @@ -462,10 +450,12 @@ mod tests { tmp.path().to_path_buf(), ); - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "365_story_rate_limit".to_string(), - agent_name: "coder-1".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "365_story_rate_limit".to_string(), + agent_name: "coder-1".to_string(), + }) + .unwrap(); // Give the spawned task time to process the event. tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -475,9 +465,15 @@ mod tests { let (room_id, plain, _html) = &calls[0]; assert_eq!(room_id, "!room123:example.org"); assert!(plain.contains("365"), "plain should contain story number"); - assert!(plain.contains("Rate Limit Test Story"), "plain should contain story name"); + assert!( + plain.contains("Rate Limit Test Story"), + "plain should contain story name" + ); assert!(plain.contains("coder-1"), "plain should contain agent name"); - assert!(plain.contains("rate limit"), "plain should mention rate limit"); + assert!( + plain.contains("rate limit"), + "plain should mention rate limit" + ); } /// AC4: a second RateLimitWarning for the same agent within the debounce @@ -498,16 +494,22 @@ mod tests { // Send the same warning twice in rapid succession. 
for _ in 0..2 { - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_debounce".to_string(), - agent_name: "coder-2".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_debounce".to_string(), + agent_name: "coder-2".to_string(), + }) + .unwrap(); } tokio::time::sleep(std::time::Duration::from_millis(100)).await; let calls = calls.lock().unwrap(); - assert_eq!(calls.len(), 1, "Debounce should suppress the second notification"); + assert_eq!( + calls.len(), + 1, + "Debounce should suppress the second notification" + ); } /// AC4 (corollary): warnings for different agents are NOT debounced against @@ -526,19 +528,27 @@ mod tests { tmp.path().to_path_buf(), ); - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_foo".to_string(), - agent_name: "coder-1".to_string(), - }).unwrap(); - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_foo".to_string(), - agent_name: "coder-2".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_foo".to_string(), + agent_name: "coder-1".to_string(), + }) + .unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_foo".to_string(), + agent_name: "coder-2".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; let calls = calls.lock().unwrap(); - assert_eq!(calls.len(), 2, "Different agents should each trigger a notification"); + assert_eq!( + calls.len(), + 2, + "Different agents should each trigger a notification" + ); } // ── dynamic room IDs (WhatsApp ambient_rooms pattern) ─────────────────── @@ -573,25 +583,40 @@ mod tests { ); // Add a room after the listener is spawned (simulates a user messaging first). 
- rooms.lock().unwrap().insert("phone:+15551234567".to_string()); + rooms + .lock() + .unwrap() + .insert("phone:+15551234567".to_string()); - watcher_tx.send(WatcherEvent::WorkItem { - stage: "3_qa".to_string(), - item_id: "10_story_foo".to_string(), - action: "qa".to_string(), - commit_msg: "huskies: qa 10_story_foo".to_string(), - from_stage: Some("2_current".to_string()), - }).unwrap(); + watcher_tx + .send(WatcherEvent::WorkItem { + stage: "3_qa".to_string(), + item_id: "10_story_foo".to_string(), + action: "qa".to_string(), + commit_msg: "huskies: qa 10_story_foo".to_string(), + from_stage: Some("2_current".to_string()), + }) + .unwrap(); // Wait longer than STAGE_TRANSITION_DEBOUNCE (200ms) so the coalesced // notification flushes. tokio::time::sleep(std::time::Duration::from_millis(350)).await; let calls = calls.lock().unwrap(); - assert_eq!(calls.len(), 1, "Should deliver to the dynamically added room"); + assert_eq!( + calls.len(), + 1, + "Should deliver to the dynamically added room" + ); assert_eq!(calls[0].0, "phone:+15551234567"); - assert!(calls[0].1.contains("10"), "plain should contain story number"); - assert!(calls[0].1.contains("Foo Story"), "plain should contain story name"); + assert!( + calls[0].1.contains("10"), + "plain should contain story number" + ); + assert!( + calls[0].1.contains("Foo Story"), + "plain should contain story name" + ); } /// When no rooms are registered (e.g. 
no WhatsApp users have messaged yet), @@ -603,20 +628,17 @@ mod tests { let (watcher_tx, watcher_rx) = broadcast::channel::(16); let (transport, calls) = MockTransport::new(); - spawn_notification_listener( - transport, - Vec::new, - watcher_rx, - tmp.path().to_path_buf(), - ); + spawn_notification_listener(transport, Vec::new, watcher_rx, tmp.path().to_path_buf()); - watcher_tx.send(WatcherEvent::WorkItem { - stage: "3_qa".to_string(), - item_id: "10_story_foo".to_string(), - action: "qa".to_string(), - commit_msg: "huskies: qa 10_story_foo".to_string(), - from_stage: Some("2_current".to_string()), - }).unwrap(); + watcher_tx + .send(WatcherEvent::WorkItem { + stage: "3_qa".to_string(), + item_id: "10_story_foo".to_string(), + action: "qa".to_string(), + commit_msg: "huskies: qa 10_story_foo".to_string(), + from_stage: Some("2_current".to_string()), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -660,11 +682,7 @@ mod tests { #[test] fn read_story_name_reads_from_front_matter() { let tmp = tempfile::tempdir().unwrap(); - let stage_dir = tmp - .path() - .join(".huskies") - .join("work") - .join("2_current"); + let stage_dir = tmp.path().join(".huskies").join("work").join("2_current"); std::fs::create_dir_all(&stage_dir).unwrap(); std::fs::write( stage_dir.join("42_story_my_feature.md"), @@ -686,11 +704,7 @@ mod tests { #[test] fn read_story_name_returns_none_for_missing_name_field() { let tmp = tempfile::tempdir().unwrap(); - let stage_dir = tmp - .path() - .join(".huskies") - .join("work") - .join("2_current"); + let stage_dir = tmp.path().join(".huskies").join("work").join("2_current"); std::fs::create_dir_all(&stage_dir).unwrap(); std::fs::write( stage_dir.join("42_story_no_name.md"), @@ -706,8 +720,11 @@ mod tests { #[test] fn format_error_notification_with_story_name() { - let (plain, html) = - format_error_notification("262_story_bot_errors", Some("Bot error notifications"), "merge conflict in src/main.rs"); + let (plain, 
html) = format_error_notification( + "262_story_bot_errors", + Some("Bot error notifications"), + "merge conflict in src/main.rs", + ); assert_eq!( plain, "\u{274c} #262 Bot error notifications \u{2014} merge conflict in src/main.rs" @@ -720,12 +737,8 @@ mod tests { #[test] fn format_error_notification_without_story_name_falls_back_to_item_id() { - let (plain, _html) = - format_error_notification("42_bug_fix_thing", None, "tests failed"); - assert_eq!( - plain, - "\u{274c} #42 42_bug_fix_thing \u{2014} tests failed" - ); + let (plain, _html) = format_error_notification("42_bug_fix_thing", None, "tests failed"); + assert_eq!(plain, "\u{274c} #42 42_bug_fix_thing \u{2014} tests failed"); } #[test] @@ -759,8 +772,7 @@ mod tests { #[test] fn format_blocked_notification_falls_back_to_item_id() { - let (plain, _html) = - format_blocked_notification("42_story_thing", None, "empty diff"); + let (plain, _html) = format_blocked_notification("42_story_thing", None, "empty diff"); assert_eq!( plain, "\u{1f6ab} #42 42_story_thing \u{2014} BLOCKED: empty diff" @@ -792,10 +804,12 @@ mod tests { tmp.path().to_path_buf(), ); - watcher_tx.send(WatcherEvent::StoryBlocked { - story_id: "425_story_blocking_test".to_string(), - reason: "Retry limit exceeded (3/3) at coder stage".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::StoryBlocked { + story_id: "425_story_blocking_test".to_string(), + reason: "Retry limit exceeded (3/3) at coder stage".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -804,10 +818,22 @@ mod tests { let (room_id, plain, html) = &calls[0]; assert_eq!(room_id, "!room123:example.org"); assert!(plain.contains("425"), "plain should contain story number"); - assert!(plain.contains("Blocking Test Story"), "plain should contain story name"); - assert!(plain.contains("BLOCKED"), "plain should contain BLOCKED label"); - assert!(plain.contains("Retry limit exceeded"), "plain should contain the reason"); - 
assert!(html.contains("BLOCKED"), "html should contain BLOCKED label"); + assert!( + plain.contains("Blocking Test Story"), + "plain should contain story name" + ); + assert!( + plain.contains("BLOCKED"), + "plain should contain BLOCKED label" + ); + assert!( + plain.contains("Retry limit exceeded"), + "plain should contain the reason" + ); + assert!( + html.contains("BLOCKED"), + "html should contain BLOCKED label" + ); } /// StoryBlocked with no room registered should not panic. @@ -818,17 +844,14 @@ mod tests { let (watcher_tx, watcher_rx) = broadcast::channel::(16); let (transport, calls) = MockTransport::new(); - spawn_notification_listener( - transport, - Vec::new, - watcher_rx, - tmp.path().to_path_buf(), - ); + spawn_notification_listener(transport, Vec::new, watcher_rx, tmp.path().to_path_buf()); - watcher_tx.send(WatcherEvent::StoryBlocked { - story_id: "42_story_no_rooms".to_string(), - reason: "empty diff".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::StoryBlocked { + story_id: "42_story_no_rooms".to_string(), + reason: "empty diff".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -840,11 +863,8 @@ mod tests { #[test] fn format_rate_limit_notification_includes_agent_and_story() { - let (plain, html) = format_rate_limit_notification( - "365_story_my_feature", - Some("My Feature"), - "coder-2", - ); + let (plain, html) = + format_rate_limit_notification("365_story_my_feature", Some("My Feature"), "coder-2"); assert_eq!( plain, "\u{26a0}\u{fe0f} #365 My Feature \u{2014} coder-2 hit an API rate limit" @@ -857,8 +877,7 @@ mod tests { #[test] fn format_rate_limit_notification_falls_back_to_item_id() { - let (plain, _html) = - format_rate_limit_notification("42_story_thing", None, "coder-1"); + let (plain, _html) = format_rate_limit_notification("42_story_thing", None, "coder-1"); assert_eq!( plain, "\u{26a0}\u{fe0f} #42 42_story_thing \u{2014} coder-1 hit an API rate limit" @@ -869,12 +888,8 
@@ mod tests { #[test] fn format_notification_done_stage_includes_party_emoji() { - let (plain, html) = format_stage_notification( - "353_story_done", - Some("Done Story"), - "Merge", - "Done", - ); + let (plain, html) = + format_stage_notification("353_story_done", Some("Done Story"), "Merge", "Done"); assert_eq!( plain, "\u{1f389} #353 Done Story \u{2014} Merge \u{2192} Done" @@ -887,12 +902,8 @@ mod tests { #[test] fn format_notification_non_done_stage_has_no_emoji() { - let (plain, _html) = format_stage_notification( - "42_story_thing", - Some("Some Story"), - "Backlog", - "Current", - ); + let (plain, _html) = + format_stage_notification("42_story_thing", Some("Some Story"), "Backlog", "Current"); assert!(!plain.contains("\u{1f389}")); } @@ -916,26 +927,14 @@ mod tests { #[test] fn format_notification_without_story_name_falls_back_to_item_id() { - let (plain, _html) = format_stage_notification( - "42_bug_fix_thing", - None, - "Current", - "QA", - ); - assert_eq!( - plain, - "#42 42_bug_fix_thing \u{2014} Current \u{2192} QA" - ); + let (plain, _html) = format_stage_notification("42_bug_fix_thing", None, "Current", "QA"); + assert_eq!(plain, "#42 42_bug_fix_thing \u{2014} Current \u{2192} QA"); } #[test] fn format_notification_non_numeric_id_uses_full_id() { - let (plain, _html) = format_stage_notification( - "abc_story_thing", - Some("Some Story"), - "QA", - "Merge", - ); + let (plain, _html) = + format_stage_notification("abc_story_thing", Some("Some Story"), "QA", "Merge"); assert_eq!( plain, "#abc_story_thing Some Story \u{2014} QA \u{2192} Merge" @@ -967,15 +966,21 @@ mod tests { tmp.path().to_path_buf(), ); - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_suppress".to_string(), - agent_name: "coder-1".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_suppress".to_string(), + agent_name: "coder-1".to_string(), + }) + .unwrap(); 
tokio::time::sleep(std::time::Duration::from_millis(100)).await; let calls = calls.lock().unwrap(); - assert_eq!(calls.len(), 0, "RateLimitWarning should be suppressed when rate_limit_notifications = false"); + assert_eq!( + calls.len(), + 0, + "RateLimitWarning should be suppressed when rate_limit_notifications = false" + ); } /// RateLimitHardBlock is never posted to Matrix — it is logged server-side only. @@ -994,11 +999,13 @@ mod tests { ); let reset_at = chrono::Utc::now() + chrono::Duration::hours(1); - watcher_tx.send(WatcherEvent::RateLimitHardBlock { - story_id: "42_story_hard_block".to_string(), - agent_name: "coder-1".to_string(), - reset_at, - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitHardBlock { + story_id: "42_story_hard_block".to_string(), + agent_name: "coder-1".to_string(), + reset_at, + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -1028,10 +1035,12 @@ mod tests { tmp.path().to_path_buf(), ); - watcher_tx.send(WatcherEvent::StoryBlocked { - story_id: "42_story_blocked".to_string(), - reason: "retry limit exceeded".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::StoryBlocked { + story_id: "42_story_blocked".to_string(), + reason: "retry limit exceeded".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; @@ -1064,10 +1073,12 @@ mod tests { ); // First warning is sent. - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_reload".to_string(), - agent_name: "coder-1".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_reload".to_string(), + agent_name: "coder-1".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; // Disable notifications and trigger hot-reload. 
@@ -1080,14 +1091,20 @@ mod tests { tokio::time::sleep(std::time::Duration::from_millis(100)).await; // Second warning (different agent to bypass debounce) should be suppressed. - watcher_tx.send(WatcherEvent::RateLimitWarning { - story_id: "42_story_reload".to_string(), - agent_name: "coder-2".to_string(), - }).unwrap(); + watcher_tx + .send(WatcherEvent::RateLimitWarning { + story_id: "42_story_reload".to_string(), + agent_name: "coder-2".to_string(), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(100)).await; let calls = calls.lock().unwrap(); - assert_eq!(calls.len(), 1, "Only the first warning should be sent; second should be suppressed after hot-reload"); + assert_eq!( + calls.len(), + 1, + "Only the first warning should be sent; second should be suppressed after hot-reload" + ); } // ── Bug 549: synthetic events with from_stage=None must not notify ────── @@ -1111,19 +1128,22 @@ mod tests { ); // Synthetic reassign event within 4_merge — no actual stage change. - watcher_tx.send(WatcherEvent::WorkItem { - stage: "4_merge".to_string(), - item_id: "549_story_skip_qa".to_string(), - action: "reassign".to_string(), - commit_msg: String::new(), - from_stage: None, - }).unwrap(); + watcher_tx + .send(WatcherEvent::WorkItem { + stage: "4_merge".to_string(), + item_id: "549_story_skip_qa".to_string(), + action: "reassign".to_string(), + commit_msg: String::new(), + from_stage: None, + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(350)).await; let calls = calls.lock().unwrap(); assert_eq!( - calls.len(), 0, + calls.len(), + 0, "Synthetic events with from_stage=None must not generate notifications" ); } @@ -1152,13 +1172,15 @@ mod tests { ); // Story skips QA: from_stage is 2_current, not 3_qa. 
- watcher_tx.send(WatcherEvent::WorkItem { - stage: "4_merge".to_string(), - item_id: "549_story_skip_qa".to_string(), - action: "merge".to_string(), - commit_msg: "huskies: merge 549_story_skip_qa".to_string(), - from_stage: Some("2_current".to_string()), - }).unwrap(); + watcher_tx + .send(WatcherEvent::WorkItem { + stage: "4_merge".to_string(), + item_id: "549_story_skip_qa".to_string(), + action: "merge".to_string(), + commit_msg: "huskies: merge 549_story_skip_qa".to_string(), + from_stage: Some("2_current".to_string()), + }) + .unwrap(); tokio::time::sleep(std::time::Duration::from_millis(350)).await; diff --git a/server/src/chat/transport/matrix/rebuild.rs b/server/src/chat/transport/matrix/rebuild.rs index 08826e5b..24e767bc 100644 --- a/server/src/chat/transport/matrix/rebuild.rs +++ b/server/src/chat/transport/matrix/rebuild.rs @@ -73,11 +73,8 @@ mod tests { #[test] fn extract_with_full_user_id() { - let cmd = extract_rebuild_command( - "@timmy:home.local rebuild", - "Timmy", - "@timmy:home.local", - ); + let cmd = + extract_rebuild_command("@timmy:home.local rebuild", "Timmy", "@timmy:home.local"); assert_eq!(cmd, Some(RebuildCommand)); } diff --git a/server/src/chat/transport/matrix/reset.rs b/server/src/chat/transport/matrix/reset.rs index f8dbc617..76e9cd82 100644 --- a/server/src/chat/transport/matrix/reset.rs +++ b/server/src/chat/transport/matrix/reset.rs @@ -50,7 +50,9 @@ pub async fn handle_reset( ) -> String { { let mut guard = history.lock().await; - let conv = guard.entry(room_id.clone()).or_insert_with(RoomConversation::default); + let conv = guard + .entry(room_id.clone()) + .or_insert_with(RoomConversation::default); conv.session_id = None; conv.entries.clear(); crate::chat::transport::matrix::bot::save_history(project_root, &guard); @@ -75,8 +77,7 @@ mod tests { #[test] fn extract_with_full_user_id() { - let cmd = - extract_reset_command("@timmy:home.local reset", "Timmy", "@timmy:home.local"); + let cmd = 
extract_reset_command("@timmy:home.local reset", "Timmy", "@timmy:home.local"); assert_eq!(cmd, Some(ResetCommand)); } @@ -115,21 +116,27 @@ mod tests { let room_id: OwnedRoomId = "!test:example.com".parse().unwrap(); let history: ConversationHistory = Arc::new(TokioMutex::new({ let mut m = HashMap::new(); - m.insert(room_id.clone(), RoomConversation { - session_id: Some("old-session-id".to_string()), - entries: vec![ConversationEntry { - role: ConversationRole::User, - sender: "@alice:example.com".to_string(), - content: "previous message".to_string(), - }], - }); + m.insert( + room_id.clone(), + RoomConversation { + session_id: Some("old-session-id".to_string()), + entries: vec![ConversationEntry { + role: ConversationRole::User, + sender: "@alice:example.com".to_string(), + content: "previous message".to_string(), + }], + }, + ); m })); let tmp = tempfile::tempdir().unwrap(); let response = handle_reset("Timmy", &room_id, &history, tmp.path()).await; - assert!(response.contains("reset"), "response should mention reset: {response}"); + assert!( + response.contains("reset"), + "response should mention reset: {response}" + ); let guard = history.lock().await; let conv = guard.get(&room_id).unwrap(); diff --git a/server/src/chat/transport/matrix/rmtree.rs b/server/src/chat/transport/matrix/rmtree.rs index 9d870eb3..d5143eba 100644 --- a/server/src/chat/transport/matrix/rmtree.rs +++ b/server/src/chat/transport/matrix/rmtree.rs @@ -107,9 +107,7 @@ pub async fn handle_rmtree( return format!("Failed to remove worktree for story {story_number}: {e}"); } - crate::slog!( - "[matrix-bot] rmtree command: removed worktree for {story_id} (bot={bot_name})" - ); + crate::slog!("[matrix-bot] rmtree command: removed worktree for {story_id} (bot={bot_name})"); let mut response = format!("Removed worktree for **{story_id}**."); if !stopped_agents.is_empty() { @@ -131,11 +129,8 @@ mod tests { #[test] fn extract_with_full_user_id() { - let cmd = extract_rmtree_command( - 
"@timmy:home.local rmtree 42", - "Timmy", - "@timmy:home.local", - ); + let cmd = + extract_rmtree_command("@timmy:home.local rmtree 42", "Timmy", "@timmy:home.local"); assert_eq!( cmd, Some(RmtreeCommand::Rmtree { diff --git a/server/src/chat/transport/matrix/start.rs b/server/src/chat/transport/matrix/start.rs index 9e5e9993..5e841d60 100644 --- a/server/src/chat/transport/matrix/start.rs +++ b/server/src/chat/transport/matrix/start.rs @@ -84,9 +84,7 @@ pub async fn handle_start( match crate::chat::lookup::find_story_by_number(project_root, story_number) { Some(found) => found, None => { - return format!( - "No story, bug, or spike with number **{story_number}** found." - ); + return format!("No story, bug, or spike with number **{story_number}** found."); } }; @@ -115,7 +113,13 @@ pub async fn handle_start( ); match agents - .start_agent(project_root, &story_id, resolved_agent.as_deref(), None, None) + .start_agent( + project_root, + &story_id, + resolved_agent.as_deref(), + None, + None, + ) .await { Ok(info) => { @@ -231,7 +235,14 @@ mod tests { async fn handle_start_returns_not_found_for_unknown_number() { let tmp = tempfile::tempdir().unwrap(); let project_root = tmp.path(); - for stage in &["1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived"] { + for stage in &[ + "1_backlog", + "2_current", + "3_qa", + "4_merge", + "5_done", + "6_archived", + ] { std::fs::create_dir_all(project_root.join(".huskies").join("work").join(stage)) .unwrap(); } @@ -276,7 +287,8 @@ mod tests { "response must not say 'Failed' when coders are busy: {response}" ); assert!( - response.to_lowercase().contains("queue") || response.to_lowercase().contains("available"), + response.to_lowercase().contains("queue") + || response.to_lowercase().contains("available"), "response must mention queued/available state: {response}" ); } diff --git a/server/src/chat/transport/slack/commands.rs b/server/src/chat/transport/slack/commands.rs index 9e8a63f9..0f96a121 100644 --- 
a/server/src/chat/transport/slack/commands.rs +++ b/server/src/chat/transport/slack/commands.rs @@ -1,21 +1,21 @@ //! Slack incoming message dispatch and slash command handling. +use serde::{Deserialize, Serialize}; use std::collections::HashMap; use std::collections::HashSet; use std::path::PathBuf; use std::sync::{Arc, Mutex}; use tokio::sync::{Mutex as TokioMutex, oneshot}; -use serde::{Deserialize, Serialize}; +use super::format::markdown_to_slack; +use super::history::{SlackConversationHistory, save_slack_history}; +use super::meta::SlackTransport; use crate::agents::AgentPool; +use crate::chat::ChatTransport; use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; use crate::chat::util::is_permission_approval; -use crate::slog; -use crate::chat::ChatTransport; use crate::http::context::{PermissionDecision, PermissionForward}; -use super::meta::SlackTransport; -use super::history::{SlackConversationHistory, save_slack_history}; -use super::format::markdown_to_slack; +use crate::slog; // ── Slash command types ───────────────────────────────────────────────── @@ -81,8 +81,7 @@ pub struct SlackWebhookContext { /// Permission requests from the MCP `prompt_permission` tool arrive here. pub perm_rx: Arc>>, /// Pending permission replies keyed by channel ID. - pub pending_perm_replies: - Arc>>>, + pub pending_perm_replies: Arc>>>, /// Seconds before an unanswered permission prompt is auto-denied. pub permission_timeout_secs: u64, } @@ -154,8 +153,11 @@ pub(super) async fn handle_incoming_message( } HtopCommand::Start { duration_secs } => { // On Slack, htop uses native message editing for live updates. 
- let snapshot = - crate::chat::transport::matrix::htop::build_htop_message(&ctx.agents, 0, duration_secs); + let snapshot = crate::chat::transport::matrix::htop::build_htop_message( + &ctx.agents, + 0, + duration_secs, + ); let snapshot = markdown_to_slack(&snapshot); let msg_id = match ctx.transport.send_message(channel, &snapshot, "").await { Ok(id) => id, @@ -179,9 +181,7 @@ pub(super) async fn handle_incoming_message( duration_secs, ); let updated = markdown_to_slack(&updated); - if let Err(e) = - transport.edit_message(&ch, &msg_id, &updated, "").await - { + if let Err(e) = transport.edit_message(&ch, &msg_id, &updated, "").await { slog!("[slack] Failed to edit htop message: {e}"); break; } @@ -245,7 +245,9 @@ pub(super) async fn handle_incoming_message( ) { let response = match rmtree_cmd { crate::chat::transport::matrix::rmtree::RmtreeCommand::Rmtree { story_number } => { - slog!("[slack] Handling rmtree command from {user} in {channel}: story {story_number}"); + slog!( + "[slack] Handling rmtree command from {user} in {channel}: story {story_number}" + ); crate::chat::transport::matrix::rmtree::handle_rmtree( &ctx.bot_name, &story_number, @@ -273,7 +275,9 @@ pub(super) async fn handle_incoming_message( slog!("[slack] Handling reset command from {user} in {channel}"); { let mut guard = ctx.history.lock().await; - let conv = guard.entry(channel.to_string()).or_insert_with(RoomConversation::default); + let conv = guard + .entry(channel.to_string()) + .or_insert_with(RoomConversation::default); conv.session_id = None; conv.entries.clear(); save_slack_history(&ctx.project_root, &guard); @@ -295,7 +299,9 @@ pub(super) async fn handle_incoming_message( story_number, agent_hint, } => { - slog!("[slack] Handling start command from {user} in {channel}: story {story_number}"); + slog!( + "[slack] Handling start command from {user} in {channel}: story {story_number}" + ); crate::chat::transport::matrix::start::handle_start( &ctx.bot_name, &story_number, @@ -320,8 
+326,13 @@ pub(super) async fn handle_incoming_message( &ctx.bot_user_id, ) { let response = match assign_cmd { - crate::chat::transport::matrix::assign::AssignCommand::Assign { story_number, model } => { - slog!("[slack] Handling assign command from {user} in {channel}: story {story_number} model {model}"); + crate::chat::transport::matrix::assign::AssignCommand::Assign { + story_number, + model, + } => { + slog!( + "[slack] Handling assign command from {user} in {channel}: story {story_number} model {model}" + ); crate::chat::transport::matrix::assign::handle_assign( &ctx.bot_name, &story_number, @@ -352,17 +363,15 @@ async fn handle_llm_message( user: &str, user_message: &str, ) { - use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult}; use crate::chat::util::drain_complete_paragraphs; + use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult}; use std::sync::atomic::{AtomicBool, Ordering}; use tokio::sync::watch; // Look up existing session ID for this channel. 
let resume_session_id: Option<String> = { let guard = ctx.history.lock().await; guard - .get(channel) - .and_then(|conv| conv.session_id.clone()) + guard.get(channel).and_then(|conv| conv.session_id.clone()) }; let bot_name = &ctx.bot_name; @@ -383,7 +392,9 @@ async fn handle_llm_message( let post_task = tokio::spawn(async move { while let Some(chunk) = msg_rx.recv().await { let formatted = markdown_to_slack(&chunk); - let _ = post_transport.send_message(&post_channel, &formatted, "").await; + let _ = post_transport + .send_message(&post_channel, &formatted, "") + .await; } }); @@ -472,9 +483,7 @@ async fn handle_llm_message( let last_text = messages .iter() .rev() - .find(|m| { - m.role == crate::llm::types::Role::Assistant && !m.content.is_empty() - }) + .find(|m| m.role == crate::llm::types::Role::Assistant && !m.content.is_empty()) .map(|m| m.content.clone()) .unwrap_or_default(); if !last_text.is_empty() { @@ -559,7 +568,10 @@ mod tests { #[test] fn slash_command_maps_status() { - assert_eq!(slash_command_to_bot_keyword("/huskies-status"), Some("status")); + assert_eq!( + slash_command_to_bot_keyword("/huskies-status"), + Some("status") + ); } #[test] @@ -600,9 +612,8 @@ mod tests { response_type: "ephemeral", text: "hello".to_string(), }; - let json: serde_json::Value = serde_json::from_str( - &serde_json::to_string(&resp).unwrap() - ).unwrap(); + let json: serde_json::Value = + serde_json::from_str(&serde_json::to_string(&resp).unwrap()).unwrap(); assert_eq!(json["response_type"], "ephemeral"); assert_eq!(json["text"], "hello"); } @@ -642,7 +653,10 @@ mod tests { }; let result = try_handle_command(&dispatch, &synthetic); - assert!(result.is_some(), "status slash command should produce output via registry"); + assert!( + result.is_some(), + "status slash command should produce output via registry" + ); assert!(result.unwrap().contains("Pipeline Status")); } @@ -671,7 +685,10 @@ mod tests { let result = try_handle_command(&dispatch, &synthetic); 
assert!(result.is_some(), "show slash command should produce output"); let output = result.unwrap(); - assert!(output.contains("999"), "show output should reference the story number: {output}"); + assert!( + output.contains("999"), + "show output should reference the story number: {output}" + ); } // ── rebuild command extraction ───────────────────────────────────── @@ -704,7 +721,10 @@ mod tests { "Huskies", "slack-bot", ); - assert!(result.is_none(), "'status' should not be recognised as rebuild"); + assert!( + result.is_none(), + "'status' should not be recognised as rebuild" + ); } // ── reset command extraction ─────────────────────────────────────── @@ -731,21 +751,26 @@ mod tests { #[tokio::test] async fn reset_command_clears_slack_session() { + use crate::chat::transport::matrix::{ + ConversationEntry, ConversationRole, RoomConversation, + }; use std::sync::Arc; use tokio::sync::Mutex as TokioMutex; - use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; let channel = "C01ABCDEF"; let history: SlackConversationHistory = Arc::new(TokioMutex::new({ let mut m = HashMap::new(); - m.insert(channel.to_string(), RoomConversation { - session_id: Some("old-session".to_string()), - entries: vec![ConversationEntry { - role: ConversationRole::User, - sender: "U01GHIJKL".to_string(), - content: "previous message".to_string(), - }], - }); + m.insert( + channel.to_string(), + RoomConversation { + session_id: Some("old-session".to_string()), + entries: vec![ConversationEntry { + role: ConversationRole::User, + sender: "U01GHIJKL".to_string(), + content: "previous message".to_string(), + }], + }, + ); m })); @@ -755,7 +780,9 @@ mod tests { { let mut guard = history.lock().await; - let conv = guard.entry(channel.to_string()).or_insert_with(RoomConversation::default); + let conv = guard + .entry(channel.to_string()) + .or_insert_with(RoomConversation::default); conv.session_id = None; conv.entries.clear(); save_slack_history(tmp.path(), 
&guard); @@ -862,6 +889,9 @@ mod tests { "Timmy", "@timmy:home.local", ); - assert!(result.is_none(), "'status' should not be recognised as assign on Slack"); + assert!( + result.is_none(), + "'status' should not be recognised as assign on Slack" + ); } } diff --git a/server/src/chat/transport/slack/format.rs b/server/src/chat/transport/slack/format.rs index 3ef8ff84..798b41cd 100644 --- a/server/src/chat/transport/slack/format.rs +++ b/server/src/chat/transport/slack/format.rs @@ -20,10 +20,8 @@ pub fn markdown_to_slack(text: &str) -> String { LazyLock::new(|| Regex::new(r"(?m)^#{1,6}\s+(.+)$").unwrap()); static RE_BOLD_ITALIC: LazyLock = LazyLock::new(|| Regex::new(r"\*\*\*(.+?)\*\*\*").unwrap()); - static RE_BOLD: LazyLock = - LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap()); - static RE_STRIKETHROUGH: LazyLock = - LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap()); + static RE_BOLD: LazyLock = LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap()); + static RE_STRIKETHROUGH: LazyLock = LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap()); static RE_LINK: LazyLock = LazyLock::new(|| Regex::new(r"\[([^\]]+)\]\(([^)]+)\)").unwrap()); @@ -105,8 +103,14 @@ mod tests { fn slack_fenced_code_block_preserved() { let input = "```rust\nlet x = 1;\n```"; let output = markdown_to_slack(input); - assert!(output.contains("let x = 1;"), "code block content must be preserved"); - assert!(output.contains("```"), "fenced code delimiters must be preserved"); + assert!( + output.contains("let x = 1;"), + "code block content must be preserved" + ); + assert!( + output.contains("```"), + "fenced code delimiters must be preserved" + ); } #[test] diff --git a/server/src/chat/transport/slack/meta.rs b/server/src/chat/transport/slack/meta.rs index f6d73110..6cd88849 100644 --- a/server/src/chat/transport/slack/meta.rs +++ b/server/src/chat/transport/slack/meta.rs @@ -104,9 +104,8 @@ impl ChatTransport for SlackTransport { return Err(format!("Slack API returned {status}: 
{resp_text}")); } - let parsed: SlackApiResponse = serde_json::from_str(&resp_text).map_err(|e| { - format!("Failed to parse Slack API response: {e} — body: {resp_text}") - })?; + let parsed: SlackApiResponse = serde_json::from_str(&resp_text) + .map_err(|e| format!("Failed to parse Slack API response: {e} — body: {resp_text}"))?; if !parsed.ok { return Err(format!( @@ -190,10 +189,7 @@ mod tests { .create_async() .await; - let transport = SlackTransport::with_api_base( - "xoxb-test-token".to_string(), - server.url(), - ); + let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url()); let result = transport .send_message("C01ABCDEF", "hello", "
hello
") @@ -212,14 +208,9 @@ mod tests { .create_async() .await; - let transport = SlackTransport::with_api_base( - "xoxb-test-token".to_string(), - server.url(), - ); + let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url()); - let result = transport - .send_message("C_INVALID", "hello", "") - .await; + let result = transport.send_message("C_INVALID", "hello", "").await; assert!(result.is_err()); assert!( result.unwrap_err().contains("channel_not_found"), @@ -237,10 +228,7 @@ mod tests { .create_async() .await; - let transport = SlackTransport::with_api_base( - "xoxb-test-token".to_string(), - server.url(), - ); + let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url()); let result = transport .edit_message("C01ABCDEF", "1234567890.123456", "updated", "") @@ -258,10 +246,7 @@ mod tests { .create_async() .await; - let transport = SlackTransport::with_api_base( - "xoxb-test-token".to_string(), - server.url(), - ); + let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url()); let result = transport .edit_message("C01ABCDEF", "bad-ts", "updated", "") @@ -287,10 +272,7 @@ mod tests { .create_async() .await; - let transport = SlackTransport::with_api_base( - "xoxb-test-token".to_string(), - server.url(), - ); + let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url()); let result = transport.send_message("C01ABCDEF", "hello", "").await; assert!(result.is_err()); diff --git a/server/src/chat/transport/slack/mod.rs b/server/src/chat/transport/slack/mod.rs index ed2196fd..2aee55fe 100644 --- a/server/src/chat/transport/slack/mod.rs +++ b/server/src/chat/transport/slack/mod.rs @@ -12,15 +12,15 @@ pub mod history; pub mod meta; pub mod verify; +pub use commands::SlackWebhookContext; +pub use format::markdown_to_slack; pub use history::load_slack_history; pub use meta::SlackTransport; -pub use format::markdown_to_slack; -pub use 
commands::SlackWebhookContext; use serde::Deserialize; -use poem::{Request, Response, handler, http::StatusCode}; use crate::slog; +use poem::{Request, Response, handler, http::StatusCode}; // ── Slack Events API types ────────────────────────────────────────────── @@ -71,10 +71,7 @@ pub async fn webhook_receive( .header("X-Slack-Request-Timestamp") .unwrap_or("") .to_string(); - let signature = req - .header("X-Slack-Signature") - .unwrap_or("") - .to_string(); + let signature = req.header("X-Slack-Signature").unwrap_or("").to_string(); let bytes = match body.into_bytes().await { Ok(b) => b, @@ -98,9 +95,7 @@ pub async fn webhook_receive( Ok(e) => e, Err(e) => { slog!("[slack] Failed to parse webhook payload: {e}"); - return Response::builder() - .status(StatusCode::OK) - .body("ok"); + return Response::builder().status(StatusCode::OK).body("ok"); } }; @@ -124,8 +119,7 @@ pub async fn webhook_receive( && event.r#type.as_deref() == Some("message") && event.subtype.is_none() && event.bot_id.is_none() - && let (Some(channel), Some(user), Some(text)) = - (event.channel, event.user, event.text) + && let (Some(channel), Some(user), Some(text)) = (event.channel, event.user, event.text) && ctx.channel_ids.contains(&channel) { let ctx = Arc::clone(*ctx); @@ -135,9 +129,7 @@ pub async fn webhook_receive( }); } - Response::builder() - .status(StatusCode::OK) - .body("ok") + Response::builder().status(StatusCode::OK).body("ok") } /// POST /webhook/slack/command — receive incoming Slack slash commands. 
@@ -155,10 +147,7 @@ pub async fn slash_command_receive( .header("X-Slack-Request-Timestamp") .unwrap_or("") .to_string(); - let signature = req - .header("X-Slack-Signature") - .unwrap_or("") - .to_string(); + let signature = req.header("X-Slack-Signature").unwrap_or("").to_string(); let bytes = match body.into_bytes().await { Ok(b) => b, @@ -178,16 +167,15 @@ pub async fn slash_command_receive( .body("Invalid signature"); } - let payload: commands::SlackSlashCommandPayload = - match serde_urlencoded::from_bytes(&bytes) { - Ok(p) => p, - Err(e) => { - slog!("[slack] Failed to parse slash command payload: {e}"); - return Response::builder() - .status(StatusCode::BAD_REQUEST) - .body("Bad request"); - } - }; + let payload: commands::SlackSlashCommandPayload = match serde_urlencoded::from_bytes(&bytes) { + Ok(p) => p, + Err(e) => { + slog!("[slack] Failed to parse slash command payload: {e}"); + return Response::builder() + .status(StatusCode::BAD_REQUEST) + .body("Bad request"); + } + }; slog!( "[slack] Slash command from {}: {} {}", diff --git a/server/src/chat/transport/slack/verify.rs b/server/src/chat/transport/slack/verify.rs index 436ba8a4..9855c4dc 100644 --- a/server/src/chat/transport/slack/verify.rs +++ b/server/src/chat/transport/slack/verify.rs @@ -215,7 +215,12 @@ mod tests { let body = b"test body"; let sig = compute_test_signature("correct-secret", timestamp, body); - assert!(!verify_slack_signature("wrong-secret", timestamp, body, &sig)); + assert!(!verify_slack_signature( + "wrong-secret", + timestamp, + body, + &sig + )); } /// Helper to compute a test signature using our sha256 + HMAC implementation. diff --git a/server/src/chat/transport/whatsapp/commands.rs b/server/src/chat/transport/whatsapp/commands.rs index ed5b6f93..327f707f 100644 --- a/server/src/chat/transport/whatsapp/commands.rs +++ b/server/src/chat/transport/whatsapp/commands.rs @@ -1,22 +1,24 @@ //! WhatsApp command handling — processes incoming WhatsApp messages as bot commands. 
use std::sync::Arc; -use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; -use crate::chat::util::is_permission_approval; -use crate::http::context::{PermissionDecision}; -use crate::slog; use super::WhatsAppWebhookContext; use super::format::{chunk_for_whatsapp, markdown_to_whatsapp}; use super::history::save_whatsapp_history; +use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; +use crate::chat::util::is_permission_approval; +use crate::http::context::PermissionDecision; +use crate::slog; /// Dispatch an incoming WhatsApp message to bot commands. -pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender: &str, message: &str) { +pub(super) async fn handle_incoming_message( + ctx: &WhatsAppWebhookContext, + sender: &str, + message: &str, +) { use crate::chat::commands::{CommandDispatch, try_handle_command}; // Allowlist check: when configured, silently ignore unauthorized senders. - if !ctx.allowed_phones.is_empty() - && !ctx.allowed_phones.iter().any(|p| p == sender) - { + if !ctx.allowed_phones.is_empty() && !ctx.allowed_phones.iter().any(|p| p == sender) { slog!("[whatsapp] Ignoring message from unauthorized sender: {sender}"); return; } @@ -173,7 +175,9 @@ pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender slog!("[whatsapp] Handling reset command from {sender}"); { let mut guard = ctx.history.lock().await; - let conv = guard.entry(sender.to_string()).or_insert_with(RoomConversation::default); + let conv = guard + .entry(sender.to_string()) + .or_insert_with(RoomConversation::default); conv.session_id = None; conv.entries.clear(); save_whatsapp_history(&ctx.project_root, &guard); @@ -219,8 +223,13 @@ pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender &ctx.bot_user_id, ) { let response = match assign_cmd { - crate::chat::transport::matrix::assign::AssignCommand::Assign { story_number, model } => { - 
slog!("[whatsapp] Handling assign command from {sender}: story {story_number} model {model}"); + crate::chat::transport::matrix::assign::AssignCommand::Assign { + story_number, + model, + } => { + slog!( + "[whatsapp] Handling assign command from {sender}: story {story_number} model {model}" + ); crate::chat::transport::matrix::assign::handle_assign( &ctx.bot_name, &story_number, @@ -385,9 +394,7 @@ async fn handle_llm_message(ctx: &WhatsAppWebhookContext, sender: &str, user_mes Err(e) => { slog!("[whatsapp] LLM error: {e}"); let err_msg = if let Some(url) = crate::llm::oauth::extract_login_url_from_error(&e) { - format!( - "Authentication required. Log in to Claude here: {url}" - ) + format!("Authentication required. Log in to Claude here: {url}") } else { format!("Error processing your request: {e}") }; @@ -434,20 +441,18 @@ async fn handle_llm_message(ctx: &WhatsAppWebhookContext, sender: &str, user_mes #[cfg(test)] mod tests { - use crate::agents::AgentPool; - use crate::io::watcher::WatcherEvent; - use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; - use super::super::history::{MessagingWindowTracker, WhatsAppConversationHistory}; use super::super::WhatsAppWebhookContext; + use super::super::history::{MessagingWindowTracker, WhatsAppConversationHistory}; use super::*; + use crate::agents::AgentPool; + use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation}; + use crate::io::watcher::WatcherEvent; use std::collections::HashMap; use std::sync::Arc; use tokio::sync::Mutex as TokioMutex; /// Build a minimal WhatsAppWebhookContext for allowlist tests. - fn make_ctx_with_allowlist( - allowed_phones: Vec, - ) -> Arc { + fn make_ctx_with_allowlist(allowed_phones: Vec) -> Arc { struct NullTransport; #[async_trait::async_trait] @@ -505,9 +510,15 @@ mod tests { let err = "OAuth session expired or credentials missing. 
Please log in: http://localhost:3001/oauth/authorize"; let url = crate::llm::oauth::extract_login_url_from_error(err); assert!(url.is_some(), "should extract URL from OAuth error"); - let msg = format!("Authentication required. Log in to Claude here: {}", url.unwrap()); + let msg = format!( + "Authentication required. Log in to Claude here: {}", + url.unwrap() + ); assert!(msg.contains("http://localhost:3001/oauth/authorize")); - assert!(!msg.contains('['), "WhatsApp message should not use Markdown link syntax"); + assert!( + !msg.contains('['), + "WhatsApp message should not use Markdown link syntax" + ); } #[test] @@ -594,7 +605,10 @@ mod tests { "Timmy", "@timmy:home.local", ); - assert!(result.is_none(), "'status' should not be recognised as rebuild"); + assert!( + result.is_none(), + "'status' should not be recognised as rebuild" + ); } // ── reset command extraction ─────────────────────────────────────── @@ -624,14 +638,17 @@ mod tests { let sender = "+15555550100"; let history: WhatsAppConversationHistory = Arc::new(TokioMutex::new({ let mut m = HashMap::new(); - m.insert(sender.to_string(), RoomConversation { - session_id: Some("old-session".to_string()), - entries: vec![ConversationEntry { - role: ConversationRole::User, - sender: sender.to_string(), - content: "previous message".to_string(), - }], - }); + m.insert( + sender.to_string(), + RoomConversation { + session_id: Some("old-session".to_string()), + entries: vec![ConversationEntry { + role: ConversationRole::User, + sender: sender.to_string(), + content: "previous message".to_string(), + }], + }, + ); m })); @@ -641,7 +658,9 @@ mod tests { { let mut guard = history.lock().await; - let conv = guard.entry(sender.to_string()).or_insert_with(RoomConversation::default); + let conv = guard + .entry(sender.to_string()) + .or_insert_with(RoomConversation::default); conv.session_id = None; conv.entries.clear(); save_whatsapp_history(tmp.path(), &guard); @@ -748,7 +767,10 @@ mod tests { "Timmy", 
"@timmy:home.local", ); - assert!(result.is_none(), "'status' should not be recognised as rmtree"); + assert!( + result.is_none(), + "'status' should not be recognised as rmtree" + ); } // ── assign command extraction ────────────────────────────────────── @@ -805,6 +827,9 @@ mod tests { "Timmy", "@timmy:home.local", ); - assert!(result.is_none(), "'status' should not be recognised as assign"); + assert!( + result.is_none(), + "'status' should not be recognised as assign" + ); } } diff --git a/server/src/chat/transport/whatsapp/format.rs b/server/src/chat/transport/whatsapp/format.rs index e1c93180..d62a83f5 100644 --- a/server/src/chat/transport/whatsapp/format.rs +++ b/server/src/chat/transport/whatsapp/format.rs @@ -66,14 +66,11 @@ pub fn markdown_to_whatsapp(text: &str) -> String { LazyLock::new(|| Regex::new(r"(?m)^#{1,6}\s+(.+)$").unwrap()); static RE_BOLD_ITALIC: LazyLock = LazyLock::new(|| Regex::new(r"\*\*\*(.+?)\*\*\*").unwrap()); - static RE_BOLD: LazyLock = - LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap()); - static RE_STRIKETHROUGH: LazyLock = - LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap()); + static RE_BOLD: LazyLock = LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap()); + static RE_STRIKETHROUGH: LazyLock = LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap()); static RE_LINK: LazyLock = LazyLock::new(|| Regex::new(r"\[([^\]]+)\]\(([^)]+)\)").unwrap()); - static RE_HR: LazyLock = - LazyLock::new(|| Regex::new(r"(?m)^---+$").unwrap()); + static RE_HR: LazyLock = LazyLock::new(|| Regex::new(r"(?m)^---+$").unwrap()); // 1. Protect fenced code blocks by replacing them with placeholders. 
let mut code_blocks: Vec = Vec::new(); diff --git a/server/src/chat/transport/whatsapp/meta.rs b/server/src/chat/transport/whatsapp/meta.rs index 18023147..abcc00d3 100644 --- a/server/src/chat/transport/whatsapp/meta.rs +++ b/server/src/chat/transport/whatsapp/meta.rs @@ -2,9 +2,9 @@ use async_trait::async_trait; use serde::{Deserialize, Serialize}; +use super::history::MessagingWindowTracker; use crate::chat::{ChatTransport, MessageId}; use crate::slog; -use super::history::MessagingWindowTracker; // ── API base URLs (overridable for tests) ──────────────────────────────── @@ -55,7 +55,11 @@ impl WhatsAppTransport { } #[cfg(test)] - pub(crate) fn with_api_base(phone_number_id: String, access_token: String, api_base: String) -> Self { + pub(crate) fn with_api_base( + phone_number_id: String, + access_token: String, + api_base: String, + ) -> Self { Self { phone_number_id, access_token, diff --git a/server/src/chat/transport/whatsapp/mod.rs b/server/src/chat/transport/whatsapp/mod.rs index 4722e3d4..2e3dd62c 100644 --- a/server/src/chat/transport/whatsapp/mod.rs +++ b/server/src/chat/transport/whatsapp/mod.rs @@ -13,9 +13,9 @@ pub mod history; pub mod meta; pub mod twilio; -pub use history::{load_whatsapp_history, MessagingWindowTracker, WhatsAppConversationHistory}; +pub use history::{MessagingWindowTracker, WhatsAppConversationHistory, load_whatsapp_history}; pub use meta::WhatsAppTransport; -pub use twilio::{extract_twilio_text_messages, TwilioWhatsAppTransport}; +pub use twilio::{TwilioWhatsAppTransport, extract_twilio_text_messages}; use serde::Deserialize; use std::collections::{HashMap, HashSet}; @@ -132,8 +132,7 @@ pub struct WhatsAppWebhookContext { /// Permission requests from the MCP `prompt_permission` tool arrive here. pub perm_rx: Arc>>, /// Pending permission replies keyed by sender phone number. - pub pending_perm_replies: - Arc>>>, + pub pending_perm_replies: Arc>>>, /// Seconds before an unanswered permission prompt is auto-denied. 
    pub permission_timeout_secs: u64,
}
diff --git a/server/src/chat/util.rs b/server/src/chat/util.rs
index 3e563114..e34c913a 100644
--- a/server/src/chat/util.rs
+++ b/server/src/chat/util.rs
@@ -202,9 +202,7 @@ pub fn normalize_line_breaks(text: &str) -> String {
            return true;
        }
        // Horizontal rules: lines made entirely of -, *, or _ (at least 3 chars).
-        let all_hr_chars = trimmed
-            .chars()
-            .all(|c| matches!(c, '-' | '*' | '_' | ' '));
+        let all_hr_chars = trimmed.chars().all(|c| matches!(c, '-' | '*' | '_' | ' '));
        let hr_char_count = trimmed.chars().filter(|c| !c.is_whitespace()).count();
        all_hr_chars && hr_char_count >= 3
    }
@@ -389,11 +387,7 @@ mod tests {
    #[test]
    fn strip_mention_emoji_display_name_no_separator() {
        // Display name with emoji, no separator
-        let rest = strip_bot_mention(
-            "timmy ⚡️ ambient on",
-            "timmy ⚡️",
-            "@timmy:homeserver.local",
-        );
+        let rest = strip_bot_mention("timmy ⚡️ ambient on", "timmy ⚡️", "@timmy:homeserver.local");
        assert_eq!(rest, "ambient on");
    }

@@ -638,9 +632,18 @@ mod tests {
        let output = normalize_line_breaks(input);
        // Prose sentences before and after the code block get doubled.
        // The code block itself is preserved.
-        assert!(output.contains("First sentence.\n\nSecond sentence."), "prose before code: {output}");
-        assert!(output.contains("```rust\nlet x = 1;\nlet y = 2;\n```"), "code block preserved: {output}");
-        assert!(output.contains("Third sentence.\n\nFourth sentence."), "prose after code: {output}");
+        assert!(
+            output.contains("First sentence.\n\nSecond sentence."),
+            "prose before code: {output}"
+        );
+        assert!(
+            output.contains("```rust\nlet x = 1;\nlet y = 2;\n```"),
+            "code block preserved: {output}"
+        );
+        assert!(
+            output.contains("Third sentence.\n\nFourth sentence."),
+            "prose after code: {output}"
+        );
    }

    #[test]
diff --git a/server/src/config.rs b/server/src/config.rs
index ed295740..14c6aafe 100644
--- a/server/src/config.rs
+++ b/server/src/config.rs
@@ -101,7 +101,6 @@ fn default_rate_limit_notifications() -> bool {
    true
}

-
#[derive(Debug, Clone, Deserialize)]
#[allow(dead_code)]
pub struct ComponentConfig {
@@ -288,27 +287,28 @@ impl ProjectConfig {
        // Parsed successfully but no agents — could be legacy or no agent section.
        // Try legacy format.
        if let Ok(legacy) = toml::from_str::(content)
-            && let Some(agent) = legacy.agent {
-                slog!(
-                    "[config] Warning: [agent] table is deprecated. \
+            && let Some(agent) = legacy.agent
+        {
+            slog!(
+                "[config] Warning: [agent] table is deprecated. \
                 Use [[agent]] array format instead."
-                );
-                let config = ProjectConfig {
-                    component: legacy.component,
-                    agent: vec![agent],
-                    watcher: legacy.watcher,
-                    default_qa: legacy.default_qa,
-                    default_coder_model: legacy.default_coder_model,
-                    max_coders: legacy.max_coders,
-                    max_retries: legacy.max_retries,
-                    base_branch: legacy.base_branch,
-                    rate_limit_notifications: legacy.rate_limit_notifications,
-                    timezone: legacy.timezone,
-                    rendezvous: None,
-                };
-                validate_agents(&config.agent)?;
-                return Ok(config);
-            }
+            );
+            let config = ProjectConfig {
+                component: legacy.component,
+                agent: vec![agent],
+                watcher: legacy.watcher,
+                default_qa: legacy.default_qa,
+                default_coder_model: legacy.default_coder_model,
+                max_coders: legacy.max_coders,
+                max_retries: legacy.max_retries,
+                base_branch: legacy.base_branch,
+                rate_limit_notifications: legacy.rate_limit_notifications,
+                timezone: legacy.timezone,
+                rendezvous: None,
+            };
+            validate_agents(&config.agent)?;
+            return Ok(config);
+        }

        // No agent section at all
        Ok(config)
    }
@@ -411,10 +411,11 @@ impl ProjectConfig {
            args.push(model.clone());
        }
        if let Some(ref tools) = agent.allowed_tools
-            && !tools.is_empty() {
-                args.push("--allowedTools".to_string());
-                args.push(tools.join(","));
-            }
+            && !tools.is_empty()
+        {
+            args.push("--allowedTools".to_string());
+            args.push(tools.join(","));
+        }
        if let Some(turns) = agent.max_turns {
            args.push("--max-turns".to_string());
            args.push(turns.to_string());
@@ -443,19 +444,21 @@ fn validate_agents(agents: &[AgentConfig]) -> Result<(), String> {
            return Err(format!("Duplicate agent name: '{}'", agent.name));
        }
        if let Some(budget) = agent.max_budget_usd
-            && budget <= 0.0 {
-                return Err(format!(
-                    "Agent '{}': max_budget_usd must be positive, got {budget}",
-                    agent.name
-                ));
-            }
+            && budget <= 0.0
+        {
+            return Err(format!(
+                "Agent '{}': max_budget_usd must be positive, got {budget}",
+                agent.name
+            ));
+        }
        if let Some(turns) = agent.max_turns
-            && turns == 0 {
-                return Err(format!(
-                    "Agent '{}': max_turns must be positive, got 0",
-                    agent.name
-                ));
-            }
+            && turns == 0
+        {
+            return Err(format!(
+                "Agent '{}': max_turns must be positive, got 0",
+                agent.name
+            ));
+        }
        if let Some(ref runtime) = agent.runtime {
            match runtime.as_str() {
                "claude-code" | "gemini" => {}
@@ -957,10 +960,7 @@ name = "coder"
runtime = "claude-code"
"#;
        let config = ProjectConfig::parse(toml_str).unwrap();
-        assert_eq!(
-            config.agent[0].runtime,
-            Some("claude-code".to_string())
-        );
+        assert_eq!(config.agent[0].runtime, Some("claude-code".to_string()));
    }

    #[test]
@@ -1067,7 +1067,10 @@ prompt = "git difftool {{base_branch}}...HEAD"
name = "coder"
"#;
        let config = ProjectConfig::parse(toml_str).unwrap();
-        assert!(config.rate_limit_notifications, "rate_limit_notifications should default to true");
+        assert!(
+            config.rate_limit_notifications,
+            "rate_limit_notifications should default to true"
+        );
    }

    #[test]
diff --git a/server/src/crdt_state.rs b/server/src/crdt_state.rs
index 8f29b71a..57ebf40c 100644
--- a/server/src/crdt_state.rs
+++ b/server/src/crdt_state.rs
@@ -20,8 +20,8 @@ use bft_json_crdt::op::ROOT_ID;
 use fastcrypto::ed25519::Ed25519KeyPair;
 use fastcrypto::traits::ToFromBytes;
 use serde_json::json;
-use sqlx::sqlite::SqliteConnectOptions;
 use sqlx::SqlitePool;
+use sqlx::sqlite::SqliteConnectOptions;
 use std::path::Path;
 use tokio::sync::{broadcast, mpsc};

@@ -218,10 +218,9 @@ pub async fn init(db_path: &Path) -> Result<(), sqlx::Error> {
    let mut crdt = BaseCrdt::<PipelineDoc>::new(&keypair);

    // Replay persisted ops to reconstruct state.
-    let rows: Vec<(String,)> =
-        sqlx::query_as("SELECT op_json FROM crdt_ops ORDER BY rowid ASC")
-            .fetch_all(&pool)
-            .await?;
+    let rows: Vec<(String,)> = sqlx::query_as("SELECT op_json FROM crdt_ops ORDER BY rowid ASC")
+        .fetch_all(&pool)
+        .await?;

    let mut all_ops_vec = Vec::with_capacity(rows.len());
    for (op_json,) in &rows {
@@ -316,7 +315,13 @@ pub fn init_for_test() {
            let keypair = make_keypair();
            let crdt = BaseCrdt::<PipelineDoc>::new(&keypair);
            let (persist_tx, _rx) = mpsc::unbounded_channel();
-            let state = CrdtState { crdt, keypair, index: HashMap::new(), node_index: HashMap::new(), persist_tx };
+            let state = CrdtState {
+                crdt,
+                keypair,
+                index: HashMap::new(),
+                node_index: HashMap::new(),
+                persist_tx,
+            };
            let _ = lock.set(Mutex::new(state));
        }
    });
@@ -458,9 +463,7 @@ pub fn write_item(
        });
    }
    if let Some(b) = blocked {
-        apply_and_persist(&mut state, |s| {
-            s.crdt.doc.items[idx].blocked.set(b)
-        });
+        apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].blocked.set(b));
    }
    if let Some(d) = depends_on {
        apply_and_persist(&mut state, |s| {
@@ -473,14 +476,10 @@ pub fn write_item(
        });
    }
    if let Some(ca) = claimed_at {
-        apply_and_persist(&mut state, |s| {
-            s.crdt.doc.items[idx].claimed_at.set(ca)
-        });
+        apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(ca));
    }
    if let Some(ma) = merged_at {
-        apply_and_persist(&mut state, |s| {
-            s.crdt.doc.items[idx].merged_at.set(ma)
-        });
+        apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].merged_at.set(ma));
    }

    // Broadcast a CrdtEvent if the stage actually changed.
@@ -514,9 +513,7 @@ pub fn write_item(
        })
        .into();

-    apply_and_persist(&mut state, |s| {
-        s.crdt.doc.items.insert(ROOT_ID, item_json)
-    });
+    apply_and_persist(&mut state, |s| s.crdt.doc.items.insert(ROOT_ID, item_json));

    // Rebuild index after insertion (indices may shift).
    state.index = rebuild_index(&state.crdt);
@@ -561,11 +558,9 @@ pub fn apply_remote_op(op: SignedOp) -> bool {
    let pre_stages: HashMap<String, String> = state
        .index
        .iter()
-        .filter_map(|(sid, &idx)| {
-            match state.crdt.doc.items[idx].stage.view() {
-                JsonValue::String(s) => Some((sid.clone(), s)),
-                _ => None,
-            }
+        .filter_map(|(sid, &idx)| match state.crdt.doc.items[idx].stage.view() {
+            JsonValue::String(s) => Some((sid.clone(), s)),
+            _ => None,
        })
        .collect();

@@ -668,9 +663,7 @@ pub fn write_claim(story_id: &str) -> bool {
    apply_and_persist(&mut state, |s| {
        s.crdt.doc.items[idx].claimed_by.set(node_id.clone())
    });
-    apply_and_persist(&mut state, |s| {
-        s.crdt.doc.items[idx].claimed_at.set(now)
-    });
+    apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(now));

    true
}
@@ -690,9 +683,7 @@ pub fn release_claim(story_id: &str) {
    apply_and_persist(&mut state, |s| {
        s.crdt.doc.items[idx].claimed_by.set(String::new())
    });
-    apply_and_persist(&mut state, |s| {
-        s.crdt.doc.items[idx].claimed_at.set(0.0)
-    });
+    apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(0.0));
}

/// Check if this node currently holds the claim on a pipeline item.
@@ -725,9 +716,7 @@ pub fn write_node_presence(node_id: &str, address: &str, last_seen: f64, alive:
    apply_and_persist(&mut state, |s| {
        s.crdt.doc.nodes[idx].last_seen.set(last_seen)
    });
-    apply_and_persist(&mut state, |s| {
-        s.crdt.doc.nodes[idx].alive.set(alive)
-    });
+    apply_and_persist(&mut state, |s| s.crdt.doc.nodes[idx].alive.set(alive));
    apply_and_persist(&mut state, |s| {
        s.crdt.doc.nodes[idx].address.set(address.to_string())
    });
@@ -741,9 +730,7 @@ pub fn write_node_presence(node_id: &str, address: &str, last_seen: f64, alive:
        })
        .into();

-    apply_and_persist(&mut state, |s| {
-        s.crdt.doc.nodes.insert(ROOT_ID, node_json)
-    });
+    apply_and_persist(&mut state, |s| s.crdt.doc.nodes.insert(ROOT_ID, node_json));

    // Rebuild node index after insertion.
    state.node_index = rebuild_node_index(&state.crdt);
@@ -1019,8 +1006,7 @@ pub fn read_item(story_id: &str) -> Option {
/// or an `Err` if the CRDT layer isn't initialised or the story_id is
/// unknown to the in-memory state.
pub fn evict_item(story_id: &str) -> Result<(), String> {
-    let state_mutex = get_crdt()
-        .ok_or_else(|| "CRDT layer not initialised".to_string())?;
+    let state_mutex = get_crdt().ok_or_else(|| "CRDT layer not initialised".to_string())?;
    let mut state = state_mutex
        .lock()
        .map_err(|e| format!("CRDT lock poisoned: {e}"))?;
@@ -1033,12 +1019,10 @@ pub fn evict_item(story_id: &str) -> Result<(), String> {

    // Resolve the item's OpId before the closure (the closure will mutably
    // borrow `state`, so we can't access `state.crdt.doc.items` from inside).
-    let item_id = state
-        .crdt
-        .doc
-        .items
-        .id_at(idx)
-        .ok_or_else(|| format!("Item index {idx} for '{story_id}' did not resolve to an OpId"))?;
+    let item_id =
+        state.crdt.doc.items.id_at(idx).ok_or_else(|| {
+            format!("Item index {idx} for '{story_id}' did not resolve to an OpId")
+        })?;

    // Write the delete op via the existing apply_and_persist machinery.
    // This signs the op, applies it to the in-memory CRDT (marking the item
@@ -1084,9 +1068,7 @@ fn extract_item_view(item: &PipelineItemCrdt) -> Option {
        _ => None,
    };
    let depends_on = match item.depends_on.view() {
-        JsonValue::String(s) if !s.is_empty() => {
-            serde_json::from_str::<Vec<u32>>(&s).ok()
-        }
+        JsonValue::String(s) if !s.is_empty() => serde_json::from_str::<Vec<u32>>(&s).ok(),
        _ => None,
    };

@@ -1142,9 +1124,9 @@ pub fn dep_is_done_crdt(dep_number: u32) -> bool {
pub fn dep_is_archived_crdt(dep_number: u32) -> bool {
    let prefix = format!("{dep_number}_");
    if let Some(items) = read_all_items() {
-        items.iter().any(|item| {
-            item.story_id.starts_with(&prefix) && item.stage == "6_archived"
-        })
+        items
+            .iter()
+            .any(|item| item.story_id.starts_with(&prefix) && item.stage == "6_archived")
    } else {
        false
    }
@@ -1226,8 +1208,14 @@ mod tests {
        assert_eq!(view.len(), 1);

        let item = &crdt.doc.items[0];
-        assert_eq!(item.story_id.view(), JsonValue::String("10_story_test".to_string()));
-        assert_eq!(item.stage.view(), JsonValue::String("2_current".to_string()));
+        assert_eq!(
+            item.story_id.view(),
+            JsonValue::String("10_story_test".to_string())
+        );
+        assert_eq!(
+            item.stage.view(),
+            JsonValue::String("2_current".to_string())
+        );
    }

    #[test]
@@ -1252,7 +1240,10 @@ mod tests {
        crdt.apply(insert_op);

        // Update stage
-        let stage_op = crdt.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
+        let stage_op = crdt.doc.items[0]
+            .stage
+            .set("2_current".to_string())
+            .sign(&kp);
        crdt.apply(stage_op);

        assert_eq!(
@@ -1283,10 +1274,16 @@ mod tests {
        let op1 = crdt1.doc.items.insert(ROOT_ID, item_json).sign(&kp);
        crdt1.apply(op1.clone());

-        let op2 = crdt1.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
+        let op2 = crdt1.doc.items[0]
+            .stage
+            .set("2_current".to_string())
+            .sign(&kp);
        crdt1.apply(op2.clone());

-        let op3 = crdt1.doc.items[0].name.set("Updated Name".to_string()).sign(&kp);
+        let op3 = crdt1.doc.items[0]
+            .name
+            .set("Updated Name".to_string())
+            .sign(&kp);
        crdt1.apply(op3.clone());

        // Replay ops on a fresh CRDT.
@@ -1568,7 +1565,11 @@ mod tests {
            "claimed_at": 0.0,
        })
        .into();
-        let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
+        let op = crdt
+            .doc
+            .items
+            .insert(bft_json_crdt::op::ROOT_ID, item)
+            .sign(&kp);
        // This uses the global state which may not be initialised in tests.
        let _ = apply_remote_op(op);
    }
@@ -1591,7 +1592,11 @@ mod tests {
            "claimed_at": 0.0,
        })
        .into();
-        let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
+        let op = crdt
+            .doc
+            .items
+            .insert(bft_json_crdt::op::ROOT_ID, item)
+            .sign(&kp);

        let json1 = serde_json::to_string(&op).unwrap();
        let roundtripped: SignedOp = serde_json::from_str(&json1).unwrap();
@@ -1620,7 +1625,11 @@ mod tests {
            "claimed_at": 0.0,
        })
        .into();
-        let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
+        let op = crdt
+            .doc
+            .items
+            .insert(bft_json_crdt::op::ROOT_ID, item)
+            .sign(&kp);

        tx.send(op.clone()).unwrap();
        let received = rx.try_recv().unwrap();
@@ -1693,7 +1702,10 @@ mod tests {
        // Now update the stage. The stage LwwRegisterCrdt for this item starts
        // at our_seq=0, so this field op gets seq=1. Crucially: seq=1 < seq=6.
        let idx = rebuild_index(&crdt)["511_story_target"];
-        let stage_op = crdt.doc.items[idx].stage.set("2_current".to_string()).sign(&kp);
+        let stage_op = crdt.doc.items[idx]
+            .stage
+            .set("2_current".to_string())
+            .sign(&kp);
        crdt.apply(stage_op.clone()); // stage_op.inner.seq == 1
@@ -1808,8 +1820,11 @@ mod tests {
        apply_and_persist(&mut state, |s| s.crdt.doc.items.insert(ROOT_ID, item_json));

-        let error_entries = crate::log_buffer::global()
-            .get_recent_entries(1000, None, Some(&crate::log_buffer::LogLevel::Error));
+        let error_entries = crate::log_buffer::global().get_recent_entries(
+            1000,
+            None,
+            Some(&crate::log_buffer::LogLevel::Error),
+        );

        assert!(
            error_entries.len() > before_errors,
diff --git a/server/src/crdt_sync.rs b/server/src/crdt_sync.rs
index 5e7e3914..00bd120d 100644
--- a/server/src/crdt_sync.rs
+++ b/server/src/crdt_sync.rs
@@ -408,7 +408,9 @@ mod tests {

        // Serialise op1 into a SyncMessage::Op.
        let op1_json = serde_json::to_string(&op1).unwrap();
-        let wire_msg = SyncMessage::Op { op: op1_json.clone() };
+        let wire_msg = SyncMessage::Op {
+            op: op1_json.clone(),
+        };
        let wire_json = serde_json::to_string(&wire_msg).unwrap();

        // ── Node B: receive the op through protocol ──
@@ -517,10 +519,7 @@ mod tests {
            .sign(&kp);
        crdt_a.apply(op2.clone());

-        let op3 = crdt_a.doc.items[0]
-            .stage
-            .set("3_qa".to_string())
-            .sign(&kp);
+        let op3 = crdt_a.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
        crdt_a.apply(op3.clone());

        // Serialise all ops as a bulk message (simulates partition heal).
@@ -623,7 +622,10 @@ name = "test"

        // Simulate a clean reconnect.
        consecutive_failures = 0;
-        assert_eq!(consecutive_failures, 0, "counter must reset to 0 on success");
+        assert_eq!(
+            consecutive_failures, 0,
+            "counter must reset to 0 on success"
+        );

        // Next error is attempt 1 — well below the ERROR threshold.
        consecutive_failures += 1;
@@ -685,7 +687,10 @@ name = "test"
        assert_eq!(crdt.doc.items.view().len(), 1);

        // Stage update also deduplicated correctly.
-        let stage_op = crdt.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
+        let stage_op = crdt.doc.items[0]
+            .stage
+            .set("2_current".to_string())
+            .sign(&kp);
        assert_eq!(crdt.apply(stage_op.clone()), OpState::Ok);
        assert_eq!(
            crdt.doc.items[0].stage.view(),
@@ -806,10 +811,7 @@ name = "test"
            .set("2_current".to_string())
            .sign(&kp);
        crdt_a.apply(op2.clone());
-        let op3 = crdt_a.doc.items[0]
-            .stage
-            .set("3_qa".to_string())
-            .sign(&kp);
+        let op3 = crdt_a.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
        crdt_a.apply(op3.clone());

        // Receiver applies all ops in the correct order.
@@ -830,7 +832,7 @@ name = "test"
    /// pending op is evicted (queue never grows beyond the cap).
    #[test]
    fn causal_queue_overflow_drops_oldest() {
-        use bft_json_crdt::json_crdt::{BaseCrdt, OpState, CAUSAL_QUEUE_MAX};
+        use bft_json_crdt::json_crdt::{BaseCrdt, CAUSAL_QUEUE_MAX, OpState};
        use bft_json_crdt::keypair::make_keypair;
        use bft_json_crdt::op::ROOT_ID;
        use serde_json::json;
@@ -854,11 +856,7 @@ name = "test"
            "claimed_at": 0.0,
        })
        .into();
-        let phantom_op = source
-            .doc
-            .items
-            .insert(ROOT_ID, phantom_item)
-            .sign(&kp);
+        let phantom_op = source.doc.items.insert(ROOT_ID, phantom_item).sign(&kp);

        // Receiver never sees phantom_op, so any op declaring it as a dep will
        // sit in the causal queue forever (until evicted by overflow).
@@ -871,9 +869,7 @@ name = "test"
        for i in 0..CAUSAL_QUEUE_MAX + 5 {
            let stage_name = format!("stage_{i}");
            // Generate from source so seq numbers are valid.
-            let op = source
-                .doc
-                .items[0]
+            let op = source.doc.items[0]
                .stage
                .set(stage_name)
                .sign_with_dependencies(&kp, vec![&phantom_op]);
@@ -1006,8 +1002,13 @@ name = "test"
            .iter()
            .filter_map(|item| {
                if let JV::Object(m) = CrdtNode::view(item) {
-                    m.get("story_id")
-                        .and_then(|s| if let JV::String(s) = s { Some(s.clone()) } else { None })
+                    m.get("story_id").and_then(|s| {
+                        if let JV::String(s) = s {
+                            Some(s.clone())
+                        } else {
+                            None
+                        }
+                    })
                } else {
                    None
                }
@@ -1194,15 +1195,9 @@ name = "test"
            .set("2_current".to_string())
            .sign(&kp);
        crdt.apply(op2.clone());
-        let op3 = crdt.doc.items[0]
-            .stage
-            .set("3_qa".to_string())
-            .sign(&kp);
+        let op3 = crdt.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
        crdt.apply(op3.clone());
-        let op4 = crdt.doc.items[0]
-            .stage
-            .set("4_merge".to_string())
-            .sign(&kp);
+        let op4 = crdt.doc.items[0].stage.set("4_merge".to_string()).sign(&kp);
        crdt.apply(op4.clone());

        // Send more ops than the channel capacity without consuming.
@@ -1245,8 +1240,8 @@ name = "test"
    use serde_json::json;
    use std::sync::{Arc, Mutex};
    use tokio::net::TcpListener;
-    use tokio_tungstenite::{accept_async, connect_async};
    use tokio_tungstenite::tungstenite::Message as TMsg;
+    use tokio_tungstenite::{accept_async, connect_async};

    use crate::crdt_state::PipelineDoc;

@@ -1271,7 +1266,9 @@ name = "test"

        // Serialise A's full state as a bulk message.
        let op1_json = serde_json::to_string(&op1).unwrap();
-        let bulk_msg = SyncMessage::Bulk { ops: vec![op1_json] };
+        let bulk_msg = SyncMessage::Bulk {
+            ops: vec![op1_json],
+        };
        let bulk_wire = serde_json::to_string(&bulk_msg).unwrap();

        // ── Start Node A's WebSocket server on a random port ───────────────
@@ -1349,11 +1346,17 @@ name = "test"
        // ── Assert convergence ─────────────────────────────────────────────
        // Node B received Node A's item.
-        assert_eq!(crdt_b.doc.items.view().len(), 2,
-            "Node B must see both items after sync");
-        let has_a_item = crdt_b.doc.items.view().iter().any(|item| {
-            item.story_id.view() == JV::String("508_e2e_convergence".to_string())
-        });
+        assert_eq!(
+            crdt_b.doc.items.view().len(),
+            2,
+            "Node B must see both items after sync"
+        );
+        let has_a_item = crdt_b
+            .doc
+            .items
+            .view()
+            .iter()
+            .any(|item| item.story_id.view() == JV::String("508_e2e_convergence".to_string()));
        assert!(has_a_item, "Node B must have Node A's item");

        // Node A received Node B's op via the WebSocket.
@@ -1378,8 +1381,8 @@ name = "test"
    use futures::{SinkExt, StreamExt};
    use serde_json::json;
    use tokio::net::TcpListener;
-    use tokio_tungstenite::{accept_async, connect_async};
    use tokio_tungstenite::tungstenite::Message as TMsg;
+    use tokio_tungstenite::{accept_async, connect_async};

    use crate::crdt_state::PipelineDoc;

@@ -1482,10 +1485,7 @@ name = "test"
        }

        // B sends its bulk state to A.
-        sink_b
-            .send(TMsg::Text(b_bulk_wire.into()))
-            .await
-            .unwrap();
+        sink_b.send(TMsg::Text(b_bulk_wire.into())).await.unwrap();

        tokio::time::sleep(std::time::Duration::from_millis(50)).await;

@@ -1504,26 +1504,41 @@ name = "test"
        // ── Assert convergence ─────────────────────────────────────────────
        // Both nodes must have 2 items.
-        assert_eq!(crdt_a.doc.items.view().len(), 2,
-            "A must have 2 items after healing");
-        assert_eq!(crdt_b.doc.items.view().len(), 2,
-            "B must have 2 items after healing");
+        assert_eq!(
+            crdt_a.doc.items.view().len(),
+            2,
+            "A must have 2 items after healing"
+        );
+        assert_eq!(
+            crdt_b.doc.items.view().len(),
+            2,
+            "B must have 2 items after healing"
+        );

        // A must see B's story.
-        let b_story_on_a = crdt_a.doc.items.view().iter().any(|item| {
-            item.story_id.view() == JV::String("508_heal_b".to_string())
-        });
+        let b_story_on_a = crdt_a
+            .doc
+            .items
+            .view()
+            .iter()
+            .any(|item| item.story_id.view() == JV::String("508_heal_b".to_string()));
        assert!(b_story_on_a, "A must have B's story after healing");

        // B must see A's stage advance.
-        let a_story_on_b = crdt_b.doc.items.view().iter().any(|item| {
-            item.story_id.view() == JV::String("508_heal_a".to_string())
-        });
+        let a_story_on_b = crdt_b
+            .doc
+            .items
+            .view()
+            .iter()
+            .any(|item| item.story_id.view() == JV::String("508_heal_a".to_string()));
        assert!(a_story_on_b, "B must have A's story after healing");

        // CRDT views must be byte-identical (convergence).
        let view_a = serde_json::to_string(&CrdtNode::view(&crdt_a.doc.items)).unwrap();
        let view_b = serde_json::to_string(&CrdtNode::view(&crdt_b.doc.items)).unwrap();
-        assert_eq!(view_a, view_b, "Both nodes must converge to identical state");
+        assert_eq!(
+            view_a, view_b,
+            "Both nodes must converge to identical state"
+        );
    }
}
diff --git a/server/src/crdt_wire.rs b/server/src/crdt_wire.rs
index 377ae65b..065a33d4 100644
--- a/server/src/crdt_wire.rs
+++ b/server/src/crdt_wire.rs
@@ -121,7 +121,11 @@ mod tests {
    // ── helpers ──────────────────────────────────────────────────────────────

    /// Build a fresh CRDT and return its keypair along with a signed insert op.
-    fn make_insert_op() -> (BaseCrdt, bft_json_crdt::keypair::Ed25519KeyPair, SignedOp) {
+    fn make_insert_op() -> (
+        BaseCrdt,
+        bft_json_crdt::keypair::Ed25519KeyPair,
+        SignedOp,
+    ) {
        let kp = make_keypair();
        let mut crdt = BaseCrdt::<PipelineDoc>::new(&kp);
        let item: JsonValue = json!({
@@ -172,11 +176,7 @@ mod tests {
    fn roundtrip_delete_op() {
        let (mut crdt, kp, insert_op) = make_insert_op();
        // Delete the inserted item.
-        let delete_op = crdt
-            .doc
-            .items
-            .delete(insert_op.inner.id)
-            .sign(&kp);
+        let delete_op = crdt.doc.items.delete(insert_op.inner.id).sign(&kp);
        crdt.apply(delete_op.clone());

        let bytes = encode(&delete_op);
diff --git a/server/src/db/mod.rs b/server/src/db/mod.rs
index 16ec4fbc..5a17708c 100644
--- a/server/src/db/mod.rs
+++ b/server/src/db/mod.rs
@@ -16,8 +16,8 @@
/// no filesystem scan is needed after migration.
use crate::io::story_metadata::parse_front_matter;
use crate::slog;
-use sqlx::sqlite::SqliteConnectOptions;
use sqlx::SqlitePool;
+use sqlx::sqlite::SqliteConnectOptions;
use std::collections::HashMap;
use std::path::Path;
use std::sync::{Mutex, OnceLock};
@@ -86,14 +86,18 @@ pub fn read_content(story_id: &str) -> Option {
///
/// Updates the in-memory store immediately.
pub fn write_content(story_id: &str, content: &str) {
-    if let Some(store) = get_content_store() && let Ok(mut map) = store.lock() {
+    if let Some(store) = get_content_store()
+        && let Ok(mut map) = store.lock()
+    {
        map.insert(story_id.to_string(), content.to_string());
    }
}

/// Remove a story's content from the in-memory store.
pub fn delete_content(story_id: &str) {
-    if let Some(store) = get_content_store() && let Ok(mut map) = store.lock() {
+    if let Some(store) = get_content_store()
+        && let Ok(mut map) = store.lock()
+    {
        map.remove(story_id);
    }
}
@@ -103,7 +107,9 @@ pub fn delete_content(story_id: &str) {
/// Safe to call multiple times — the `OnceLock` is set at most once.
pub fn ensure_content_store() {
    #[cfg(not(test))]
-    { let _ = CONTENT_STORE.set(Mutex::new(HashMap::new())); }
+    {
+        let _ = CONTENT_STORE.set(Mutex::new(HashMap::new()));
+    }

    #[cfg(test)]
    {
@@ -222,11 +228,7 @@ pub async fn init(db_path: &Path) -> Result<(), sqlx::Error> {
///
/// This is the primary write path for the DB-backed pipeline. It updates
/// the CRDT, the in-memory content store, and the SQLite shadow table.
-pub fn write_item_with_content(
-    story_id: &str,
-    stage: &str,
-    content: &str,
-) {
+pub fn write_item_with_content(story_id: &str, stage: &str, content: &str) {
    let (name, agent, retry_count, blocked, depends_on) = match parse_front_matter(content) {
        Ok(meta) => (
            meta.name,
@@ -389,7 +391,9 @@ pub fn next_item_number() -> u32 {
            .chars()
            .take_while(|c| c.is_ascii_digit())
            .collect();
-        if let Ok(n) = num_str.parse::<u32>() && n > max_num {
+        if let Ok(n) = num_str.parse::<u32>()
+            && n > max_num
+        {
            max_num = n;
        }
    }
@@ -397,7 +401,9 @@ pub fn next_item_number() -> u32 {
    // Also scan the content store (might have items not yet in CRDT).
    for id in all_content_ids() {
        let num_str: String = id.chars().take_while(|c| c.is_ascii_digit()).collect();
-        if let Ok(n) = num_str.parse::<u32>() && n > max_num {
+        if let Ok(n) = num_str.parse::<u32>()
+            && n > max_num
+        {
            max_num = n;
        }
    }

    max_num + 1
}
@@ -405,7 +411,6 @@
-
#[cfg(test)]
mod tests {
    use super::*;
@@ -427,10 +432,7 @@ mod tests {
            .filename(&db_path)
            .create_if_missing(true);
        let pool = SqlitePool::connect_with(options).await.unwrap();
-        sqlx::migrate!("./migrations")
-            .run(&pool)
-            .await
-            .unwrap();
+        sqlx::migrate!("./migrations").run(&pool).await.unwrap();

        // Write a story file in a temp stage dir.
        let stage_dir = tmp.path().join("2_current");
@@ -472,13 +474,12 @@ mod tests {
            .unwrap();

        // Query back and verify.
-        let row: (String, Option<String>, String) = sqlx::query_as(
-            "SELECT id, name, stage FROM pipeline_items WHERE id = ?1",
-        )
-        .bind("10_story_shadow_test")
-        .fetch_one(&pool)
-        .await
-        .unwrap();
+        let row: (String, Option<String>, String) =
+            sqlx::query_as("SELECT id, name, stage FROM pipeline_items WHERE id = ?1")
+                .bind("10_story_shadow_test")
+                .fetch_one(&pool)
+                .await
+                .unwrap();

        assert_eq!(row.0, "10_story_shadow_test");
        assert_eq!(row.1.as_deref(), Some("Shadow Test"));
@@ -512,10 +513,7 @@ mod tests {
            .filename(&db_path)
            .create_if_missing(true);
        let pool = SqlitePool::connect_with(options).await.unwrap();
-        sqlx::migrate!("./migrations")
-            .run(&pool)
-            .await
-            .unwrap();
+        sqlx::migrate!("./migrations").run(&pool).await.unwrap();

        // Verify content column exists by inserting a full row.
        let now = chrono::Utc::now().to_rfc3339();
@@ -538,13 +536,12 @@ mod tests {
            .await
            .unwrap();

-        let row: (Option<String>,) = sqlx::query_as(
-            "SELECT content FROM pipeline_items WHERE id = ?1",
-        )
-        .bind("99_story_col_test")
-        .fetch_one(&pool)
-        .await
-        .unwrap();
+        let row: (Option<String>,) =
+            sqlx::query_as("SELECT content FROM pipeline_items WHERE id = ?1")
+                .bind("99_story_col_test")
+                .fetch_one(&pool)
+                .await
+                .unwrap();

        assert_eq!(row.0.as_deref(), Some(content));
    }
@@ -556,10 +553,7 @@ mod tests {
            .filename(&db_path)
            .create_if_missing(true);
        let pool = SqlitePool::connect_with(options).await.unwrap();
-        sqlx::migrate!("./migrations")
-            .run(&pool)
-            .await
-            .unwrap();
+        sqlx::migrate!("./migrations").run(&pool).await.unwrap();

        let now = chrono::Utc::now().to_rfc3339();

@@ -610,12 +604,11 @@ mod tests {
            .await
            .unwrap();

-        let row: (String,) =
-            sqlx::query_as("SELECT stage FROM pipeline_items WHERE id = ?1")
-                .bind("5_story_move")
-                .fetch_one(&pool)
-                .await
-                .unwrap();
+        let row: (String,) = sqlx::query_as("SELECT stage FROM pipeline_items WHERE id = ?1")
+            .bind("5_story_move")
+            .fetch_one(&pool)
+            .await
+            .unwrap();

        assert_eq!(row.0, "2_current");
    }
@@ -709,5 +702,4 @@ mod tests {
        row.map(|r| r.0)
    );
}
-
}
diff --git a/server/src/gateway.rs b/server/src/gateway.rs
index dfdb43be..f063d76e 100644
--- a/server/src/gateway.rs
+++ b/server/src/gateway.rs
@@ -13,7 +13,7 @@ use poem::web::Data;
use poem::{Body, Request, Response};
use reqwest::Client;
use serde::{Deserialize, Serialize};
-use serde_json::{json, Value};
+use serde_json::{Value, json};
use std::collections::BTreeMap;
use std::path::Path;
use std::sync::Arc;
@@ -41,8 +41,7 @@ impl GatewayConfig {
    pub fn load(path: &Path) -> Result<Self, String> {
        let contents = std::fs::read_to_string(path)
            .map_err(|e| format!("cannot read {}: {e}", path.display()))?;
-        toml::from_str(&contents)
-            .map_err(|e| format!("invalid projects.toml: {e}"))
+        toml::from_str(&contents).map_err(|e| format!("invalid projects.toml: {e}"))
    }
}
@@ -117,11 +116,21 @@ struct JsonRpcError {

impl JsonRpcResponse {
    fn success(id: Option<Value>, result: Value) -> Self {
-        Self { jsonrpc: "2.0", id, result: Some(result), error: None }
+        Self {
+            jsonrpc: "2.0",
+            id,
+            result: Some(result),
+            error: None,
+        }
    }

    fn error(id: Option<Value>, code: i64, message: String) -> Self {
-        Self { jsonrpc: "2.0", id, result: None, error: Some(JsonRpcError { code, message }) }
+        Self {
+            jsonrpc: "2.0",
+            id,
+            result: None,
+            error: Some(JsonRpcError { code, message }),
+        }
    }
}
@@ -147,22 +156,32 @@ pub async fn gateway_mcp_post_handler(
    let content_type = req.header("content-type").unwrap_or("");
    if !content_type.is_empty() && !content_type.contains("application/json") {
        return to_json_response(JsonRpcResponse::error(
-            None, -32700, "Unsupported Content-Type; expected application/json".into(),
+            None,
+            -32700,
+            "Unsupported Content-Type; expected application/json".into(),
        ));
    }

    let bytes = match body.into_bytes().await {
        Ok(b) => b,
-        Err(_) => return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into())),
+        Err(_) => {
+            return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into()));
+        }
    };
    let rpc: JsonRpcRequest = match serde_json::from_slice(&bytes) {
        Ok(r) => r,
-        Err(_) => return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into())),
+        Err(_) => {
+            return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into()));
+        }
    };

    if rpc.jsonrpc != "2.0" {
-        return to_json_response(JsonRpcResponse::error(rpc.id, -32600, "Invalid JSON-RPC version".into()));
+        return to_json_response(JsonRpcResponse::error(
+            rpc.id,
+            -32600,
+            "Invalid JSON-RPC version".into(),
+        ));
    }

    // Accept notifications silently.
@@ -185,7 +204,8 @@ pub async fn gateway_mcp_post_handler(
            }
        }
        "tools/call" => {
-            let tool_name = rpc.params
+            let tool_name = rpc
+                .params
                .get("name")
                .and_then(|v| v.as_str())
                .unwrap_or("");
@@ -200,7 +220,9 @@ pub async fn gateway_mcp_post_handler(
                    .header("Content-Type", "application/json")
                    .body(Body::from(resp_body)),
                Err(e) => to_json_response(JsonRpcResponse::error(
-                    rpc.id, -32603, format!("proxy error: {e}"),
+                    rpc.id,
+                    -32603,
+                    format!("proxy error: {e}"),
                )),
            }
        }
@@ -213,7 +235,9 @@ pub async fn gateway_mcp_post_handler(
                    .header("Content-Type", "application/json")
                    .body(Body::from(resp_body)),
                Err(e) => to_json_response(JsonRpcResponse::error(
-                    rpc.id, -32603, format!("proxy error: {e}"),
+                    rpc.id,
+                    -32603,
+                    format!("proxy error: {e}"),
                )),
            }
        }
@@ -295,14 +319,17 @@ async fn merge_tools_list(
        "params": {}
    });

-    let resp = state.client
+    let resp = state
+        .client
        .post(&mcp_url)
        .json(&rpc_body)
        .send()
        .await
        .map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;

-    let resp_json: Value = resp.json().await
+    let resp_json: Value = resp
+        .json()
+        .await
        .map_err(|e| format!("invalid JSON from upstream: {e}"))?;

    let mut tools: Vec<Value> = resp_json
@@ -320,14 +347,12 @@ async fn merge_tools_list(
}

/// Proxy a raw MCP request body to the active project's container.
-async fn proxy_mcp_call( - state: &GatewayState, - request_bytes: &[u8], -) -> Result, String> { +async fn proxy_mcp_call(state: &GatewayState, request_bytes: &[u8]) -> Result, String> { let url = state.active_url().await?; let mcp_url = format!("{}/mcp", url.trim_end_matches('/')); - let resp = state.client + let resp = state + .client .post(&mcp_url) .header("Content-Type", "application/json") .body(request_bytes.to_vec()) @@ -374,8 +399,12 @@ async fn handle_switch_project(params: &Value, state: &GatewayState) -> JsonRpcR if !state.config.projects.contains_key(project) { let available: Vec<&str> = state.config.projects.keys().map(|s| s.as_str()).collect(); return JsonRpcResponse::error( - None, -32602, - format!("unknown project '{project}'. Available: {}", available.join(", ")), + None, + -32602, + format!( + "unknown project '{project}'. Available: {}", + available.join(", ") + ), ); } @@ -431,7 +460,9 @@ async fn handle_gateway_status(state: &GatewayState) -> JsonRpcResponse { }), ) } - Err(e) => JsonRpcResponse::error(None, -32603, format!("invalid upstream response: {e}")), + Err(e) => { + JsonRpcResponse::error(None, -32603, format!("invalid upstream response: {e}")) + } } } Err(e) => JsonRpcResponse::error(None, -32603, format!("failed to reach {mcp_url}: {e}")), @@ -500,7 +531,11 @@ pub async fn gateway_health_handler(state: Data<&Arc>) -> Response "projects": statuses, }); - let status = if all_healthy { StatusCode::OK } else { StatusCode::SERVICE_UNAVAILABLE }; + let status = if all_healthy { + StatusCode::OK + } else { + StatusCode::SERVICE_UNAVAILABLE + }; Response::builder() .status(status) .header("Content-Type", "application/json") @@ -519,7 +554,13 @@ pub async fn run(config_path: &Path, port: u16) -> Result<(), std::io::Error> { crate::slog!("[gateway] Starting gateway on port {port}, active project: {active}"); crate::slog!( "[gateway] Registered projects: {}", - state_arc.config.projects.keys().cloned().collect::>().join(", ") + state_arc + 
.config + .projects + .keys() + .cloned() + .collect::>() + .join(", ") ); let route = poem::Route::new() @@ -569,15 +610,27 @@ url = "http://localhost:3002" #[test] fn gateway_state_rejects_empty_config() { - let config = GatewayConfig { projects: BTreeMap::new() }; + let config = GatewayConfig { + projects: BTreeMap::new(), + }; assert!(GatewayState::new(config).is_err()); } #[test] fn gateway_state_sets_first_project_active() { let mut projects = BTreeMap::new(); - projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() }); - projects.insert("beta".into(), ProjectEntry { url: "http://b:3002".into() }); + projects.insert( + "alpha".into(), + ProjectEntry { + url: "http://a:3001".into(), + }, + ); + projects.insert( + "beta".into(), + ProjectEntry { + url: "http://b:3002".into(), + }, + ); let config = GatewayConfig { projects }; let state = GatewayState::new(config).unwrap(); let active = state.active_project.blocking_read().clone(); @@ -587,7 +640,8 @@ url = "http://localhost:3002" #[test] fn gateway_tool_definitions_has_expected_tools() { let defs = gateway_tool_definitions(); - let names: Vec<&str> = defs.iter() + let names: Vec<&str> = defs + .iter() .filter_map(|d| d.get("name").and_then(|n| n.as_str())) .collect(); assert!(names.contains(&"switch_project")); @@ -598,8 +652,18 @@ url = "http://localhost:3002" #[tokio::test] async fn switch_project_to_known_project() { let mut projects = BTreeMap::new(); - projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() }); - projects.insert("beta".into(), ProjectEntry { url: "http://b:3002".into() }); + projects.insert( + "alpha".into(), + ProjectEntry { + url: "http://a:3001".into(), + }, + ); + projects.insert( + "beta".into(), + ProjectEntry { + url: "http://b:3002".into(), + }, + ); let config = GatewayConfig { projects }; let state = GatewayState::new(config).unwrap(); @@ -614,7 +678,12 @@ url = "http://localhost:3002" #[tokio::test] async fn 
switch_project_to_unknown_project_fails() { let mut projects = BTreeMap::new(); - projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() }); + projects.insert( + "alpha".into(), + ProjectEntry { + url: "http://a:3001".into(), + }, + ); let config = GatewayConfig { projects }; let state = GatewayState::new(config).unwrap(); @@ -626,7 +695,12 @@ url = "http://localhost:3002" #[tokio::test] async fn active_url_returns_correct_url() { let mut projects = BTreeMap::new(); - projects.insert("myproj".into(), ProjectEntry { url: "http://my:3001".into() }); + projects.insert( + "myproj".into(), + ProjectEntry { + url: "http://my:3001".into(), + }, + ); let config = GatewayConfig { projects }; let state = GatewayState::new(config).unwrap(); @@ -654,10 +728,14 @@ url = "http://localhost:3002" fn load_config_from_file() { let dir = tempfile::tempdir().unwrap(); let path = dir.path().join("projects.toml"); - std::fs::write(&path, r#" + std::fs::write( + &path, + r#" [projects.test] url = "http://localhost:9999" -"#).unwrap(); +"#, + ) + .unwrap(); let config = GatewayConfig::load(&path).unwrap(); assert_eq!(config.projects.len(), 1); diff --git a/server/src/http/agents_sse.rs b/server/src/http/agents_sse.rs index 135ac95f..4eeb385a 100644 --- a/server/src/http/agents_sse.rs +++ b/server/src/http/agents_sse.rs @@ -61,9 +61,10 @@ pub async fn agent_stream( .header("Content-Type", "text/event-stream") .header("Cache-Control", "no-cache") .header("Connection", "keep-alive") - .body(Body::from_bytes_stream( - futures::StreamExt::map(stream, |r| r.map(bytes::Bytes::from)), - )) + .body(Body::from_bytes_stream(futures::StreamExt::map( + stream, + |r| r.map(bytes::Bytes::from), + ))) } #[cfg(test)] @@ -77,10 +78,7 @@ mod tests { fn test_app(ctx: Arc) -> impl poem::Endpoint { Route::new() - .at( - "/agents/:story_id/:agent_name/stream", - get(agent_stream), - ) + .at("/agents/:story_id/:agent_name/stream", get(agent_stream)) .data(ctx) } @@ -123,10 +121,7 @@ mod tests 
{ }); let cli = poem::test::TestClient::new(test_app(ctx)); - let resp = cli - .get("/agents/1_story/coder-1/stream") - .send() - .await; + let resp = cli.get("/agents/1_story/coder-1/stream").send().await; let body = resp.0.into_body().into_string().await.unwrap(); @@ -178,15 +173,18 @@ mod tests { }); let cli = poem::test::TestClient::new(test_app(ctx)); - let resp = cli - .get("/agents/2_story/coder-1/stream") - .send() - .await; + let resp = cli.get("/agents/2_story/coder-1/stream").send().await; let body = resp.0.into_body().into_string().await.unwrap(); - assert!(body.contains("step 1 output"), "Output must be forwarded: {body}"); - assert!(body.contains("\"type\":\"done\""), "Done event must be forwarded: {body}"); + assert!( + body.contains("step 1 output"), + "Output must be forwarded: {body}" + ); + assert!( + body.contains("\"type\":\"done\""), + "Done event must be forwarded: {body}" + ); } #[tokio::test] @@ -195,10 +193,7 @@ mod tests { let ctx = Arc::new(AppContext::new_test(tmp.path().to_path_buf())); let cli = poem::test::TestClient::new(test_app(ctx)); - let resp = cli - .get("/agents/nonexistent/coder-1/stream") - .send() - .await; + let resp = cli.get("/agents/nonexistent/coder-1/stream").send().await; assert_eq!( resp.0.status(), diff --git a/server/src/http/anthropic.rs b/server/src/http/anthropic.rs index b1648ff4..bc5b1538 100644 --- a/server/src/http/anthropic.rs +++ b/server/src/http/anthropic.rs @@ -198,7 +198,8 @@ mod tests { fn get_api_key_returns_key_when_set() { let dir = TempDir::new().unwrap(); let ctx = test_ctx(dir.path()); - ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123")); + ctx.store + .set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123")); let result = get_anthropic_api_key(&ctx); assert_eq!(result.unwrap(), "sk-ant-test123"); } @@ -217,7 +218,8 @@ mod tests { async fn key_exists_returns_true_when_set() { let dir = TempDir::new().unwrap(); let ctx = AppContext::new_test(dir.path().to_path_buf()); - 
ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123")); + ctx.store + .set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123")); let api = AnthropicApi::new(Arc::new(ctx)); let result = api.get_anthropic_api_key_exists().await.unwrap(); assert!(result.0); @@ -265,8 +267,7 @@ mod tests { let dir = TempDir::new().unwrap(); let ctx = AppContext::new_test(dir.path().to_path_buf()); // A header value containing a newline is invalid - ctx.store - .set(KEY_ANTHROPIC_API_KEY, json!("bad\nvalue")); + ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("bad\nvalue")); let api = AnthropicApi::new(Arc::new(ctx)); let result = api.list_anthropic_models_from("http://127.0.0.1:1").await; assert!(result.is_err()); diff --git a/server/src/http/bot_command.rs b/server/src/http/bot_command.rs index 688b93a3..30f12897 100644 --- a/server/src/http/bot_command.rs +++ b/server/src/http/bot_command.rs @@ -9,8 +9,8 @@ //! their dedicated async handlers. The `reset` command is handled by the frontend //! (it clears local session state and message history) and is not routed here. 
-use crate::http::context::{AppContext, OpenApiResult}; use crate::chat::commands::CommandDispatch; +use crate::http::context::{AppContext, OpenApiResult}; use poem::http::StatusCode; use poem_openapi::{Object, OpenApi, Tags, payload::Json}; use serde::{Deserialize, Serialize}; @@ -55,9 +55,11 @@ impl BotCommandApi { &self, body: Json, ) -> OpenApiResult> { - let project_root = self.ctx.state.get_project_root().map_err(|e| { - poem::Error::from_string(e, StatusCode::BAD_REQUEST) - })?; + let project_root = self + .ctx + .state + .get_project_root() + .map_err(|e| poem::Error::from_string(e, StatusCode::BAD_REQUEST))?; let cmd = body.command.trim().to_ascii_lowercase(); let args = body.args.trim(); @@ -135,12 +137,21 @@ async fn dispatch_assign( let number_str = parts.next().unwrap_or("").trim(); let model_str = parts.next().unwrap_or("").trim(); - if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) || model_str.is_empty() { + if number_str.is_empty() + || !number_str.chars().all(|c| c.is_ascii_digit()) + || model_str.is_empty() + { return "Usage: `/assign ` (e.g. `/assign 42 opus`)".to_string(); } - crate::chat::transport::matrix::assign::handle_assign("web-ui", number_str, model_str, project_root, agents) - .await + crate::chat::transport::matrix::assign::handle_assign( + "web-ui", + number_str, + model_str, + project_root, + agents, + ) + .await } async fn dispatch_start( @@ -164,8 +175,14 @@ async fn dispatch_start( Some(hint_str) }; - crate::chat::transport::matrix::start::handle_start("web-ui", number_str, agent_hint, project_root, agents) - .await + crate::chat::transport::matrix::start::handle_start( + "web-ui", + number_str, + agent_hint, + project_root, + agents, + ) + .await } async fn dispatch_delete( @@ -177,7 +194,13 @@ async fn dispatch_delete( if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) { return "Usage: `/delete ` (e.g. 
`/delete 42`)".to_string(); } - crate::chat::transport::matrix::delete::handle_delete("web-ui", number_str, project_root, agents).await + crate::chat::transport::matrix::delete::handle_delete( + "web-ui", + number_str, + project_root, + agents, + ) + .await } async fn dispatch_rebuild( @@ -197,11 +220,13 @@ async fn dispatch_timer(args: &str, project_root: &std::path::Path) -> String { "@__web_ui__:localhost", ) { Some(cmd) => cmd, - None => return "Usage: `/timer list`, `/timer `, or `/timer cancel `".to_string(), + None => { + return "Usage: `/timer list`, `/timer `, or `/timer cancel `" + .to_string(); + } }; - let store = crate::chat::timer::TimerStore::load( - project_root.join(".huskies").join("timers.json"), - ); + let store = + crate::chat::timer::TimerStore::load(project_root.join(".huskies").join("timers.json")); crate::chat::timer::handle_timer_command(timer_cmd, &store, project_root).await } diff --git a/server/src/http/context.rs b/server/src/http/context.rs index d5149e90..37b009eb 100644 --- a/server/src/http/context.rs +++ b/server/src/http/context.rs @@ -94,8 +94,7 @@ pub struct AppContext { /// /// Wrapped in `Arc` so `AppContext` can implement `Clone`. /// `None` when no Matrix bot is configured. - pub matrix_shutdown_tx: - Option>>>, + pub matrix_shutdown_tx: Option>>>, /// Shared rate-limit retry timer store. 
/// /// Used by MCP tools (`move_story`, `stop_agent`) to cancel pending timers @@ -168,7 +167,10 @@ mod tests { fn permission_decision_equality() { assert_eq!(PermissionDecision::Deny, PermissionDecision::Deny); assert_eq!(PermissionDecision::Approve, PermissionDecision::Approve); - assert_eq!(PermissionDecision::AlwaysAllow, PermissionDecision::AlwaysAllow); + assert_eq!( + PermissionDecision::AlwaysAllow, + PermissionDecision::AlwaysAllow + ); assert_ne!(PermissionDecision::Deny, PermissionDecision::Approve); assert_ne!(PermissionDecision::Approve, PermissionDecision::AlwaysAllow); } diff --git a/server/src/http/io.rs b/server/src/http/io.rs index c3a46dff..7430dee2 100644 --- a/server/src/http/io.rs +++ b/server/src/http/io.rs @@ -168,8 +168,16 @@ mod tests { let entries = &result.0; assert!(entries.len() >= 2); - assert!(entries.iter().any(|e| e.name == "subdir" && e.kind == "dir")); - assert!(entries.iter().any(|e| e.name == "file.txt" && e.kind == "file")); + assert!( + entries + .iter() + .any(|e| e.name == "subdir" && e.kind == "dir") + ); + assert!( + entries + .iter() + .any(|e| e.name == "file.txt" && e.kind == "file") + ); } #[tokio::test] @@ -390,7 +398,11 @@ mod tests { let entries = &result.0; assert!(entries.iter().any(|e| e.name == "adir" && e.kind == "dir")); - assert!(entries.iter().any(|e| e.name == "bfile.txt" && e.kind == "file")); + assert!( + entries + .iter() + .any(|e| e.name == "bfile.txt" && e.kind == "file") + ); } #[tokio::test] @@ -403,5 +415,4 @@ mod tests { let result = api.list_directory(payload).await; assert!(result.is_err()); } - } diff --git a/server/src/http/mcp/agent_tools.rs b/server/src/http/mcp/agent_tools.rs index 8fd6c9d0..8b0c2979 100644 --- a/server/src/http/mcp/agent_tools.rs +++ b/server/src/http/mcp/agent_tools.rs @@ -5,7 +5,7 @@ use crate::http::context::AppContext; use crate::http::settings::get_editor_command_from_store; use crate::slog_warn; use crate::worktree; -use serde_json::{json, Value}; +use 
serde_json::{Value, json}; pub(super) async fn tool_start_agent(args: &Value, ctx: &AppContext) -> Result { let story_id = args @@ -72,28 +72,32 @@ pub(super) async fn tool_stop_agent(args: &Value, ctx: &AppContext) -> Result Result { let project_root = ctx.agents.get_project_root(&ctx.state).ok(); let agents = ctx.agents.list_agents()?; - serde_json::to_string_pretty(&json!(agents - .iter() - .filter(|a| { - project_root - .as_deref() - .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id)) - .unwrap_or(true) - }) - .map(|a| json!({ - "story_id": a.story_id, - "agent_name": a.agent_name, - "status": a.status.to_string(), - "session_id": a.session_id, - "worktree_path": a.worktree_path, - })) - .collect::>())) + serde_json::to_string_pretty(&json!( + agents + .iter() + .filter(|a| { + project_root + .as_deref() + .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id)) + .unwrap_or(true) + }) + .map(|a| json!({ + "story_id": a.story_id, + "agent_name": a.agent_name, + "status": a.status.to_string(), + "session_id": a.session_id, + "worktree_path": a.worktree_path, + })) + .collect::>() + )) .map_err(|e| format!("Serialization error: {e}")) } @@ -124,16 +128,12 @@ pub(super) async fn tool_get_agent_output( let project_root = ctx.agents.get_project_root(&ctx.state)?; // Collect all matching log files, oldest first. 
- let log_files = - agent_log::list_story_log_files(&project_root, story_id, agent_name_filter); + let log_files = agent_log::list_story_log_files(&project_root, story_id, agent_name_filter); let mut all_lines: Vec = Vec::new(); for path in &log_files { - let file_name = path - .file_name() - .and_then(|n| n.to_str()) - .unwrap_or("?"); + let file_name = path.file_name().and_then(|n| n.to_str()).unwrap_or("?"); all_lines.push(format!("=== {} ===", file_name.trim_end_matches(".log"))); match agent_log::read_log_as_readable_lines(path) { Ok(lines) => all_lines.extend(lines), @@ -156,8 +156,7 @@ pub(super) async fn tool_get_agent_output( let now = chrono::Utc::now().to_rfc3339(); for event in &live_events { if let Ok(event_value) = serde_json::to_value(event) - && let Some(line) = - agent_log::format_log_entry_as_text(&now, &event_value) + && let Some(line) = agent_log::format_log_entry_as_text(&now, &event_value) { all_lines.push(line); } @@ -201,8 +200,7 @@ pub(super) fn tool_get_agent_config(ctx: &AppContext) -> Result // Collect available (idle) agent names across all stages so the caller can // see at a glance which agents are free to start (story 190). 
- let mut available_names: std::collections::HashSet = - std::collections::HashSet::new(); + let mut available_names: std::collections::HashSet = std::collections::HashSet::new(); for stage in &[ PipelineStage::Coder, PipelineStage::Qa, @@ -214,19 +212,21 @@ pub(super) fn tool_get_agent_config(ctx: &AppContext) -> Result } } - serde_json::to_string_pretty(&json!(config - .agent - .iter() - .map(|a| json!({ - "name": a.name, - "role": a.role, - "model": a.model, - "allowed_tools": a.allowed_tools, - "max_turns": a.max_turns, - "max_budget_usd": a.max_budget_usd, - "available": available_names.contains(&a.name), - })) - .collect::>())) + serde_json::to_string_pretty(&json!( + config + .agent + .iter() + .map(|a| json!({ + "name": a.name, + "role": a.role, + "model": a.model, + "allowed_tools": a.allowed_tools, + "max_turns": a.max_turns, + "max_budget_usd": a.max_budget_usd, + "available": available_names.contains(&a.name), + })) + .collect::>() + )) .map_err(|e| format!("Serialization error: {e}")) } @@ -254,11 +254,13 @@ pub(super) async fn tool_wait_for_agent(args: &Value, ctx: &AppContext) -> Resul _ => None, }; - let completion = info.completion.as_ref().map(|r| json!({ - "summary": r.summary, - "gates_passed": r.gates_passed, - "gate_output": r.gate_output, - })); + let completion = info.completion.as_ref().map(|r| { + json!({ + "summary": r.summary, + "gates_passed": r.gates_passed, + "gate_output": r.gate_output, + }) + }); serde_json::to_string_pretty(&json!({ "story_id": info.story_id, @@ -295,13 +297,15 @@ pub(super) fn tool_list_worktrees(ctx: &AppContext) -> Result { let project_root = ctx.agents.get_project_root(&ctx.state)?; let entries = worktree::list_worktrees(&project_root)?; - serde_json::to_string_pretty(&json!(entries - .iter() - .map(|e| json!({ - "story_id": e.story_id, - "path": e.path.to_string_lossy(), - })) - .collect::>())) + serde_json::to_string_pretty(&json!( + entries + .iter() + .map(|e| json!({ + "story_id": e.story_id, + "path": 
e.path.to_string_lossy(), + })) + .collect::>() + )) .map_err(|e| format!("Serialization error: {e}")) } @@ -332,7 +336,10 @@ pub(super) fn tool_get_editor_command(args: &Value, ctx: &AppContext) -> Result< /// Run `git log ..HEAD --oneline` in the worktree and return the commit /// summaries, or `None` if git is unavailable or there are no new commits. -pub(super) async fn get_worktree_commits(worktree_path: &str, base_branch: &str) -> Option> { +pub(super) async fn get_worktree_commits( + worktree_path: &str, + base_branch: &str, +) -> Option> { let wt = worktree_path.to_string(); let base = base_branch.to_string(); tokio::task::spawn_blocking(move || { @@ -382,7 +389,11 @@ mod tests { let result = tool_get_agent_config(&ctx).unwrap(); let parsed: Vec = serde_json::from_str(&result).unwrap(); // Default config contains one agent entry with default values - assert_eq!(parsed.len(), 1, "default config should have one fallback agent"); + assert_eq!( + parsed.len(), + 1, + "default config should have one fallback agent" + ); assert!(parsed[0].get("name").is_some()); assert!(parsed[0].get("role").is_some()); } @@ -401,12 +412,10 @@ mod tests { let tmp = tempfile::tempdir().unwrap(); let ctx = test_ctx(tmp.path()); // No agent registered, no log file → returns "no log files found" message - let result = tool_get_agent_output( - &json!({"story_id": "99_nope", "agent_name": "bot"}), - &ctx, - ) - .await - .unwrap(); + let result = + tool_get_agent_output(&json!({"story_id": "99_nope", "agent_name": "bot"}), &ctx) + .await + .unwrap(); assert!( result.contains("No log files found"), "expected 'No log files found' message: {result}" @@ -418,12 +427,9 @@ mod tests { let tmp = tempfile::tempdir().unwrap(); let ctx = test_ctx(tmp.path()); // No agent_name provided — should succeed (no error) - let result = tool_get_agent_output( - &json!({"story_id": "99_nope"}), - &ctx, - ) - .await - .unwrap(); + let result = tool_get_agent_output(&json!({"story_id": "99_nope"}), &ctx) + 
.await + .unwrap(); assert!(result.contains("No log files found")); } @@ -440,13 +446,8 @@ mod tests { .set("project_root", json!(tmp.path().to_string_lossy().as_ref())); // Write a log file - let mut writer = AgentLogWriter::new( - tmp.path(), - "42_story_foo", - "coder-1", - "sess-test", - ) - .unwrap(); + let mut writer = + AgentLogWriter::new(tmp.path(), "42_story_foo", "coder-1", "sess-test").unwrap(); writer .write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), @@ -488,13 +489,8 @@ mod tests { ctx.store .set("project_root", json!(tmp.path().to_string_lossy().as_ref())); - let mut writer = AgentLogWriter::new( - tmp.path(), - "42_story_bar", - "coder-1", - "sess-tail", - ) - .unwrap(); + let mut writer = + AgentLogWriter::new(tmp.path(), "42_story_bar", "coder-1", "sess-tail").unwrap(); for i in 0..10 { writer .write_event(&AgentEvent::Output { @@ -514,8 +510,14 @@ mod tests { .unwrap(); // Should contain "line 7", "line 8", "line 9" but NOT "line 0" - assert!(result.contains("line 9"), "should contain last line: {result}"); - assert!(!result.contains("line 0"), "should not contain early lines: {result}"); + assert!( + result.contains("line 9"), + "should contain last line: {result}" + ); + assert!( + !result.contains("line 0"), + "should not contain early lines: {result}" + ); } #[tokio::test] @@ -529,13 +531,8 @@ mod tests { ctx.store .set("project_root", json!(tmp.path().to_string_lossy().as_ref())); - let mut writer = AgentLogWriter::new( - tmp.path(), - "42_story_baz", - "coder-1", - "sess-filter", - ) - .unwrap(); + let mut writer = + AgentLogWriter::new(tmp.path(), "42_story_baz", "coder-1", "sess-filter").unwrap(); writer .write_event(&AgentEvent::Output { story_id: "42_story_baz".to_string(), @@ -559,8 +556,14 @@ mod tests { .await .unwrap(); - assert!(result.contains("needle"), "filter should keep matching lines: {result}"); - assert!(!result.contains("haystack"), "filter should remove non-matching lines: {result}"); + assert!( + 
result.contains("needle"), + "filter should keep matching lines: {result}" + ); + assert!( + !result.contains("haystack"), + "filter should remove non-matching lines: {result}" + ); } #[tokio::test] @@ -697,10 +700,7 @@ stage = "coder" fn tool_get_editor_command_no_editor_configured() { let tmp = tempfile::tempdir().unwrap(); let ctx = test_ctx(tmp.path()); - let result = tool_get_editor_command( - &json!({"worktree_path": "/some/path"}), - &ctx, - ); + let result = tool_get_editor_command(&json!({"worktree_path": "/some/path"}), &ctx); assert!(result.is_err()); assert!(result.unwrap_err().contains("No editor configured")); } @@ -725,17 +725,14 @@ stage = "coder" let ctx = test_ctx(tmp.path()); ctx.store.set("editor_command", json!("code")); - let result = tool_get_editor_command( - &json!({"worktree_path": "/path/to/worktree"}), - &ctx, - ) - .unwrap(); + let result = + tool_get_editor_command(&json!({"worktree_path": "/path/to/worktree"}), &ctx).unwrap(); assert_eq!(result, "code /path/to/worktree"); } #[test] fn get_editor_command_in_tools_list() { - use super::super::{handle_tools_list}; + use super::super::handle_tools_list; let resp = handle_tools_list(Some(json!(1))); let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone(); let tool = tools.iter().find(|t| t["name"] == "get_editor_command"); @@ -769,9 +766,11 @@ stage = "coder" async fn wait_for_agent_tool_nonexistent_agent_returns_error() { let tmp = tempfile::tempdir().unwrap(); let ctx = test_ctx(tmp.path()); - let result = - tool_wait_for_agent(&json!({"story_id": "99_nope", "agent_name": "bot", "timeout_ms": 50}), &ctx) - .await; + let result = tool_wait_for_agent( + &json!({"story_id": "99_nope", "agent_name": "bot", "timeout_ms": 50}), + &ctx, + ) + .await; // No agent registered — should error assert!(result.is_err()); } @@ -802,13 +801,19 @@ stage = "coder" #[test] fn wait_for_agent_tool_in_list() { - use super::super::{handle_tools_list}; + use super::super::handle_tools_list; let 
resp = handle_tools_list(Some(json!(1))); let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone(); let wait_tool = tools.iter().find(|t| t["name"] == "wait_for_agent"); - assert!(wait_tool.is_some(), "wait_for_agent missing from tools list"); + assert!( + wait_tool.is_some(), + "wait_for_agent missing from tools list" + ); let t = wait_tool.unwrap(); - assert!(t["description"].as_str().unwrap().contains("block") || t["description"].as_str().unwrap().contains("Block")); + assert!( + t["description"].as_str().unwrap().contains("block") + || t["description"].as_str().unwrap().contains("Block") + ); let required = t["inputSchema"]["required"].as_array().unwrap(); let req_names: Vec<&str> = required.iter().map(|v| v.as_str().unwrap()).collect(); assert!(req_names.contains(&"story_id")); @@ -821,7 +826,8 @@ stage = "coder" let tmp = tempfile::tempdir().unwrap(); let cov_dir = tmp.path().join(".huskies/coverage"); fs::create_dir_all(&cov_dir).unwrap(); - let json_content = r#"{"data":[{"totals":{"lines":{"count":100,"covered":78,"percent":78.0}}}]}"#; + let json_content = + r#"{"data":[{"totals":{"lines":{"count":100,"covered":78,"percent":78.0}}}]}"#; fs::write(cov_dir.join("server.json"), json_content).unwrap(); let pct = read_coverage_percent_from_json(tmp.path()); diff --git a/server/src/http/mcp/diagnostics.rs b/server/src/http/mcp/diagnostics.rs index 1e7def20..8c8aa8a7 100644 --- a/server/src/http/mcp/diagnostics.rs +++ b/server/src/http/mcp/diagnostics.rs @@ -153,7 +153,8 @@ pub(super) async fn tool_prompt_permission( // Try to forward to the interactive session (WebSocket/Matrix). // If no session is active (headless agent), auto-deny the permission. - if ctx.perm_tx + if ctx + .perm_tx .send(crate::http::context::PermissionForward { request_id: request_id.clone(), tool_name: tool_name.clone(), @@ -321,8 +322,8 @@ pub(super) fn tool_dump_crdt(args: &Value) -> Result { /// MCP tool: return the server version and build hash. 
pub(super) fn tool_get_version() -> Result { - let build_hash = std::fs::read_to_string(".huskies/build_hash") - .unwrap_or_else(|_| "unknown".to_string()); + let build_hash = + std::fs::read_to_string(".huskies/build_hash").unwrap_or_else(|_| "unknown".to_string()); serde_json::to_string_pretty(&json!({ "version": env!("CARGO_PKG_VERSION"), "build_hash": build_hash.trim(), @@ -338,7 +339,10 @@ pub(super) fn tool_loc_file(args: &Value, ctx: &AppContext) -> Result Result Result = vec![ - "commit".to_string(), - "--message".to_string(), - message, - ]; + let git_args: Vec = vec!["commit".to_string(), "--message".to_string(), message]; let output = run_git_owned(git_args, dir).await?; @@ -412,12 +406,9 @@ mod tests { .output() .unwrap(); - let result = tool_git_status( - &json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await - .unwrap(); + let result = tool_git_status(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx) + .await + .unwrap(); let parsed: serde_json::Value = serde_json::from_str(&result).unwrap(); assert_eq!(parsed["clean"], true); @@ -446,18 +437,17 @@ mod tests { // Add untracked file std::fs::write(story_wt.join("new_file.txt"), "content").unwrap(); - let result = tool_git_status( - &json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await - .unwrap(); + let result = tool_git_status(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx) + .await + .unwrap(); let parsed: serde_json::Value = serde_json::from_str(&result).unwrap(); assert_eq!(parsed["clean"], false); let untracked = parsed["untracked"].as_array().unwrap(); assert!( - untracked.iter().any(|v| v.as_str().unwrap().contains("new_file.txt")), + untracked + .iter() + .any(|v| v.as_str().unwrap().contains("new_file.txt")), "expected new_file.txt in untracked: {parsed}" ); } @@ -493,12 +483,9 @@ mod tests { // Modify file (unstaged) std::fs::write(story_wt.join("file.txt"), "line1\nline2\n").unwrap(); - let result = tool_git_diff( - 
&json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await - .unwrap(); + let result = tool_git_diff(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx) + .await + .unwrap(); let parsed: serde_json::Value = serde_json::from_str(&result).unwrap(); assert!( @@ -560,11 +547,8 @@ mod tests { #[tokio::test] async fn git_add_missing_paths() { let (_tmp, story_wt, ctx) = setup_worktree(); - let result = tool_git_add( - &json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await; + let result = + tool_git_add(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx).await; assert!(result.is_err()); assert!(result.unwrap_err().contains("paths")); } @@ -609,7 +593,10 @@ mod tests { .output() .unwrap(); let output = String::from_utf8_lossy(&status.stdout); - assert!(output.contains("A file.txt"), "file should be staged: {output}"); + assert!( + output.contains("A file.txt"), + "file should be staged: {output}" + ); } // ── git_commit ──────────────────────────────────────────────────── @@ -626,11 +613,8 @@ mod tests { #[tokio::test] async fn git_commit_missing_message() { let (_tmp, story_wt, ctx) = setup_worktree(); - let result = tool_git_commit( - &json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await; + let result = + tool_git_commit(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx).await; assert!(result.is_err()); assert!(result.unwrap_err().contains("message")); } @@ -713,12 +697,9 @@ mod tests { .output() .unwrap(); - let result = tool_git_log( - &json!({"worktree_path": story_wt.to_str().unwrap()}), - &ctx, - ) - .await - .unwrap(); + let result = tool_git_log(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx) + .await + .unwrap(); let parsed: serde_json::Value = serde_json::from_str(&result).unwrap(); assert_eq!(parsed["exit_code"], 0); diff --git a/server/src/http/mcp/merge_tools.rs b/server/src/http/mcp/merge_tools.rs index 719ad856..3d6ccfec 100644 --- 
a/server/src/http/mcp/merge_tools.rs +++ b/server/src/http/mcp/merge_tools.rs @@ -4,7 +4,7 @@ use crate::http::context::AppContext; use crate::io::story_metadata::write_merge_failure; use crate::slog; use crate::slog_warn; -use serde_json::{json, Value}; +use serde_json::{Value, json}; pub(super) fn tool_merge_agent_work(args: &Value, ctx: &AppContext) -> Result { let story_id = args @@ -38,14 +38,12 @@ fn tool_get_merge_status_inner( job: &crate::agents::merge::MergeJob, ) -> Result { match &job.status { - crate::agents::merge::MergeJobStatus::Running => { - serde_json::to_string_pretty(&json!({ - "story_id": story_id, - "status": "running", - "message": "Merge pipeline is still running." - })) - .map_err(|e| format!("Serialization error: {e}")) - } + crate::agents::merge::MergeJobStatus::Running => serde_json::to_string_pretty(&json!({ + "story_id": story_id, + "status": "running", + "message": "Merge pipeline is still running." + })) + .map_err(|e| format!("Serialization error: {e}")), crate::agents::merge::MergeJobStatus::Completed(report) => { serde_json::to_string_pretty(&json!({ "story_id": story_id, @@ -58,14 +56,12 @@ fn tool_get_merge_status_inner( })) .map_err(|e| format!("Serialization error: {e}")) } - crate::agents::merge::MergeJobStatus::Failed(err) => { - serde_json::to_string_pretty(&json!({ - "story_id": story_id, - "status": "failed", - "error": err, - })) - .map_err(|e| format!("Serialization error: {e}")) - } + crate::agents::merge::MergeJobStatus::Failed(err) => serde_json::to_string_pretty(&json!({ + "story_id": story_id, + "status": "failed", + "error": err, + })) + .map_err(|e| format!("Serialization error: {e}")), } } @@ -75,8 +71,9 @@ pub(super) fn tool_get_merge_status(args: &Value, ctx: &AppContext) -> Result { @@ -127,7 +124,10 @@ pub(super) fn tool_get_merge_status(args: &Value, ctx: &AppContext) -> Result Result { +pub(super) async fn tool_move_story_to_merge( + args: &Value, + ctx: &AppContext, +) -> Result { let story_id = args 
.get("story_id") .and_then(|v| v.as_str()) @@ -176,10 +176,12 @@ pub(super) fn tool_report_merge_failure(args: &Value, ctx: &AppContext) -> Resul // Broadcast the failure so the Matrix notification listener can post an // error message to configured rooms without coupling this tool to the bot. - let _ = ctx.watcher_tx.send(crate::io::watcher::WatcherEvent::MergeFailure { - story_id: story_id.to_string(), - reason: reason.to_string(), - }); + let _ = ctx + .watcher_tx + .send(crate::io::watcher::WatcherEvent::MergeFailure { + story_id: story_id.to_string(), + reason: reason.to_string(), + }); // Persist the failure reason to the story file's front matter so it // survives server restarts and is visible in the web UI. @@ -238,7 +240,7 @@ mod tests { #[test] fn merge_agent_work_in_tools_list() { - use super::super::{handle_tools_list}; + use super::super::handle_tools_list; let resp = handle_tools_list(Some(json!(1))); let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone(); let tool = tools.iter().find(|t| t["name"] == "merge_agent_work"); @@ -254,11 +256,14 @@ mod tests { #[test] fn move_story_to_merge_in_tools_list() { - use super::super::{handle_tools_list}; + use super::super::handle_tools_list; let resp = handle_tools_list(Some(json!(1))); let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone(); let tool = tools.iter().find(|t| t["name"] == "move_story_to_merge"); - assert!(tool.is_some(), "move_story_to_merge missing from tools list"); + assert!( + tool.is_some(), + "move_story_to_merge missing from tools list" + ); let t = tool.unwrap(); assert!(t["description"].is_string()); let required = t["inputSchema"]["required"].as_array().unwrap(); @@ -338,7 +343,7 @@ mod tests { #[test] fn report_merge_failure_in_tools_list() { - use super::super::{handle_tools_list}; + use super::super::handle_tools_list; let resp = handle_tools_list(Some(json!(1))); let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone(); let tool = 
tools.iter().find(|t| t["name"] == "report_merge_failure"); diff --git a/server/src/http/mcp/mod.rs b/server/src/http/mcp/mod.rs index 59b65a47..c01d071c 100644 --- a/server/src/http/mcp/mod.rs +++ b/server/src/http/mcp/mod.rs @@ -1,12 +1,12 @@ //! MCP server — Model Context Protocol endpoint dispatching tool calls to handlers. -use crate::slog_warn; use crate::http::context::AppContext; +use crate::slog_warn; use poem::handler; use poem::http::StatusCode; use poem::web::Data; use poem::{Body, Request, Response}; use serde::{Deserialize, Serialize}; -use serde_json::{json, Value}; +use serde_json::{Value, json}; use std::sync::Arc; pub mod agent_tools; @@ -1212,15 +1212,8 @@ fn handle_tools_list(id: Option) -> JsonRpcResponse { // ── Tool dispatch ───────────────────────────────────────────────── -async fn handle_tools_call( - id: Option, - params: &Value, - ctx: &AppContext, -) -> JsonRpcResponse { - let tool_name = params - .get("name") - .and_then(|v| v.as_str()) - .unwrap_or(""); +async fn handle_tools_call(id: Option, params: &Value, ctx: &AppContext) -> JsonRpcResponse { + let tool_name = params.get("name").and_then(|v| v.as_str()).unwrap_or(""); let args = params.get("arguments").cloned().unwrap_or(json!({})); let result = match tool_name { @@ -1460,7 +1453,12 @@ mod tests { )); let result = resp.result.unwrap(); assert_eq!(result["isError"], true); - assert!(result["content"][0]["text"].as_str().unwrap().contains("Unknown tool")); + assert!( + result["content"][0]["text"] + .as_str() + .unwrap() + .contains("Unknown tool") + ); } #[test] @@ -1572,7 +1570,10 @@ mod tests { ) .await; assert!( - body["error"]["message"].as_str().unwrap_or("").contains("version"), + body["error"]["message"] + .as_str() + .unwrap_or("") + .contains("version"), "expected version error: {body}" ); } @@ -1599,9 +1600,7 @@ mod tests { let resp = cli .post("/mcp") .header("content-type", "application/json") - .body( - 
r#"{"jsonrpc":"2.0","id":null,"method":"notifications/initialized","params":{}}"#, - ) + .body(r#"{"jsonrpc":"2.0","id":null,"method":"notifications/initialized","params":{}}"#) .send() .await; assert_eq!(resp.0.status(), poem::http::StatusCode::ACCEPTED); @@ -1631,7 +1630,10 @@ mod tests { ) .await; assert!( - body["error"]["message"].as_str().unwrap_or("").contains("Unknown method"), + body["error"]["message"] + .as_str() + .unwrap_or("") + .contains("Unknown method"), "expected unknown method error: {body}" ); } @@ -1719,14 +1721,21 @@ mod tests { let body = resp.0.into_body().into_string().await.unwrap(); // Body is SSE-wrapped: "data: {…}\n\n" — strip the prefix and verify it's // a valid JSON-RPC result (not an error about missing agent_name). - let json_part = body.trim_start_matches("data: ").trim_end_matches("\n\n").trim(); + let json_part = body + .trim_start_matches("data: ") + .trim_end_matches("\n\n") + .trim(); let parsed: serde_json::Value = serde_json::from_str(json_part) .unwrap_or_else(|_| panic!("expected JSON-RPC in SSE body, got: {body}")); - assert!(parsed.get("result").is_some(), - "expected JSON-RPC result (disk-based handler ran): {parsed}"); + assert!( + parsed.get("result").is_some(), + "expected JSON-RPC result (disk-based handler ran): {parsed}" + ); // Must NOT be an error about missing agent_name (agent_name is now optional) - assert!(parsed.get("error").is_none(), - "unexpected error when agent_name omitted: {parsed}"); + assert!( + parsed.get("error").is_none(), + "unexpected error when agent_name omitted: {parsed}" + ); } #[tokio::test] @@ -1749,8 +1758,14 @@ mod tests { let body = resp.0.into_body().into_string().await.unwrap(); assert!(body.contains("data:"), "expected SSE data prefix: {body}"); // Must NOT return isError — should be a success result with "No log files found" - assert!(!body.contains("isError"), "expected no isError for missing agent: {body}"); - assert!(body.contains("No log files found"), "expected not-found 
message: {body}"); + assert!( + !body.contains("isError"), + "expected no isError for missing agent: {body}" + ); + assert!( + body.contains("No log files found"), + "expected not-found message: {body}" + ); } #[tokio::test] @@ -1760,8 +1775,7 @@ mod tests { // Agent has exited (not in pool) but wrote logs to disk. let tmp = tempfile::tempdir().unwrap(); let root = tmp.path(); - let mut writer = - AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-sse").unwrap(); + let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-sse").unwrap(); writer .write_event(&AgentEvent::Output { story_id: "42_story_foo".to_string(), @@ -1781,7 +1795,13 @@ mod tests { .send() .await; let body = resp.0.into_body().into_string().await.unwrap(); - assert!(body.contains("disk output"), "expected disk log content in SSE response: {body}"); - assert!(!body.contains("isError"), "expected no error for exited agent with logs: {body}"); + assert!( + body.contains("disk output"), + "expected disk log content in SSE response: {body}" + ); + assert!( + !body.contains("isError"), + "expected no error for exited agent with logs: {body}" + ); } } diff --git a/server/src/http/mcp/qa_tools.rs b/server/src/http/mcp/qa_tools.rs index 40aab25f..67b74db7 100644 --- a/server/src/http/mcp/qa_tools.rs +++ b/server/src/http/mcp/qa_tools.rs @@ -1,5 +1,7 @@ //! MCP QA tools — request, approve, and reject QA reviews for stories. 
-use crate::agents::{move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa}; +use crate::agents::{ + move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa, +}; use crate::http::context::AppContext; use crate::slog; use crate::slog_warn; @@ -63,11 +65,10 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result Result Result Result Err(format!("Task join error: {e}")), Ok(Ok(Err(e))) => Err(format!("Failed to execute command: {e}")), - Ok(Ok(Ok(output))) => { - serde_json::to_string_pretty(&json!({ - "stdout": String::from_utf8_lossy(&output.stdout), - "stderr": String::from_utf8_lossy(&output.stderr), - "exit_code": output.status.code().unwrap_or(-1), - "timed_out": false, - })) - .map_err(|e| format!("Serialization error: {e}")) - } + Ok(Ok(Ok(output))) => serde_json::to_string_pretty(&json!({ + "stdout": String::from_utf8_lossy(&output.stdout), + "stderr": String::from_utf8_lossy(&output.stderr), + "exit_code": output.status.code().unwrap_or(-1), + "timed_out": false, + })) + .map_err(|e| format!("Serialization error: {e}")), } } @@ -172,7 +164,7 @@ pub(super) fn handle_run_command_sse( params: &Value, ctx: &AppContext, ) -> Response { - use super::{to_sse_response, JsonRpcResponse}; + use super::{JsonRpcResponse, to_sse_response}; let args = params.get("arguments").cloned().unwrap_or(json!({})); @@ -183,7 +175,7 @@ pub(super) fn handle_run_command_sse( id, -32602, "Missing required argument: command".into(), - )) + )); } }; @@ -194,7 +186,7 @@ pub(super) fn handle_run_command_sse( id, -32602, "Missing required argument: working_dir".into(), - )) + )); } }; @@ -326,9 +318,7 @@ pub(super) fn handle_run_command_sse( .status(poem::http::StatusCode::OK) .header("Content-Type", "text/event-stream") .header("Cache-Control", "no-cache") - .body(Body::from_bytes_stream(stream.map(|r| { - r.map(Bytes::from) - }))) + .body(Body::from_bytes_stream(stream.map(|r| r.map(Bytes::from)))) } /// Truncate 
output to at most `max_lines` lines, keeping the tail. @@ -364,7 +354,11 @@ fn parse_test_counts(output: &str) -> (u64, u64) { fn extract_count(line: &str, label: &str) -> Option { let pos = line.find(label)?; let before = line[..pos].trim_end(); - let num_str: String = before.chars().rev().take_while(|c| c.is_ascii_digit()).collect(); + let num_str: String = before + .chars() + .rev() + .take_while(|c| c.is_ascii_digit()) + .collect(); if num_str.is_empty() { return None; } @@ -391,10 +385,7 @@ pub(super) async fn tool_run_tests(args: &Value, ctx: &AppContext) -> Result Result { +pub(super) async fn tool_get_test_result(args: &Value, ctx: &AppContext) -> Result { let project_root = ctx.agents.get_project_root(&ctx.state)?; let working_dir = match args.get("worktree_path").and_then(|v| v.as_str()) { @@ -703,9 +691,7 @@ pub(super) async fn tool_run_lint(args: &Value, ctx: &AppContext) -> Result Result { +fn format_test_result(result: &crate::http::context::TestJobResult) -> Result { serde_json::to_string_pretty(&json!({ "passed": result.passed, "exit_code": result.exit_code, @@ -854,11 +840,8 @@ mod tests { async fn tool_run_command_blocks_dangerous_command() { let tmp = tempfile::tempdir().unwrap(); let ctx = test_ctx(tmp.path()); - let result = tool_run_command( - &json!({"command": "rm -rf /", "working_dir": "/tmp"}), - &ctx, - ) - .await; + let result = + tool_run_command(&json!({"command": "rm -rf /", "working_dir": "/tmp"}), &ctx).await; assert!(result.is_err()); assert!(result.unwrap_err().contains("blocked")); } @@ -1017,7 +1000,10 @@ mod tests { let ctx = test_ctx(tmp.path()); // No script/test in tmp — should return Err let result = tool_run_tests(&json!({}), &ctx).await; - assert!(result.is_err(), "expected error for missing script: {result:?}"); + assert!( + result.is_err(), + "expected error for missing script: {result:?}" + ); assert!( result.unwrap_err().contains("not found"), "error should mention 'not found'" @@ -1073,8 +1059,11 @@ mod tests { 
std::fs::create_dir_all(&wt_dir).unwrap(); let ctx = test_ctx(tmp.path()); // tmp.path() itself is outside worktrees → should fail validation - let result = - tool_run_tests(&json!({"worktree_path": tmp.path().to_str().unwrap()}), &ctx).await; + let result = tool_run_tests( + &json!({"worktree_path": tmp.path().to_str().unwrap()}), + &ctx, + ) + .await; assert!(result.is_err()); assert!( result.unwrap_err().contains("worktrees"), @@ -1118,8 +1107,11 @@ mod tests { let wt_dir = tmp.path().join(".huskies").join("worktrees"); std::fs::create_dir_all(&wt_dir).unwrap(); let ctx = test_ctx(tmp.path()); - let result = - tool_run_build(&json!({"worktree_path": tmp.path().to_str().unwrap()}), &ctx).await; + let result = tool_run_build( + &json!({"worktree_path": tmp.path().to_str().unwrap()}), + &ctx, + ) + .await; assert!(result.is_err()); assert!(result.unwrap_err().contains("worktrees")); } @@ -1184,9 +1176,18 @@ mod tests { let lines: Vec = (1..=200).map(|i| format!("line {i}")).collect(); let text = lines.join("\n"); let result = truncate_output(&text, 50); - assert!(result.contains("line 200"), "should keep last line: {result}"); - assert!(result.contains("omitted"), "should note omitted lines: {result}"); - assert!(!result.contains("line 1\n"), "should not keep first line: {result}"); + assert!( + result.contains("line 200"), + "should keep last line: {result}" + ); + assert!( + result.contains("omitted"), + "should note omitted lines: {result}" + ); + assert!( + !result.contains("line 1\n"), + "should not keep first line: {result}" + ); } // ── parse_test_counts ───────────────────────────────────────────── diff --git a/server/src/http/mcp/status_tools.rs b/server/src/http/mcp/status_tools.rs index d83bde16..f370b9e9 100644 --- a/server/src/http/mcp/status_tools.rs +++ b/server/src/http/mcp/status_tools.rs @@ -20,7 +20,10 @@ fn parse_ac_items(contents: &str) -> Vec<(String, bool)> { break; } if in_ac_section { - if let Some(rest) = trimmed.strip_prefix("- [x] 
").or(trimmed.strip_prefix("- [X] ")) { + if let Some(rest) = trimmed + .strip_prefix("- [x] ") + .or(trimmed.strip_prefix("- [X] ")) + { items.push((rest.to_string(), true)); } else if let Some(rest) = trimmed.strip_prefix("- [ ] ") { items.push((rest.to_string(), false)); @@ -33,10 +36,7 @@ fn parse_ac_items(contents: &str) -> Vec<(String, bool)> { /// Find the most recent log file for any agent under `.huskies/logs/{story_id}/`. fn find_most_recent_log(project_root: &Path, story_id: &str) -> Option { - let dir = project_root - .join(".huskies") - .join("logs") - .join(story_id); + let dir = project_root.join(".huskies").join("logs").join(story_id); if !dir.is_dir() { return None; @@ -68,8 +68,7 @@ fn find_most_recent_log(project_root: &Path, story_id: &str) -> Option /// Return the last N raw lines from a file. fn last_n_lines(path: &Path, n: usize) -> Result, String> { - let content = - fs::read_to_string(path).map_err(|e| format!("Failed to read log file: {e}"))?; + let content = fs::read_to_string(path).map_err(|e| format!("Failed to read log file: {e}"))?; let lines: Vec = content .lines() .rev() @@ -172,9 +171,8 @@ pub(super) async fn tool_status(args: &Value, ctx: &AppContext) -> Result Result Result { @@ -549,8 +554,7 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result< // 3. Remove worktree (best-effort). if let Ok(config) = crate::config::ProjectConfig::load(&project_root) { - match crate::worktree::remove_worktree_by_story_id(&project_root, story_id, &config).await - { + match crate::worktree::remove_worktree_by_story_id(&project_root, story_id, &config).await { Ok(()) => slog_warn!("[delete_story] Removed worktree for '{story_id}'"), Err(e) => slog_warn!("[delete_story] Worktree removal for '{story_id}': {e}"), } @@ -573,7 +577,10 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result< // 5. Delete from database content store and shadow table. 
let found_in_db = crate::db::read_content(story_id).is_some() - || crate::pipeline_state::read_typed(story_id).ok().flatten().is_some(); + || crate::pipeline_state::read_typed(story_id) + .ok() + .flatten() + .is_some(); crate::db::delete_item(story_id); slog_warn!("[delete_story] Deleted '{story_id}' from content store / shadow table"); @@ -599,7 +606,9 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result< deleted_from_fs = true; } Err(e) => { - slog_warn!("[delete_story] Failed to delete filesystem shadow '{story_id}' from work/{stage}/: {e}"); + slog_warn!( + "[delete_story] Failed to delete filesystem shadow '{story_id}' from work/{stage}/: {e}" + ); failed_steps.push(format!("delete_filesystem({stage}): {e}")); } } @@ -820,7 +829,10 @@ mod tests { .unwrap(); assert!(result.contains("Created story:")); - let story_id = result.trim_start_matches("Created story: ").trim().to_string(); + let story_id = result + .trim_start_matches("Created story: ") + .trim() + .to_string(); let content = crate::db::read_content(&story_id).expect("story content should exist"); assert!( content.contains("## Description"), @@ -844,11 +856,7 @@ mod tests { ("4_merge", "9940_story_merge", "Merge Story"), ("5_done", "9950_story_done", "Done Story"), ] { - crate::db::write_item_with_content( - id, - stage, - &format!("---\nname: \"{name}\"\n---\n"), - ); + crate::db::write_item_with_content(id, stage, &format!("---\nname: \"{name}\"\n---\n")); } let ctx = test_ctx(tmp.path()); @@ -869,7 +877,9 @@ mod tests { // Backlog should contain our item let backlog = parsed["backlog"].as_array().unwrap(); assert!( - backlog.iter().any(|b| b["story_id"] == "9910_story_upcoming"), + backlog + .iter() + .any(|b| b["story_id"] == "9910_story_upcoming"), "expected 9910_story_upcoming in backlog: {backlog:?}" ); } @@ -896,7 +906,9 @@ mod tests { let parsed: Value = serde_json::from_str(&result).unwrap(); let active = parsed["active"].as_array().unwrap(); - let item = 
active.iter().find(|i| i["story_id"] == "9921_story_active") + let item = active + .iter() + .find(|i| i["story_id"] == "9921_story_active") .expect("expected 9921_story_active in active items"); assert_eq!(item["stage"], "current"); assert!(!item["agent"].is_null(), "agent should be present"); @@ -1115,7 +1127,10 @@ mod tests { ) .unwrap(); - assert!(result.contains("_bug_login_crash"), "result should contain bug ID: {result}"); + assert!( + result.contains("_bug_login_crash"), + "result should contain bug ID: {result}" + ); // Extract the actual bug ID from the result message (format: "Created bug: "). let bug_id = result.trim_start_matches("Created bug: ").trim(); // Bug content should exist in the CRDT content store. @@ -1157,11 +1172,15 @@ mod tests { let result = tool_list_bugs(&ctx).unwrap(); let parsed: Vec = serde_json::from_str(&result).unwrap(); assert!( - parsed.iter().any(|b| b["bug_id"] == "9902_bug_crash" && b["name"] == "App Crash"), + parsed + .iter() + .any(|b| b["bug_id"] == "9902_bug_crash" && b["name"] == "App Crash"), "expected 9902_bug_crash in bugs list: {parsed:?}" ); assert!( - parsed.iter().any(|b| b["bug_id"] == "9903_bug_typo" && b["name"] == "Typo in Header"), + parsed + .iter() + .any(|b| b["bug_id"] == "9903_bug_typo" && b["name"] == "Typo in Header"), "expected 9903_bug_typo in bugs list: {parsed:?}" ); } @@ -1252,12 +1271,14 @@ mod tests { ) .unwrap(); - assert!(result.contains("_spike_compare_encoders"), "result should contain spike ID: {result}"); + assert!( + result.contains("_spike_compare_encoders"), + "result should contain spike ID: {result}" + ); // Extract the actual spike ID from the result message (format: "Created spike: "). let spike_id = result.trim_start_matches("Created spike: ").trim(); // Spike content should exist in the CRDT content store. 
- let contents = crate::db::read_content(spike_id) - .expect("expected spike content in CRDT"); + let contents = crate::db::read_content(spike_id).expect("expected spike content in CRDT"); assert!(contents.starts_with("---\nname: \"Compare Encoders\"\n---")); assert!(contents.contains("Which encoder is fastest?")); } @@ -1268,13 +1289,15 @@ mod tests { let ctx = test_ctx(tmp.path()); let result = tool_create_spike(&json!({"name": "My Spike"}), &ctx).unwrap(); - assert!(result.contains("_spike_my_spike"), "result should contain spike ID: {result}"); + assert!( + result.contains("_spike_my_spike"), + "result should contain spike ID: {result}" + ); // Extract the actual spike ID from the result message (format: "Created spike: "). let spike_id = result.trim_start_matches("Created spike: ").trim(); // Spike content should exist in the CRDT content store. - let contents = crate::db::read_content(spike_id) - .expect("expected spike content in CRDT"); + let contents = crate::db::read_content(spike_id).expect("expected spike content in CRDT"); assert!(contents.starts_with("---\nname: \"My Spike\"\n---")); assert!(contents.contains("## Question\n\n- TBD\n")); } @@ -1326,7 +1349,9 @@ mod tests { let ctx = test_ctx(tmp.path()); let result = tool_validate_stories(&ctx).unwrap(); let parsed: Vec = serde_json::from_str(&result).unwrap(); - let item = parsed.iter().find(|v| v["story_id"] == "9907_test") + let item = parsed + .iter() + .find(|v| v["story_id"] == "9907_test") .expect("expected 9907_test in validation results"); assert_eq!(item["valid"], true); } @@ -1336,16 +1361,14 @@ mod tests { let tmp = tempfile::tempdir().unwrap(); crate::db::ensure_content_store(); - crate::db::write_item_with_content( - "9908_test", - "2_current", - "## No front matter at all\n", - ); + crate::db::write_item_with_content("9908_test", "2_current", "## No front matter at all\n"); let ctx = test_ctx(tmp.path()); let result = tool_validate_stories(&ctx).unwrap(); let parsed: Vec = 
serde_json::from_str(&result).unwrap(); - let item = parsed.iter().find(|v| v["story_id"] == "9908_test") + let item = parsed + .iter() + .find(|v| v["story_id"] == "9908_test") .expect("expected 9908_test in validation results"); assert_eq!(item["valid"], false); } @@ -1551,11 +1574,7 @@ mod tests { let current_dir = tmp.path().join(".huskies/work/2_current"); std::fs::create_dir_all(¤t_dir).unwrap(); let content = "---\nname: No Branch\n---\n"; - std::fs::write( - current_dir.join("51_story_no_branch.md"), - content, - ) - .unwrap(); + std::fs::write(current_dir.join("51_story_no_branch.md"), content).unwrap(); crate::db::ensure_content_store(); crate::db::write_content("51_story_no_branch", content); @@ -1594,8 +1613,14 @@ mod tests { assert!(result.is_ok(), "Expected ok: {result:?}"); let content = crate::db::read_content("504_bool_test").unwrap(); - assert!(content.contains("blocked: false"), "bool should be unquoted: {content}"); - assert!(!content.contains("blocked: \"false\""), "bool must not be quoted: {content}"); + assert!( + content.contains("blocked: false"), + "bool should be unquoted: {content}" + ); + assert!( + !content.contains("blocked: \"false\""), + "bool must not be quoted: {content}" + ); } #[test] @@ -1615,8 +1640,14 @@ mod tests { assert!(result.is_ok(), "Expected ok: {result:?}"); let content = crate::db::read_content("504_num_test").unwrap(); - assert!(content.contains("retry_count: 3"), "number should be unquoted: {content}"); - assert!(!content.contains("retry_count: \"3\""), "number must not be quoted: {content}"); + assert!( + content.contains("retry_count: 3"), + "number should be unquoted: {content}" + ); + assert!( + !content.contains("retry_count: \"3\""), + "number must not be quoted: {content}" + ); } #[test] @@ -1637,8 +1668,14 @@ mod tests { let content = crate::db::read_content("504_arr_test").unwrap(); // YAML inline sequences use spaces after commas - assert!(content.contains("depends_on: [490, 491]"), "array should be 
unquoted YAML: {content}"); - assert!(!content.contains("depends_on: \""), "array must not be quoted: {content}"); + assert!( + content.contains("depends_on: [490, 491]"), + "array should be unquoted YAML: {content}" + ); + assert!( + !content.contains("depends_on: \""), + "array must not be quoted: {content}" + ); // The YAML must be parseable as a vec let meta = crate::io::story_metadata::parse_front_matter(&content) @@ -1677,8 +1714,10 @@ mod tests { ); let ctx = test_ctx(tmp.path()); - let result = - tool_check_criterion(&json!({"story_id": "9904_test", "criterion_index": 0}), &ctx); + let result = tool_check_criterion( + &json!({"story_id": "9904_test", "criterion_index": 0}), + &ctx, + ); assert!(result.is_ok(), "Expected ok: {result:?}"); assert!(result.unwrap().contains("Criterion 0 checked")); } @@ -1719,11 +1758,8 @@ mod tests { assert_eq!(ctx.timer_store.list().len(), 1); // Delete the story. - let result = tool_delete_story( - &json!({"story_id": "478_story_rate_limit_repro"}), - &ctx, - ) - .await; + let result = + tool_delete_story(&json!({"story_id": "478_story_rate_limit_repro"}), &ctx).await; assert!(result.is_ok(), "delete_story failed: {result:?}"); // Timer must be gone — fast-forwarding past the scheduled time should @@ -1741,9 +1777,7 @@ mod tests { // Filesystem shadow must also be gone. assert!( - !backlog - .join("478_story_rate_limit_repro.md") - .exists(), + !backlog.join("478_story_rate_limit_repro.md").exists(), "filesystem shadow was not removed" ); } diff --git a/server/src/http/mcp/wizard_tools.rs b/server/src/http/mcp/wizard_tools.rs index ed044365..a92b74f5 100644 --- a/server/src/http/mcp/wizard_tools.rs +++ b/server/src/http/mcp/wizard_tools.rs @@ -24,7 +24,10 @@ use std::path::Path; /// Returns `None` for `Scaffold` since that step has no single output file — it /// creates the full `.huskies/` directory structure and is handled by /// `huskies init` before the server starts. 
-pub(crate) fn step_output_path(project_root: &Path, step: WizardStep) -> Option { +pub(crate) fn step_output_path( + project_root: &Path, + step: WizardStep, +) -> Option { match step { WizardStep::Context => Some( project_root @@ -58,7 +61,11 @@ pub(crate) fn is_script_step(step: WizardStep) -> bool { /// Existing files (including `CLAUDE.md`) are never overwritten — the wizard /// appends or skips per the acceptance criteria. For script steps the file is /// also made executable after writing. -pub(crate) fn write_if_missing(path: &Path, content: &str, executable: bool) -> Result { +pub(crate) fn write_if_missing( + path: &Path, + content: &str, + executable: bool, +) -> Result { if path.exists() { return Ok(false); // already present — skip silently } @@ -66,8 +73,7 @@ pub(crate) fn write_if_missing(path: &Path, content: &str, executable: bool) -> fs::create_dir_all(parent) .map_err(|e| format!("Failed to create directory {}: {e}", parent.display()))?; } - fs::write(path, content) - .map_err(|e| format!("Failed to write {}: {e}", path.display()))?; + fs::write(path, content).map_err(|e| format!("Failed to write {}: {e}", path.display()))?; if executable { #[cfg(unix)] @@ -186,7 +192,8 @@ pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String { - High-level goal of the project\n\ - Core features\n\ - Domain concepts and entities\n\ - - Glossary of abbreviations and technical terms".to_string() + - Glossary of abbreviations and technical terms" + .to_string() } else { "Read the project source tree and generate a `.huskies/specs/00_CONTEXT.md` describing:\n\ - High-level goal of the project\n\ @@ -262,7 +269,9 @@ pub(crate) fn generation_hint(step: WizardStep, project_root: &Path) -> String { "Generate a `script/test_coverage` shell script (#!/usr/bin/env bash, set -euo pipefail) that generates a test coverage report (e.g. 
`cargo llvm-cov nextest` or `npm run coverage`).".to_string() } } - WizardStep::Scaffold => "Scaffold step is handled automatically by `huskies init`.".to_string(), + WizardStep::Scaffold => { + "Scaffold step is handled automatically by `huskies init`.".to_string() + } } } @@ -427,11 +436,8 @@ mod tests { fn wizard_generate_with_content_stages_content() { let dir = TempDir::new().unwrap(); let ctx = setup(&dir); - let result = tool_wizard_generate( - &serde_json::json!({"content": "# My Project"}), - &ctx, - ) - .unwrap(); + let result = + tool_wizard_generate(&serde_json::json!({"content": "# My Project"}), &ctx).unwrap(); assert!(result.contains("staged")); let state = WizardState::load(dir.path()).unwrap(); assert_eq!(state.steps[1].status, StepStatus::AwaitingConfirmation); @@ -443,11 +449,7 @@ mod tests { let dir = TempDir::new().unwrap(); let ctx = setup(&dir); // Stage content for Context step. - tool_wizard_generate( - &serde_json::json!({"content": "# Context content"}), - &ctx, - ) - .unwrap(); + tool_wizard_generate(&serde_json::json!({"content": "# Context content"}), &ctx).unwrap(); let result = tool_wizard_confirm(&ctx).unwrap(); assert!(result.contains("confirmed")); // File should now exist. @@ -478,11 +480,7 @@ mod tests { std::fs::write(&context_path, "original content").unwrap(); // Stage and confirm — existing file should NOT be overwritten. - tool_wizard_generate( - &serde_json::json!({"content": "new content"}), - &ctx, - ) - .unwrap(); + tool_wizard_generate(&serde_json::json!({"content": "new content"}), &ctx).unwrap(); let result = tool_wizard_confirm(&ctx).unwrap(); assert!(result.contains("already exists")); assert_eq!( @@ -507,11 +505,7 @@ mod tests { let dir = TempDir::new().unwrap(); let ctx = setup(&dir); // Stage content first. 
- tool_wizard_generate( - &serde_json::json!({"content": "some content"}), - &ctx, - ) - .unwrap(); + tool_wizard_generate(&serde_json::json!({"content": "some content"}), &ctx).unwrap(); let result = tool_wizard_retry(&ctx).unwrap(); assert!(result.contains("reset")); let state = WizardState::load(dir.path()).unwrap(); diff --git a/server/src/http/mod.rs b/server/src/http/mod.rs index 0bb92e42..3a2decdb 100644 --- a/server/src/http/mod.rs +++ b/server/src/http/mod.rs @@ -2,8 +2,6 @@ pub mod agents; pub mod agents_sse; pub mod anthropic; -#[cfg(test)] -pub(crate) mod test_helpers; pub mod assets; pub mod bot_command; pub mod chat; @@ -14,6 +12,8 @@ pub mod mcp; pub mod model; pub mod oauth; pub mod settings; +#[cfg(test)] +pub(crate) mod test_helpers; pub mod workflow; pub mod project; @@ -95,10 +95,7 @@ pub fn build_routes( "/callback", get(oauth::oauth_callback).data(oauth_state.clone()), ) - .at( - "/oauth/status", - get(oauth::oauth_status), - ) + .at("/oauth/status", get(oauth::oauth_status)) .at("/debug/crdt", get(debug_crdt_handler)) .at("/assets/*path", get(assets::embedded_asset)) .at("/", get(assets::embedded_index)) diff --git a/server/src/http/oauth.rs b/server/src/http/oauth.rs index fbdeec16..bb5fe52b 100644 --- a/server/src/http/oauth.rs +++ b/server/src/http/oauth.rs @@ -67,14 +67,21 @@ fn compute_code_challenge(verifier: &str) -> String { /// Base64url-encode without padding (RFC 7636). 
fn base64url_encode(data: &[u8]) -> String { // Standard base64 then convert to base64url - const CHARS: &[u8] = - b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; + const CHARS: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; let mut result = String::new(); let mut i = 0; while i < data.len() { let b0 = data[i] as u32; - let b1 = if i + 1 < data.len() { data[i + 1] as u32 } else { 0 }; - let b2 = if i + 2 < data.len() { data[i + 2] as u32 } else { 0 }; + let b1 = if i + 1 < data.len() { + data[i + 1] as u32 + } else { + 0 + }; + let b2 = if i + 2 < data.len() { + data[i + 2] as u32 + } else { + 0 + }; let triple = (b0 << 16) | (b1 << 8) | b2; result.push(CHARS[((triple >> 18) & 0x3F) as usize] as char); @@ -238,7 +245,11 @@ pub async fn oauth_callback( let status = resp.status(); let body = resp.text().await.unwrap_or_default(); - slog!("[oauth] Token exchange response (HTTP {}): {}", status, body); + slog!( + "[oauth] Token exchange response (HTTP {}): {}", + status, + body + ); if !status.is_success() { return html_response( diff --git a/server/src/http/settings.rs b/server/src/http/settings.rs index fe3be876..d1c000cb 100644 --- a/server/src/http/settings.rs +++ b/server/src/http/settings.rs @@ -79,7 +79,10 @@ impl SettingsApi { payload: Json, ) -> OpenApiResult> { let editor_command = payload.0.editor_command; - let trimmed = editor_command.as_deref().map(str::trim).filter(|s| !s.is_empty()); + let trimmed = editor_command + .as_deref() + .map(str::trim) + .filter(|s| !s.is_empty()); match trimmed { Some(cmd) => { self.ctx.store.set(EDITOR_COMMAND_KEY, json!(cmd)); @@ -256,9 +259,7 @@ mod tests { async fn get_editor_http_handler_returns_null_when_not_set() { let dir = TempDir::new().unwrap(); let ctx = test_ctx(dir.path()); - let api = SettingsApi { - ctx: Arc::new(ctx), - }; + let api = SettingsApi { ctx: Arc::new(ctx) }; let result = api.get_editor().await.unwrap().0; assert!(result.editor_command.is_none()); 
     }
@@ -267,9 +268,7 @@ mod tests {
     async fn set_editor_http_handler_stores_value() {
         let dir = TempDir::new().unwrap();
         let ctx = test_ctx(dir.path());
-        let api = SettingsApi {
-            ctx: Arc::new(ctx),
-        };
+        let api = SettingsApi { ctx: Arc::new(ctx) };
         let result = api
             .set_editor(Json(EditorCommandPayload {
                 editor_command: Some("zed".to_string()),
@@ -284,9 +283,7 @@ mod tests {
     async fn set_editor_http_handler_clears_value_when_null() {
         let dir = TempDir::new().unwrap();
         let ctx = test_ctx(dir.path());
-        let api = SettingsApi {
-            ctx: Arc::new(ctx),
-        };
+        let api = SettingsApi { ctx: Arc::new(ctx) };
         // First set a value
         api.set_editor(Json(EditorCommandPayload {
             editor_command: Some("code".to_string()),
diff --git a/server/src/http/wizard.rs b/server/src/http/wizard.rs
index ab3bffb7..88b23947 100644
--- a/server/src/http/wizard.rs
+++ b/server/src/http/wizard.rs
@@ -259,10 +256,7 @@ mod tests {
         let (dir, client) = setup();
         WizardState::init_if_missing(dir.path());
-        let resp = client
-            .post("/wizard/step/context/generating")
-            .send()
-            .await;
+        let resp = client.post("/wizard/step/context/generating").send().await;
         resp.assert_status_is_ok();
         let body: serde_json::Value = resp.0.into_body().into_json().await.unwrap();
         assert_eq!(body["steps"][1]["status"], "generating");
@@ -273,10 +270,7 @@ mod tests {
         let (dir, client) = setup();
         WizardState::init_if_missing(dir.path());
-        let resp = client
-            .post("/wizard/step/nonexistent/confirm")
-            .send()
-            .await;
+        let resp = client.post("/wizard/step/nonexistent/confirm").send().await;
         resp.assert_status(StatusCode::NOT_FOUND);
     }
@@ -286,7 +280,13 @@ mod tests {
         WizardState::init_if_missing(dir.path());
         // Steps 2-6 (scaffold is already confirmed)
-        let steps = ["context", "stack", "test_script", "release_script", "test_coverage"];
+        let steps = [
+            "context",
+            "stack",
+            "test_script",
+            "release_script",
+            "test_coverage",
+        ];
         for step in steps {
             let resp = client
                 .post(format!("/wizard/step/{step}/confirm"))
diff --git a/server/src/http/workflow/bug_ops.rs b/server/src/http/workflow/bug_ops.rs
index 2ed5bae2..46ee6647 100644
--- a/server/src/http/workflow/bug_ops.rs
+++ b/server/src/http/workflow/bug_ops.rs
@@ -165,12 +165,16 @@ fn is_bug_item(stem: &str) -> bool {
 /// Extract bug name from content (heading or front matter).
 fn extract_bug_name_from_content(content: &str) -> Option {
     // Try front matter first.
-    if let Ok(meta) = parse_front_matter(content) && let Some(name) = meta.name {
+    if let Ok(meta) = parse_front_matter(content)
+        && let Some(name) = meta.name
+    {
         return Some(name);
     }
     // Fallback: heading.
     for line in content.lines() {
-        if let Some(rest) = line.strip_prefix("# Bug ") && let Some(colon_pos) = rest.find(": ") {
+        if let Some(rest) = line.strip_prefix("# Bug ")
+            && let Some(colon_pos) = rest.find(": ")
+        {
             return Some(rest[colon_pos + 2..].to_string());
         }
     }
@@ -184,16 +188,19 @@ pub fn list_bug_files(_root: &Path) -> Result, String> {
     let mut bugs = Vec::new();
     for item in crate::pipeline_state::read_all_typed() {
-        if !matches!(item.stage, crate::pipeline_state::Stage::Backlog) || !is_bug_item(&item.story_id.0) {
+        if !matches!(item.stage, crate::pipeline_state::Stage::Backlog)
+            || !is_bug_item(&item.story_id.0)
+        {
             continue;
         }
         let sid = item.story_id.0;
-        let name = if item.name.is_empty() { None } else { Some(item.name) }
-            .or_else(|| {
-                crate::db::read_content(&sid)
-                    .and_then(|c| extract_bug_name_from_content(&c))
-            })
-            .unwrap_or_else(|| sid.clone());
+        let name = if item.name.is_empty() {
+            None
+        } else {
+            Some(item.name)
+        }
+        .or_else(|| crate::db::read_content(&sid).and_then(|c| extract_bug_name_from_content(&c)))
+        .unwrap_or_else(|| sid.clone());
         bugs.push((sid, name));
     }
@@ -214,17 +221,23 @@ pub fn list_refactor_files(_root: &Path) -> Result, String {
     let mut refactors = Vec::new();
     for item in crate::pipeline_state::read_all_typed() {
-        if !matches!(item.stage, crate::pipeline_state::Stage::Backlog) || !is_refactor_item(&item.story_id.0) {
+        if !matches!(item.stage, crate::pipeline_state::Stage::Backlog)
+            || !is_refactor_item(&item.story_id.0)
+        {
             continue;
         }
         let sid = item.story_id.0;
-        let name = if item.name.is_empty() { None } else { Some(item.name) }
-            .or_else(|| {
-                crate::db::read_content(&sid)
-                    .and_then(|c| parse_front_matter(&c).ok())
-                    .and_then(|m| m.name)
-            })
-            .unwrap_or_else(|| sid.clone());
+        let name = if item.name.is_empty() {
+            None
+        } else {
+            Some(item.name)
+        }
+        .or_else(|| {
+            crate::db::read_content(&sid)
+                .and_then(|c| parse_front_matter(&c).ok())
+                .and_then(|m| m.name)
+        })
+        .unwrap_or_else(|| sid.clone());
         refactors.push((sid, name));
     }
@@ -278,7 +291,11 @@ mod tests {
         fs::write(backlog.join("3_bug_another.md"), "").unwrap();
         // Also write to content store so next_item_number sees them.
         crate::db::write_item_with_content("1_bug_crash", "1_backlog", "---\nname: Crash\n---\n");
-        crate::db::write_item_with_content("3_bug_another", "1_backlog", "---\nname: Another\n---\n");
+        crate::db::write_item_with_content(
+            "3_bug_another",
+            "1_backlog",
+            "---\nname: Another\n---\n",
+        );
         assert!(super::super::next_item_number(tmp.path()).unwrap() >= 4);
     }
@@ -323,7 +340,11 @@ mod tests {
         );
         let result = list_bug_files(tmp.path()).unwrap();
-        assert!(result.iter().any(|(id, name)| id == "7001_bug_open" && name == "Open Bug"));
+        assert!(
+            result
+                .iter()
+                .any(|(id, name)| id == "7001_bug_open" && name == "Open Bug")
+        );
         assert!(!result.iter().any(|(id, _)| id == "7002_bug_closed"));
     }
@@ -349,9 +370,18 @@ mod tests {
         let result = list_bug_files(tmp.path()).unwrap();
         // Find positions of our three bugs in the sorted result.
-        let pos_first = result.iter().position(|(id, _)| id == "7011_bug_first").unwrap();
-        let pos_second = result.iter().position(|(id, _)| id == "7012_bug_second").unwrap();
-        let pos_third = result.iter().position(|(id, _)| id == "7013_bug_third").unwrap();
+        let pos_first = result
+            .iter()
+            .position(|(id, _)| id == "7011_bug_first")
+            .unwrap();
+        let pos_second = result
+            .iter()
+            .position(|(id, _)| id == "7012_bug_second")
+            .unwrap();
+        let pos_third = result
+            .iter()
+            .position(|(id, _)| id == "7013_bug_third")
+            .unwrap();
         assert!(pos_first < pos_second);
         assert!(pos_second < pos_third);
     }
@@ -379,12 +409,17 @@ mod tests {
         )
         .unwrap();
-        assert!(bug_id.ends_with("_bug_login_crash"), "expected ID to end with _bug_login_crash, got: {bug_id}");
+        assert!(
+            bug_id.ends_with("_bug_login_crash"),
+            "expected ID to end with _bug_login_crash, got: {bug_id}"
+        );
         // Check content exists (either in DB or filesystem).
         let contents = crate::db::read_content(&bug_id)
             .or_else(|| {
-                let filepath = tmp.path().join(format!(".huskies/work/1_backlog/{bug_id}.md"));
+                let filepath = tmp
+                    .path()
+                    .join(format!(".huskies/work/1_backlog/{bug_id}.md"));
                 fs::read_to_string(filepath).ok()
             })
             .expect("bug content should exist");
@@ -393,7 +428,10 @@ mod tests {
             contents.starts_with("---\nname: \"Login Crash\"\n---"),
             "bug file must start with YAML front matter"
         );
-        assert!(contents.contains("Login Crash"), "content should mention bug name");
+        assert!(
+            contents.contains("Login Crash"),
+            "content should mention bug name"
+        );
         assert!(contents.contains("## Description"));
         assert!(contents.contains("The login page crashes on submit."));
         assert!(contents.contains("## How to Reproduce"));
@@ -409,7 +447,15 @@ mod tests {
     #[test]
     fn create_bug_file_rejects_empty_name() {
         let tmp = tempfile::tempdir().unwrap();
-        let result = create_bug_file(tmp.path(), "!!!", "desc", "steps", "actual", "expected", None);
+        let result = create_bug_file(
+            tmp.path(),
+            "!!!",
+            "desc",
+            "steps",
+            "actual",
+            "expected",
+            None,
+        );
         assert!(result.is_err());
         assert!(result.unwrap_err().contains("alphanumeric"));
     }
@@ -453,11 +499,16 @@ mod tests {
         let spike_id =
             create_spike_file(tmp.path(), "Filesystem Watcher Architecture", None).unwrap();
-        assert!(spike_id.ends_with("_spike_filesystem_watcher_architecture"), "expected ID to end with _spike_filesystem_watcher_architecture, got: {spike_id}");
+        assert!(
+            spike_id.ends_with("_spike_filesystem_watcher_architecture"),
+            "expected ID to end with _spike_filesystem_watcher_architecture, got: {spike_id}"
+        );
         let contents = crate::db::read_content(&spike_id)
             .or_else(|| {
-                let filepath = tmp.path().join(format!(".huskies/work/1_backlog/{spike_id}.md"));
+                let filepath = tmp
+                    .path()
+                    .join(format!(".huskies/work/1_backlog/{spike_id}.md"));
                 fs::read_to_string(filepath).ok()
             })
             .expect("spike content should exist");
@@ -466,7 +517,10 @@ mod tests {
             contents.starts_with("---\nname: \"Filesystem Watcher Architecture\"\n---"),
             "spike file must start with YAML front matter"
         );
-        assert!(contents.contains("Filesystem Watcher Architecture"), "content should mention spike name");
+        assert!(
+            contents.contains("Filesystem Watcher Architecture"),
+            "content should mention spike name"
+        );
         assert!(contents.contains("## Question"));
         assert!(contents.contains("## Hypothesis"));
         assert!(contents.contains("## Timebox"));
@@ -480,11 +534,14 @@ mod tests {
         let tmp = tempfile::tempdir().unwrap();
         let description = "What is the best approach for watching filesystem events?";
-        let spike_id = create_spike_file(tmp.path(), "FS Watcher Spike", Some(description)).unwrap();
+        let spike_id =
+            create_spike_file(tmp.path(), "FS Watcher Spike", Some(description)).unwrap();
         let contents = crate::db::read_content(&spike_id)
             .or_else(|| {
-                let filepath = tmp.path().join(format!(".huskies/work/1_backlog/{spike_id}.md"));
+                let filepath = tmp
+                    .path()
+                    .join(format!(".huskies/work/1_backlog/{spike_id}.md"));
                 fs::read_to_string(filepath).ok()
             })
             .expect("spike content should exist");
@@ -498,7 +555,9 @@ mod tests {
         let contents = crate::db::read_content(&spike_id)
             .or_else(|| {
-                let filepath = tmp.path().join(format!(".huskies/work/1_backlog/{spike_id}.md"));
+                let filepath = tmp
+                    .path()
+                    .join(format!(".huskies/work/1_backlog/{spike_id}.md"));
                 fs::read_to_string(filepath).ok()
             })
             .expect("spike content should exist");
@@ -544,8 +603,19 @@ mod tests {
         );
         let spike_id = create_spike_file(tmp.path(), "My Spike", None).unwrap();
-        assert!(spike_id.ends_with("_spike_my_spike"), "expected ID to end with _spike_my_spike, got: {spike_id}");
-        let num: u32 = spike_id.chars().take_while(|c| c.is_ascii_digit()).collect::().parse().unwrap();
-        assert!(num >= 7051, "expected spike number >= 7051, got: {spike_id}");
+        assert!(
+            spike_id.ends_with("_spike_my_spike"),
+            "expected ID to end with _spike_my_spike, got: {spike_id}"
+        );
+        let num: u32 = spike_id
+            .chars()
+            .take_while(|c| c.is_ascii_digit())
+            .collect::()
+            .parse()
+            .unwrap();
+        assert!(
+            num >= 7051,
+            "expected spike number >= 7051, got: {spike_id}"
+        );
     }
 }
diff --git a/server/src/http/workflow/mod.rs b/server/src/http/workflow/mod.rs
index 9796f3f2..82901172 100644
--- a/server/src/http/workflow/mod.rs
+++ b/server/src/http/workflow/mod.rs
@@ -161,7 +161,6 @@ pub fn load_pipeline_state(ctx: &AppContext) -> Result {
     Ok(state)
 }
-
 /// Build a map from story_id → AgentAssignment for all pending/running agents.
 fn build_active_agent_map(ctx: &AppContext) -> HashMap {
     let agents = match ctx.agents.list_agents() {
@@ -196,7 +195,6 @@ fn build_active_agent_map(ctx: &AppContext) -> HashMap
     map
 }
-
 pub fn load_upcoming_stories(_ctx: &AppContext) -> Result, String> {
     use crate::pipeline_state::Stage;
@@ -244,9 +242,7 @@ pub fn load_upcoming_stories(_ctx: &AppContext) -> Result, St
     Ok(stories)
 }
-pub fn validate_story_dirs(
-    _root: &std::path::Path,
-) -> Result, String> {
+pub fn validate_story_dirs(_root: &std::path::Path) -> Result, String> {
     use crate::pipeline_state::Stage;
     let mut results = Vec::new();
@@ -309,7 +305,12 @@ pub(super) fn read_story_content(_project_root: &Path, story_id: &str) -> Result
 }
 /// Write story content to the DB content store and CRDT.
-pub(super) fn write_story_content(_project_root: &Path, story_id: &str, stage: &str, content: &str) {
+pub(super) fn write_story_content(
+    _project_root: &Path,
+    story_id: &str,
+    stage: &str,
+    content: &str,
+) {
     crate::db::write_item_with_content(story_id, stage, content);
 }
@@ -321,13 +322,16 @@ pub(super) fn story_stage(story_id: &str) -> Option {
         .map(|item| item.stage.dir_name().to_string())
 }
-
 /// Replace the content of a named `## Section` in a story file.
 ///
 /// Finds the first occurrence of `## {section_name}` and replaces everything
 /// until the next `##` heading (or end of file) with the provided text.
 /// Returns an error if the section is not found.
-pub(super) fn replace_section_content(content: &str, section_name: &str, new_text: &str) -> Result {
+pub(super) fn replace_section_content(
+    content: &str,
+    section_name: &str,
+    new_text: &str,
+) -> Result {
     let lines: Vec<&str> = content.lines().collect();
     let heading = format!("## {section_name}");
@@ -517,18 +521,24 @@ mod tests {
             ("4_merge", "9840_story_merge"),
             ("5_done", "9850_story_done"),
         ] {
-            crate::db::write_item_with_content(
-                id,
-                stage,
-                &format!("---\nname: {id}\n---\n"),
-            );
+            crate::db::write_item_with_content(id, stage, &format!("---\nname: {id}\n---\n"));
         }
         let ctx = crate::http::context::AppContext::new_test(root);
         let state = load_pipeline_state(&ctx).unwrap();
-        assert!(state.backlog.iter().any(|s| s.story_id == "9810_story_upcoming"));
-        assert!(state.current.iter().any(|s| s.story_id == "9820_story_current"));
+        assert!(
+            state
+                .backlog
+                .iter()
+                .any(|s| s.story_id == "9810_story_upcoming")
+        );
+        assert!(
+            state
+                .current
+                .iter()
+                .any(|s| s.story_id == "9820_story_current")
+        );
         assert!(state.qa.iter().any(|s| s.story_id == "9830_story_qa"));
         assert!(state.merge.iter().any(|s| s.story_id == "9840_story_merge"));
         assert!(state.done.iter().any(|s| s.story_id == "9850_story_done"));
@@ -558,12 +568,23 @@ mod tests {
         );
         let ctx = crate::http::context::AppContext::new_test(root);
-        ctx.agents.inject_test_agent("9860_story_test", "coder-1", crate::agents::AgentStatus::Running);
+        ctx.agents.inject_test_agent(
+            "9860_story_test",
+            "coder-1",
+            crate::agents::AgentStatus::Running,
+        );
         let state = load_pipeline_state(&ctx).unwrap();
-        let item = state.current.iter().find(|s| s.story_id == "9860_story_test").unwrap();
-        assert!(item.agent.is_some(), "running agent should appear on work item");
+        let item = state
+            .current
+            .iter()
+            .find(|s| s.story_id == "9860_story_test")
+            .unwrap();
+        assert!(
+            item.agent.is_some(),
+            "running agent should appear on work item"
+        );
         let agent = item.agent.as_ref().unwrap();
         assert_eq!(agent.agent_name, "coder-1");
         assert_eq!(agent.status, "running");
@@ -582,11 +603,19 @@ mod tests {
         );
         let ctx = crate::http::context::AppContext::new_test(root);
-        ctx.agents.inject_test_agent("9861_story_done", "coder-1", crate::agents::AgentStatus::Completed);
+        ctx.agents.inject_test_agent(
+            "9861_story_done",
+            "coder-1",
+            crate::agents::AgentStatus::Completed,
+        );
         let state = load_pipeline_state(&ctx).unwrap();
-        let item = state.current.iter().find(|s| s.story_id == "9861_story_done").unwrap();
+        let item = state
+            .current
+            .iter()
+            .find(|s| s.story_id == "9861_story_done")
+            .unwrap();
         assert!(
             item.agent.is_none(),
             "completed agent should not appear on work item"
         );
@@ -606,12 +635,23 @@ mod tests {
         );
         let ctx = crate::http::context::AppContext::new_test(root);
-        ctx.agents.inject_test_agent("9862_story_pending", "coder-1", crate::agents::AgentStatus::Pending);
+        ctx.agents.inject_test_agent(
+            "9862_story_pending",
+            "coder-1",
+            crate::agents::AgentStatus::Pending,
+        );
         let state = load_pipeline_state(&ctx).unwrap();
-        let item = state.current.iter().find(|s| s.story_id == "9862_story_pending").unwrap();
-        assert!(item.agent.is_some(), "pending agent should appear on work item");
+        let item = state
+            .current
+            .iter()
+            .find(|s| s.story_id == "9862_story_pending")
+            .unwrap();
+        assert!(
+            item.agent.is_some(),
+            "pending agent should appear on work item"
+        );
         assert_eq!(item.agent.as_ref().unwrap().status, "pending");
     }
@@ -633,10 +673,18 @@ mod tests {
         let ctx = crate::http::context::AppContext::new_test(tmp.path().to_path_buf());
         let state = load_pipeline_state(&ctx).unwrap();
-        let dependent = state.backlog.iter().find(|s| s.story_id == "9863_story_dependent").unwrap();
+        let dependent = state
+            .backlog
+            .iter()
+            .find(|s| s.story_id == "9863_story_dependent")
+            .unwrap();
         assert_eq!(dependent.depends_on, Some(vec![10, 11]));
-        let independent = state.backlog.iter().find(|s| s.story_id == "9864_story_independent").unwrap();
+        let independent = state
+            .backlog
+            .iter()
+            .find(|s| s.story_id == "9864_story_independent")
+            .unwrap();
         assert_eq!(independent.depends_on, None);
     }
@@ -657,9 +705,15 @@ mod tests {
         let tmp = tempfile::tempdir().unwrap();
         let ctx = crate::http::context::AppContext::new_test(tmp.path().to_path_buf());
         let stories = load_upcoming_stories(&ctx).unwrap();
-        let s1 = stories.iter().find(|s| s.story_id == "9870_story_view_upcoming").unwrap();
+        let s1 = stories
+            .iter()
+            .find(|s| s.story_id == "9870_story_view_upcoming")
+            .unwrap();
         assert_eq!(s1.name.as_deref(), Some("View Upcoming"));
-        let s2 = stories.iter().find(|s| s.story_id == "9871_story_worktree").unwrap();
+        let s2 = stories
+            .iter()
+            .find(|s| s.story_id == "9871_story_worktree")
+            .unwrap();
         assert_eq!(s2.name.as_deref(), Some("Worktree Orchestration"));
     }
@@ -696,24 +750,29 @@ mod tests {
         let tmp = tempfile::tempdir().unwrap();
         let results = validate_story_dirs(tmp.path()).unwrap();
-        let r1 = results.iter().find(|r| r.story_id == "9873_story_todos").unwrap();
+        let r1 = results
+            .iter()
+            .find(|r| r.story_id == "9873_story_todos")
+            .unwrap();
         assert!(r1.valid);
-        let r2 = results.iter().find(|r| r.story_id == "9874_story_front_matter").unwrap();
+        let r2 = results
+            .iter()
+            .find(|r| r.story_id == "9874_story_front_matter")
+            .unwrap();
         assert!(r2.valid);
     }
     #[test]
     fn validate_story_dirs_missing_front_matter() {
         crate::db::ensure_content_store();
-        crate::db::write_item_with_content(
-            "9875_story_no_fm",
-            "2_current",
-            "# No front matter\n",
-        );
+        crate::db::write_item_with_content("9875_story_no_fm", "2_current", "# No front matter\n");
         let tmp = tempfile::tempdir().unwrap();
         let results = validate_story_dirs(tmp.path()).unwrap();
-        let r = results.iter().find(|r| r.story_id == "9875_story_no_fm").unwrap();
+        let r = results
+            .iter()
+            .find(|r| r.story_id == "9875_story_no_fm")
+            .unwrap();
         assert!(!r.valid);
         assert_eq!(r.error.as_deref(), Some("Missing front matter"));
     }
@@ -729,7 +788,10 @@ mod tests {
         let tmp = tempfile::tempdir().unwrap();
         let results = validate_story_dirs(tmp.path()).unwrap();
-        let r = results.iter().find(|r| r.story_id == "9876_story_no_name").unwrap();
+        let r = results
+            .iter()
+            .find(|r| r.story_id == "9876_story_no_name")
+            .unwrap();
         assert!(!r.valid);
         let err = r.error.as_deref().unwrap();
         assert!(err.contains("Missing 'name' field"));
@@ -789,11 +851,7 @@ mod tests {
     #[test]
    fn next_item_number_increments_beyond_existing() {
         crate::db::ensure_content_store();
-        crate::db::write_item_with_content(
-            "9877_story_foo",
-            "1_backlog",
-            "---\nname: Foo\n---\n",
-        );
+        crate::db::write_item_with_content("9877_story_foo", "1_backlog", "---\nname: Foo\n---\n");
         let tmp = tempfile::tempdir().unwrap();
         assert!(next_item_number(tmp.path()).unwrap() >= 9878);
     }
@@ -824,7 +882,8 @@ mod tests {
     #[test]
     fn replace_or_append_section_appends_when_absent() {
         let contents = "---\nname: T\n---\n# Story\n";
-        let new = replace_or_append_section(contents, "## Test Results", "## Test Results\n\nfoo\n");
+        let new =
+            replace_or_append_section(contents, "## Test Results", "## Test Results\n\nfoo\n");
         assert!(new.contains("## Test Results"));
         assert!(new.contains("foo"));
         assert!(new.contains("# Story"));
@@ -833,7 +892,11 @@ mod tests {
     #[test]
     fn replace_or_append_section_replaces_existing() {
         let contents = "# Story\n\n## Test Results\n\nold content\n\n## Other\n\nother content\n";
-        let new = replace_or_append_section(contents, "## Test Results", "## Test Results\n\nnew content\n");
+        let new = replace_or_append_section(
+            contents,
+            "## Test Results",
+            "## Test Results\n\nnew content\n",
+        );
         assert!(new.contains("new content"));
         assert!(!new.contains("old content"));
         assert!(new.contains("## Other"));
diff --git a/server/src/http/workflow/story_ops.rs b/server/src/http/workflow/story_ops.rs
index c56a1011..4ccd184e 100644
--- a/server/src/http/workflow/story_ops.rs
+++ b/server/src/http/workflow/story_ops.rs
@@ -4,7 +4,10 @@
 use serde_json::Value;
 use std::collections::HashMap;
 use std::path::Path;
-use super::{create_section_content, next_item_number, read_story_content, replace_section_content, slugify_name, story_stage, write_story_content};
+use super::{
+    create_section_content, next_item_number, read_story_content, replace_section_content,
+    slugify_name, story_stage, write_story_content,
+};
 /// Shared create-story logic used by both the OpenApi and MCP handlers.
 ///
@@ -158,9 +161,7 @@ pub fn add_criterion_to_file(
     let insert_after = last_criterion_line
         .or(ac_section_start)
-        .ok_or_else(|| {
-            format!("Story '{story_id}' has no '## Acceptance Criteria' section.")
-        })?;
+        .ok_or_else(|| format!("Story '{story_id}' has no '## Acceptance Criteria' section."))?;
     let mut new_lines: Vec = lines.iter().map(|s| s.to_string()).collect();
     new_lines.insert(insert_after + 1, format!("- [ ] {criterion}"));
@@ -195,7 +196,14 @@ fn json_value_to_yaml_scalar(value: &Value) -> String {
         }
         Value::String(s) => yaml_encode_str(s),
         // Null and Object are not meaningful as YAML scalars; store as quoted strings.
-        other => format!("\"{}\"", other.to_string().replace('"', "\\\"").replace('\n', " ").replace('\r', "")),
+        other => format!(
+            "\"{}\"",
+            other
+                .to_string()
+                .replace('"', "\\\"")
+                .replace('\n', " ")
+                .replace('\r', "")
+        ),
     }
 }
@@ -211,7 +219,10 @@ fn yaml_encode_str(s: &str) -> String {
         // YAML inline sequences like [490] or [490, 491] — write unquoted so
         // serde_yaml can deserialise them as Vec.
         s if s.starts_with('[') && s.ends_with(']') => s.to_string(),
-        s => format!("\"{}\"", s.replace('"', "\\\"").replace('\n', " ").replace('\r', "")),
+        s => format!(
+            "\"{}\"",
+            s.replace('"', "\\\"").replace('\n', " ").replace('\r', "")
+        ),
     }
 }
@@ -246,13 +257,17 @@ pub fn update_story_in_file(
     if let Some(us) = user_story {
         contents = match replace_section_content(&contents, "User Story", us) {
             Ok(updated) => updated,
-            Err(_) => create_section_content(&contents, "User Story", us, Some("Acceptance Criteria")),
+            Err(_) => {
+                create_section_content(&contents, "User Story", us, Some("Acceptance Criteria"))
+            }
         };
     }
     if let Some(desc) = description {
         contents = match replace_section_content(&contents, "Description", desc) {
             Ok(updated) => updated,
-            Err(_) => create_section_content(&contents, "Description", desc, Some("Acceptance Criteria")),
+            Err(_) => {
+                create_section_content(&contents, "Description", desc, Some("Acceptance Criteria"))
+            }
         };
     }
@@ -322,7 +337,11 @@ mod tests {
         fs::create_dir_all(&backlog).unwrap();
         fs::write(backlog.join("36_story_existing.md"), "").unwrap();
         // Also write to content store so next_item_number sees it.
-        crate::db::write_item_with_content("36_story_existing", "1_backlog", "---\nname: Existing\n---\n");
+        crate::db::write_item_with_content(
+            "36_story_existing",
+            "1_backlog",
+            "---\nname: Existing\n---\n",
+        );
         let number = super::super::next_item_number(tmp.path()).unwrap();
         // The number must be >= 37 (at least higher than the existing "36_story_existing.md"),
@@ -390,9 +409,18 @@ mod tests {
         // Read the updated content.
         let contents = read_story_content(tmp.path(), "1_test").unwrap();
-        assert!(contents.contains("- [x] Criterion 0"), "first should be checked");
-        assert!(contents.contains("- [ ] Criterion 1"), "second should stay unchecked");
-        assert!(contents.contains("- [ ] Criterion 2"), "third should stay unchecked");
+        assert!(
+            contents.contains("- [x] Criterion 0"),
+            "first should be checked"
+        );
+        assert!(
+            contents.contains("- [ ] Criterion 1"),
+            "second should stay unchecked"
+        );
+        assert!(
+            contents.contains("- [ ] Criterion 2"),
+            "third should stay unchecked"
+        );
     }
     #[test]
@@ -404,9 +432,18 @@ mod tests {
         check_criterion_in_file(tmp.path(), "2_test", 1).unwrap();
         let contents = read_story_content(tmp.path(), "2_test").unwrap();
-        assert!(contents.contains("- [ ] Criterion 0"), "first should stay unchecked");
-        assert!(contents.contains("- [x] Criterion 1"), "second should be checked");
-        assert!(contents.contains("- [ ] Criterion 2"), "third should stay unchecked");
+        assert!(
+            contents.contains("- [ ] Criterion 0"),
+            "first should stay unchecked"
+        );
+        assert!(
+            contents.contains("- [x] Criterion 1"),
+            "second should be checked"
+        );
+        assert!(
+            contents.contains("- [ ] Criterion 2"),
+            "third should stay unchecked"
+        );
     }
     #[test]
@@ -423,7 +460,9 @@ mod tests {
     // ── add_criterion_to_file tests ───────────────────────────────────────────
     fn story_with_ac_section(criteria: &[&str]) -> String {
-        let mut s = "---\nname: Test\n---\n\n## User Story\n\nAs a user...\n\n## Acceptance Criteria\n\n".to_string();
+        let mut s =
+            "---\nname: Test\n---\n\n## User Story\n\nAs a user...\n\n## Acceptance Criteria\n\n"
+                .to_string();
         for c in criteria {
             s.push_str(&format!("- [ ] {c}\n"));
         }
@@ -434,7 +473,11 @@ mod tests {
     #[test]
     fn add_criterion_appends_after_last_criterion() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "10_test", &story_with_ac_section(&["First", "Second"]));
+        setup_story_in_fs(
+            tmp.path(),
+            "10_test",
+            &story_with_ac_section(&["First", "Second"]),
+        );
         add_criterion_to_file(tmp.path(), "10_test", "Third").unwrap();
@@ -450,19 +493,27 @@ mod tests {
     #[test]
     fn add_criterion_to_empty_section() {
         let tmp = tempfile::tempdir().unwrap();
-        let content = "---\nname: Test\n---\n\n## Acceptance Criteria\n\n## Out of Scope\n\n- N/A\n";
+        let content =
+            "---\nname: Test\n---\n\n## Acceptance Criteria\n\n## Out of Scope\n\n- N/A\n";
         setup_story_in_fs(tmp.path(), "11_test", content);
         add_criterion_to_file(tmp.path(), "11_test", "New AC").unwrap();
         let contents = read_story_content(tmp.path(), "11_test").unwrap();
-        assert!(contents.contains("- [ ] New AC\n"), "criterion should be present");
+        assert!(
+            contents.contains("- [ ] New AC\n"),
+            "criterion should be present"
+        );
     }
     #[test]
     fn add_criterion_missing_section_returns_error() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "12_test", "---\nname: Test\n---\n\nNo AC section here.\n");
+        setup_story_in_fs(
+            tmp.path(),
+            "12_test",
+            "---\nname: Test\n---\n\nNo AC section here.\n",
+        );
         let result = add_criterion_to_file(tmp.path(), "12_test", "X");
         assert!(result.is_err());
@@ -477,12 +528,25 @@ mod tests {
         let content = "---\nname: T\n---\n\n## User Story\n\nOld text\n\n## Acceptance Criteria\n\n- [ ] AC\n";
         setup_story_in_fs(tmp.path(), "20_test", content);
-        update_story_in_file(tmp.path(), "20_test", Some("New user story text"), None, None).unwrap();
+        update_story_in_file(
+            tmp.path(),
+            "20_test",
+            Some("New user story text"),
+            None,
+            None,
+        )
+        .unwrap();
         let result = read_story_content(tmp.path(), "20_test").unwrap();
-        assert!(result.contains("New user story text"), "new text should be present");
+        assert!(
+            result.contains("New user story text"),
+            "new text should be present"
+        );
         assert!(!result.contains("Old text"), "old text should be replaced");
-        assert!(result.contains("## Acceptance Criteria"), "other sections preserved");
+        assert!(
+            result.contains("## Acceptance Criteria"),
+            "other sections preserved"
+        );
     }
     #[test]
@@ -494,8 +558,14 @@ mod tests {
         update_story_in_file(tmp.path(), "21_test", None, Some("New description"), None).unwrap();
         let result = read_story_content(tmp.path(), "21_test").unwrap();
-        assert!(result.contains("New description"), "new description present");
-        assert!(!result.contains("Old description"), "old description replaced");
+        assert!(
+            result.contains("New description"),
+            "new description present"
+        );
+        assert!(
+            !result.contains("Old description"),
+            "old description replaced"
+        );
     }
     #[test]
@@ -515,16 +585,26 @@ mod tests {
         let content = "---\nname: T\n---\n\n## Acceptance Criteria\n\n- [ ] AC\n";
         setup_story_in_fs(tmp.path(), "23_test", content);
-        let result = update_story_in_file(tmp.path(), "23_test", Some("New user story"), None, None);
-        assert!(result.is_ok(), "should succeed when section is missing: {result:?}");
+        let result =
+            update_story_in_file(tmp.path(), "23_test", Some("New user story"), None, None);
+        assert!(
+            result.is_ok(),
+            "should succeed when section is missing: {result:?}"
+        );
         let updated = read_story_content(tmp.path(), "23_test").unwrap();
-        assert!(updated.contains("## User Story"), "section should be created");
+        assert!(
+            updated.contains("## User Story"),
+            "section should be created"
+        );
         assert!(updated.contains("New user story"), "text should be present");
         // Section should appear before Acceptance Criteria.
         let pos_us = updated.find("## User Story").unwrap();
         let pos_ac = updated.find("## Acceptance Criteria").unwrap();
-        assert!(pos_us < pos_ac, "User Story should be before Acceptance Criteria");
+        assert!(
+            pos_us < pos_ac,
+            "User Story should be before Acceptance Criteria"
+        );
     }
     #[test]
@@ -534,16 +614,34 @@ mod tests {
         let content = "---\nname: T\n---\n\n## User Story\n\nAs a user...\n\n## Acceptance Criteria\n\n- [ ] AC\n";
         setup_story_in_fs(tmp.path(), "32_test", content);
-        let result = update_story_in_file(tmp.path(), "32_test", None, Some("New description text"), None);
-        assert!(result.is_ok(), "should succeed when section is missing: {result:?}");
+        let result = update_story_in_file(
+            tmp.path(),
+            "32_test",
+            None,
+            Some("New description text"),
+            None,
+        );
+        assert!(
+            result.is_ok(),
+            "should succeed when section is missing: {result:?}"
+        );
         let updated = read_story_content(tmp.path(), "32_test").unwrap();
-        assert!(updated.contains("## Description"), "section should be created");
-        assert!(updated.contains("New description text"), "text should be present");
+        assert!(
+            updated.contains("## Description"),
+            "section should be created"
+        );
+        assert!(
+            updated.contains("New description text"),
+            "text should be present"
+        );
         // Section should appear before Acceptance Criteria.
         let pos_desc = updated.find("## Description").unwrap();
         let pos_ac = updated.find("## Acceptance Criteria").unwrap();
-        assert!(pos_desc < pos_ac, "Description should be before Acceptance Criteria");
+        assert!(
+            pos_desc < pos_ac,
+            "Description should be before Acceptance Criteria"
+        );
     }
     #[test]
@@ -553,32 +651,58 @@ mod tests {
         let content = "---\nname: T\n---\n\nSome content here.\n";
         setup_story_in_fs(tmp.path(), "33_test", content);
-        let result = update_story_in_file(tmp.path(), "33_test", None, Some("Appended description"), None);
-        assert!(result.is_ok(), "should succeed even with no Acceptance Criteria: {result:?}");
+        let result = update_story_in_file(
+            tmp.path(),
+            "33_test",
+            None,
+            Some("Appended description"),
+            None,
+        );
+        assert!(
+            result.is_ok(),
+            "should succeed even with no Acceptance Criteria: {result:?}"
+        );
         let updated = read_story_content(tmp.path(), "33_test").unwrap();
-        assert!(updated.contains("## Description"), "section should be created");
-        assert!(updated.contains("Appended description"), "text should be present");
+        assert!(
+            updated.contains("## Description"),
+            "section should be created"
+        );
+        assert!(
+            updated.contains("Appended description"),
+            "text should be present"
+        );
     }
     #[test]
     fn update_story_sets_agent_front_matter_field() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "24_test", "---\nname: T\n---\n\n## User Story\n\nSome story\n");
+        setup_story_in_fs(
+            tmp.path(),
+            "24_test",
+            "---\nname: T\n---\n\n## User Story\n\nSome story\n",
+        );
         let mut fields = HashMap::new();
         fields.insert("agent".to_string(), Value::String("dev".to_string()));
         update_story_in_file(tmp.path(), "24_test", None, None, Some(&fields)).unwrap();
         let result = read_story_content(tmp.path(), "24_test").unwrap();
-        assert!(result.contains("agent: \"dev\""), "agent field should be set");
+        assert!(
+            result.contains("agent: \"dev\""),
+            "agent field should be set"
+        );
         assert!(result.contains("name: T"), "name field preserved");
     }
     #[test]
     fn update_story_sets_arbitrary_front_matter_fields() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "25_test", "---\nname: T\n---\n\n## User Story\n\nSome story\n");
+        setup_story_in_fs(
+            tmp.path(),
+            "25_test",
+            "---\nname: T\n---\n\n## User Story\n\nSome story\n",
+        );
         let mut fields = HashMap::new();
         fields.insert("qa".to_string(), Value::String("human".to_string()));
@@ -587,19 +711,29 @@ mod tests {
         let result = read_story_content(tmp.path(), "25_test").unwrap();
         assert!(result.contains("qa: \"human\""), "qa field should be set");
-        assert!(result.contains("priority: \"high\""), "priority field should be set");
+        assert!(
+            result.contains("priority: \"high\""),
+            "priority field should be set"
+        );
         assert!(result.contains("name: T"), "name field preserved");
     }
     #[test]
     fn update_story_front_matter_only_no_section_required() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "26_test", "---\nname: T\n---\n\nNo sections here.\n");
+        setup_story_in_fs(
+            tmp.path(),
+            "26_test",
+            "---\nname: T\n---\n\nNo sections here.\n",
+        );
         let mut fields = HashMap::new();
         fields.insert("agent".to_string(), Value::String("dev".to_string()));
         let result = update_story_in_file(tmp.path(), "26_test", None, None, Some(&fields));
-        assert!(result.is_ok(), "front-matter-only update should not require body sections");
+        assert!(
+            result.is_ok(),
+            "front-matter-only update should not require body sections"
+        );
         let contents = read_story_content(tmp.path(), "26_test").unwrap();
         assert!(contents.contains("agent: \"dev\""));
@@ -616,8 +750,14 @@ mod tests {
         update_story_in_file(tmp.path(), "27_test", None, None, Some(&fields)).unwrap();
         let result = read_story_content(tmp.path(), "27_test").unwrap();
-        assert!(result.contains("blocked: false"), "bool should be unquoted: {result}");
-        assert!(!result.contains("blocked: \"false\""), "bool must not be quoted: {result}");
+        assert!(
+            result.contains("blocked: false"),
+            "bool should be unquoted: {result}"
+        );
+        assert!(
+            !result.contains("blocked: \"false\""),
+            "bool must not be quoted: {result}"
+        );
     }
     #[test]
@@ -631,14 +771,24 @@ mod tests {
         update_story_in_file(tmp.path(), "28_test", None, None, Some(&fields)).unwrap();
         let result = read_story_content(tmp.path(), "28_test").unwrap();
-        assert!(result.contains("retry_count: 0"), "integer should be unquoted: {result}");
-        assert!(!result.contains("retry_count: \"0\""), "integer must not be quoted: {result}");
+        assert!(
+            result.contains("retry_count: 0"),
+            "integer should be unquoted: {result}"
+        );
+        assert!(
+            !result.contains("retry_count: \"0\""),
+            "integer must not be quoted: {result}"
+        );
     }
     #[test]
     fn update_story_bool_front_matter_parseable_after_write() {
         let tmp = tempfile::tempdir().unwrap();
-        setup_story_in_fs(tmp.path(), "29_test", "---\nname: My Story\n---\n\nNo sections.\n");
+        setup_story_in_fs(
+            tmp.path(),
+            "29_test",
+            "---\nname: My Story\n---\n\nNo sections.\n",
+        );
         let mut fields = HashMap::new();
         fields.insert("blocked".to_string(), Value::String("false".to_string()));
@@ -646,7 +796,11 @@ mod tests {
         let contents = read_story_content(tmp.path(), "29_test").unwrap();
         let meta = parse_front_matter(&contents).expect("front matter should parse");
-        assert_eq!(meta.name.as_deref(), Some("My Story"), "name preserved after writing bool field");
+        assert_eq!(
+            meta.name.as_deref(),
+            Some("My Story"),
+            "name preserved after writing bool field"
+        );
     }
     // ── Bug 493 regression tests ──────────────────────────────────────────────
@@ -662,8 +816,14 @@ mod tests {
         update_story_in_file(tmp.path(), "30_test", None, None, Some(&fields)).unwrap();
         let result = read_story_content(tmp.path(), "30_test").unwrap();
-        assert!(result.contains("depends_on: [490]"), "should be unquoted array: {result}");
-        assert!(!result.contains("depends_on: \"[490]\""), "must not be quoted: {result}");
+        assert!(
+            result.contains("depends_on: 
[490]"), + "should be unquoted array: {result}" + ); + assert!( + !result.contains("depends_on: \"[490]\""), + "must not be quoted: {result}" + ); let meta = parse_front_matter(&result).expect("front matter should parse"); assert_eq!(meta.depends_on, Some(vec![490])); @@ -690,8 +850,14 @@ mod tests { }) .expect("story content should exist"); - assert!(contents.contains("depends_on: [489]"), "missing front matter: {contents}"); - assert!(!contents.contains("- [ ] depends_on"), "must not appear as checkbox: {contents}"); + assert!( + contents.contains("depends_on: [489]"), + "missing front matter: {contents}" + ); + assert!( + !contents.contains("- [ ] depends_on"), + "must not appear as checkbox: {contents}" + ); let meta = parse_front_matter(&contents).expect("front matter should parse"); assert_eq!(meta.depends_on, Some(vec![489])); @@ -709,8 +875,14 @@ mod tests { update_story_in_file(tmp.path(), "31_test", None, None, Some(&fields)).unwrap(); let result = read_story_content(tmp.path(), "31_test").unwrap(); - assert!(result.contains("blocked: false"), "native bool false should be unquoted: {result}"); - assert!(!result.contains("blocked: \"false\""), "must not be quoted: {result}"); + assert!( + result.contains("blocked: false"), + "native bool false should be unquoted: {result}" + ); + assert!( + !result.contains("blocked: \"false\""), + "must not be quoted: {result}" + ); } #[test] @@ -723,22 +895,38 @@ mod tests { update_story_in_file(tmp.path(), "32_test", None, None, Some(&fields)).unwrap(); let result = read_story_content(tmp.path(), "32_test").unwrap(); - assert!(result.contains("blocked: true"), "native bool true should be unquoted: {result}"); - assert!(!result.contains("blocked: \"true\""), "must not be quoted: {result}"); + assert!( + result.contains("blocked: true"), + "native bool true should be unquoted: {result}" + ); + assert!( + !result.contains("blocked: \"true\""), + "must not be quoted: {result}" + ); } #[test] fn 
update_story_native_integer_written_unquoted() { let tmp = tempfile::tempdir().unwrap(); - setup_story_in_fs(tmp.path(), "33b_test", "---\nname: T\n---\n\nNo sections.\n"); + setup_story_in_fs( + tmp.path(), + "33b_test", + "---\nname: T\n---\n\nNo sections.\n", + ); let mut fields = HashMap::new(); fields.insert("retry_count".to_string(), serde_json::json!(3)); update_story_in_file(tmp.path(), "33b_test", None, None, Some(&fields)).unwrap(); let result = read_story_content(tmp.path(), "33b_test").unwrap(); - assert!(result.contains("retry_count: 3"), "native integer should be unquoted: {result}"); - assert!(!result.contains("retry_count: \"3\""), "must not be quoted: {result}"); + assert!( + result.contains("retry_count: 3"), + "native integer should be unquoted: {result}" + ); + assert!( + !result.contains("retry_count: \"3\""), + "must not be quoted: {result}" + ); } #[test] @@ -751,8 +939,14 @@ mod tests { update_story_in_file(tmp.path(), "34_test", None, None, Some(&fields)).unwrap(); let result = read_story_content(tmp.path(), "34_test").unwrap(); - assert!(result.contains("depends_on: [490, 491]"), "native array should be YAML sequence: {result}"); - assert!(!result.contains("depends_on: \"["), "must not be quoted: {result}"); + assert!( + result.contains("depends_on: [490, 491]"), + "native array should be YAML sequence: {result}" + ); + assert!( + !result.contains("depends_on: \"["), + "must not be quoted: {result}" + ); let meta = parse_front_matter(&result).expect("front matter should parse"); assert_eq!(meta.depends_on, Some(vec![490, 491])); @@ -761,7 +955,11 @@ mod tests { #[test] fn update_story_native_bool_parseable_after_write() { let tmp = tempfile::tempdir().unwrap(); - setup_story_in_fs(tmp.path(), "35_test", "---\nname: My Story\n---\n\nNo sections.\n"); + setup_story_in_fs( + tmp.path(), + "35_test", + "---\nname: My Story\n---\n\nNo sections.\n", + ); let mut fields = HashMap::new(); fields.insert("blocked".to_string(), Value::Bool(false)); 
@@ -769,7 +967,11 @@ mod tests {
         let contents = read_story_content(tmp.path(), "35_test").unwrap();
         let meta = parse_front_matter(&contents).expect("front matter should parse");
-        assert_eq!(meta.name.as_deref(), Some("My Story"), "name preserved after writing native bool");
+        assert_eq!(
+            meta.name.as_deref(),
+            Some("My Story"),
+            "name preserved after writing native bool"
+        );
     }
 
     #[test]
@@ -779,7 +981,10 @@ mod tests {
         // String "[490, 491]" still works (backwards compatibility).
         let mut fields = HashMap::new();
-        fields.insert("depends_on".to_string(), Value::String("[490, 491]".to_string()));
+        fields.insert(
+            "depends_on".to_string(),
+            Value::String("[490, 491]".to_string()),
+        );
         update_story_in_file(tmp.path(), "31_test", None, None, Some(&fields)).unwrap();
         let result = read_story_content(tmp.path(), "31_test").unwrap();
diff --git a/server/src/http/workflow/test_results.rs b/server/src/http/workflow/test_results.rs
index 34acb03e..408149df 100644
--- a/server/src/http/workflow/test_results.rs
+++ b/server/src/http/workflow/test_results.rs
@@ -56,7 +56,11 @@ pub fn write_coverage_baseline_to_story_file(
         Err(_) => return Ok(()), // No story — skip silently
     };
-    let updated = set_front_matter_field(&contents, "coverage_baseline", &format!("{coverage_pct:.1}%"));
+    let updated = set_front_matter_field(
+        &contents,
+        "coverage_baseline",
+        &format!("{coverage_pct:.1}%"),
+    );
     let stage = story_stage(story_id).unwrap_or_else(|| "2_current".to_string());
     write_story_content(project_root, story_id, &stage, &updated);
diff --git a/server/src/io/fs/mod.rs b/server/src/io/fs/mod.rs
index a49d21ff..6bdab696 100644
--- a/server/src/io/fs/mod.rs
+++ b/server/src/io/fs/mod.rs
@@ -6,8 +6,8 @@ pub mod project;
 pub mod scaffold;
 pub use files::{
-    create_directory_absolute, list_directory, list_directory_absolute, list_project_files,
-    read_file, write_file, FileEntry,
+    FileEntry, create_directory_absolute, list_directory, list_directory_absolute,
+    list_project_files, read_file, write_file,
 };
 pub use paths::{find_story_kit_root, get_home_directory, resolve_cli_path};
 pub use preferences::{get_model_preference, set_model_preference};
diff --git a/server/src/io/fs/project.rs b/server/src/io/fs/project.rs
index 2b15f6cf..d7ddf28a 100644
--- a/server/src/io/fs/project.rs
+++ b/server/src/io/fs/project.rs
@@ -180,7 +180,13 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        let result = open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001).await;
+        let result = open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await;
         assert!(result.is_ok());
         let root = state.get_project_root().unwrap();
@@ -201,9 +207,14 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         assert_eq!(
             fs::read_to_string(&mcp_path).unwrap(),
@@ -220,15 +231,29 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         let mcp_path = project_dir.join(".mcp.json");
-        assert!(mcp_path.exists(), "open_project should write .mcp.json for new projects");
+        assert!(
+            mcp_path.exists(),
+            "open_project should write .mcp.json for new projects"
+        );
         let content = fs::read_to_string(&mcp_path).unwrap();
-        assert!(content.contains("3001"), "mcp.json should reference the server port");
-        assert!(content.contains("localhost"), "mcp.json should reference localhost");
+        assert!(
+            content.contains("3001"),
+            "mcp.json should reference the server port"
+        );
+        assert!(
+            content.contains("localhost"),
+            "mcp.json should reference localhost"
+        );
     }
 
     /// Regression test for bug 371: no-arg `huskies` in empty directory skips scaffold.
@@ -242,9 +267,14 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         assert!(
             project_dir.join(".huskies/project.toml").exists(),
@@ -316,9 +346,14 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         let projects = get_known_projects(&store).unwrap();
         assert_eq!(projects.len(), 1);
@@ -383,9 +418,14 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         // .huskies/ should have been created automatically
         assert!(project_dir.join(".huskies").is_dir());
@@ -402,9 +442,14 @@ mod tests {
         let store = make_store(&dir);
         let state = SessionState::default();
-        open_project(project_dir.to_string_lossy().to_string(), &state, &store, 3001)
-            .await
-            .unwrap();
+        open_project(
+            project_dir.to_string_lossy().to_string(),
+            &state,
+            &store,
+            3001,
+        )
+        .await
+        .unwrap();
         // Existing .huskies/ content should not be overwritten
         assert_eq!(fs::read_to_string(&readme).unwrap(), "custom content");
diff --git a/server/src/io/fs/scaffold.rs b/server/src/io/fs/scaffold.rs
index decc2b56..aecc8da2 100644
--- a/server/src/io/fs/scaffold.rs
+++ b/server/src/io/fs/scaffold.rs
@@ -4,8 +4,7 @@ use std::path::Path;
 const STORY_KIT_README: &str = include_str!("../../../../.huskies/README.md");
-const BOT_TOML_MATRIX_EXAMPLE: &str =
-    include_str!("../../../../.huskies/bot.toml.matrix.example");
+const BOT_TOML_MATRIX_EXAMPLE: &str = include_str!("../../../../.huskies/bot.toml.matrix.example");
 const BOT_TOML_WHATSAPP_META_EXAMPLE: &str =
     include_str!("../../../../.huskies/bot.toml.whatsapp-meta.example");
 const BOT_TOML_WHATSAPP_TWILIO_EXAMPLE: &str =
@@ -194,9 +193,7 @@ pub fn detect_components_toml(root: &Path) -> String {
         // No tech stack markers detected — emit a single generic component
         // with an empty setup list. The ONBOARDING_PROMPT instructs the chat
        // agent to inspect the project and replace this with real definitions.
-        sections.push(
-            "[[component]]\nname = \"app\"\npath = \".\"\nsetup = []\n".to_string(),
-        );
+        sections.push("[[component]]\nname = \"app\"\npath = \".\"\nsetup = []\n".to_string());
     }
 
     sections.join("\n")
@@ -826,9 +823,18 @@ mod tests {
         let mcp_path = dir.path().join(".mcp.json");
         assert!(mcp_path.exists(), ".mcp.json should be created by scaffold");
         let content = fs::read_to_string(&mcp_path).unwrap();
-        assert!(content.contains("4242"), ".mcp.json should reference the given port");
-        assert!(content.contains("localhost"), ".mcp.json should reference localhost");
-        assert!(content.contains("huskies"), ".mcp.json should name the huskies server");
+        assert!(
+            content.contains("4242"),
+            ".mcp.json should reference the given port"
+        );
+        assert!(
+            content.contains("localhost"),
+            ".mcp.json should reference localhost"
+        );
+        assert!(
+            content.contains("huskies"),
+            ".mcp.json should name the huskies server"
+        );
     }
 
     #[test]
@@ -976,7 +982,10 @@ mod tests {
         fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
         let toml = detect_components_toml(dir.path());
-        assert!(!toml.contains("cargo"), "go project must not contain cargo commands");
+        assert!(
+            !toml.contains("cargo"),
+            "go project must not contain cargo commands"
+        );
         assert!(toml.contains("go build"), "go project must use Go tooling");
     }
 
     #[test]
@@ -986,8 +995,14 @@ mod tests {
         fs::write(dir.path().join("package.json"), "{}").unwrap();
         let toml = detect_components_toml(dir.path());
-        assert!(!toml.contains("cargo"), "node project must not contain cargo commands");
-        assert!(toml.contains("npm install"), "node project must use npm tooling");
+        assert!(
+            !toml.contains("cargo"),
+            "node project must not contain cargo commands"
+        );
+        assert!(
+            toml.contains("npm install"),
+            "node project must use npm tooling"
+        );
     }
 
     #[test]
@@ -995,9 +1010,15 @@ mod tests {
         let dir = tempdir().unwrap();
         let toml = detect_components_toml(dir.path());
-        assert!(!toml.contains("cargo"), "unknown stack must not contain cargo commands");
+        assert!(
+            !toml.contains("cargo"),
+            "unknown stack must not contain cargo commands"
+        );
         // setup list must be empty
-        assert!(toml.contains("setup = []"), "unknown stack must have empty setup list");
+        assert!(
+            toml.contains("setup = []"),
+            "unknown stack must have empty setup list"
+        );
     }
 
     #[test]
@@ -1047,7 +1068,10 @@ mod tests {
         fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"x\"\n").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("cargo test"), "Rust project should run cargo test");
+        assert!(
+            script.contains("cargo test"),
+            "Rust project should run cargo test"
+        );
         assert!(!script.contains("No tests configured"));
     }
 
@@ -1057,7 +1081,10 @@ mod tests {
         fs::write(dir.path().join("package.json"), "{}").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("npm test"), "Node project without pnpm-lock should run npm test");
+        assert!(
+            script.contains("npm test"),
+            "Node project without pnpm-lock should run npm test"
+        );
         assert!(!script.contains("No tests configured"));
     }
 
@@ -1068,18 +1095,31 @@ mod tests {
         fs::write(dir.path().join("pnpm-lock.yaml"), "").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("pnpm test"), "Node project with pnpm-lock should run pnpm test");
+        assert!(
+            script.contains("pnpm test"),
+            "Node project with pnpm-lock should run pnpm test"
+        );
         // "pnpm test" is a substring of itself; verify there's no bare "npm test" line
-        assert!(!script.lines().any(|l| l.trim() == "npm test"), "should not use npm when pnpm-lock.yaml is present");
+        assert!(
+            !script.lines().any(|l| l.trim() == "npm test"),
+            "should not use npm when pnpm-lock.yaml is present"
+        );
     }
 
     #[test]
     fn detect_script_test_pyproject_toml_adds_pytest() {
         let dir = tempdir().unwrap();
-        fs::write(dir.path().join("pyproject.toml"), "[project]\nname = \"x\"\n").unwrap();
+        fs::write(
+            dir.path().join("pyproject.toml"),
+            "[project]\nname = \"x\"\n",
+        )
+        .unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("pytest"), "Python project should run pytest");
+        assert!(
+            script.contains("pytest"),
+            "Python project should run pytest"
+        );
         assert!(!script.contains("No tests configured"));
     }
 
@@ -1089,7 +1129,10 @@ mod tests {
         fs::write(dir.path().join("requirements.txt"), "flask\n").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("pytest"), "Python project (requirements.txt) should run pytest");
+        assert!(
+            script.contains("pytest"),
+            "Python project (requirements.txt) should run pytest"
+        );
     }
 
     #[test]
@@ -1098,7 +1141,10 @@ mod tests {
         fs::write(dir.path().join("go.mod"), "module example.com/app\n").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("go test ./..."), "Go project should run go test ./...");
+        assert!(
+            script.contains("go test ./..."),
+            "Go project should run go test ./..."
+        );
         assert!(!script.contains("No tests configured"));
     }
 
@@ -1109,8 +1155,14 @@ mod tests {
         fs::write(dir.path().join("package.json"), "{}").unwrap();
         let script = detect_script_test(dir.path());
-        assert!(script.contains("go test ./..."), "multi-stack should include Go test command");
-        assert!(script.contains("npm test"), "multi-stack should include Node test command");
+        assert!(
+            script.contains("go test ./..."),
+            "multi-stack should include Go test command"
+        );
+        assert!(
+            script.contains("npm test"),
+            "multi-stack should include Node test command"
+        );
     }
 
     #[test]
@@ -1128,13 +1180,23 @@ mod tests {
     #[test]
     fn scaffold_script_test_contains_detected_commands_for_rust() {
         let dir = tempdir().unwrap();
-        fs::write(dir.path().join("Cargo.toml"), "[package]\nname = \"myapp\"\n").unwrap();
+        fs::write(
+            dir.path().join("Cargo.toml"),
+            "[package]\nname = \"myapp\"\n",
+        )
+        .unwrap();
         scaffold_story_kit(dir.path(), 3001).unwrap();
         let content = fs::read_to_string(dir.path().join("script/test")).unwrap();
-        assert!(content.contains("cargo test"), "Rust project scaffold should set cargo test in script/test");
-        assert!(!content.contains("No tests configured"), "should not use stub when stack is detected");
+        assert!(
+            content.contains("cargo test"),
+            "Rust project scaffold should set cargo test in script/test"
+        );
+        assert!(
+            !content.contains("No tests configured"),
+            "should not use stub when stack is detected"
+        );
     }
 
     #[test]
@@ -1143,7 +1205,10 @@ mod tests {
         scaffold_story_kit(dir.path(), 3001).unwrap();
         let content = fs::read_to_string(dir.path().join("script/test")).unwrap();
-        assert!(content.contains("No tests configured"), "unknown stack should use the generic stub");
+        assert!(
+            content.contains("No tests configured"),
+            "unknown stack should use the generic stub"
+        );
     }
 
     // --- generate_project_toml ---
diff --git a/server/src/io/mod.rs b/server/src/io/mod.rs
index 1b502a1e..1ad15aab 100644
--- a/server/src/io/mod.rs
+++ b/server/src/io/mod.rs
@@ -4,7 +4,7 @@ pub mod onboarding;
 pub mod search;
 pub mod shell;
 pub mod story_metadata;
-pub mod watcher;
-pub mod wizard;
 
 #[cfg(test)]
 pub(crate) mod test_helpers;
+pub mod watcher;
+pub mod wizard;
diff --git a/server/src/io/search.rs b/server/src/io/search.rs
index 9673ccb8..a087ba7f 100644
--- a/server/src/io/search.rs
+++ b/server/src/io/search.rs
@@ -159,7 +159,9 @@ mod tests {
         let state = SessionState::default();
         *state.project_root.lock().unwrap() = Some(dir.path().to_path_buf());
-        let results = search_files("target_text".to_string(), &state).await.unwrap();
+        let results = search_files("target_text".to_string(), &state)
+            .await
+            .unwrap();
         assert_eq!(results.len(), 1);
         assert_eq!(results[0].path, "found.txt");
diff --git a/server/src/io/shell.rs b/server/src/io/shell.rs
index 7a7efbe0..ca0dc36d 100644
--- a/server/src/io/shell.rs
+++ b/server/src/io/shell.rs
@@ -75,12 +75,7 @@ mod tests {
     #[tokio::test]
     async fn exec_shell_impl_runs_allowed_command() {
         let dir = tempdir().unwrap();
-        let result = exec_shell_impl(
-            "ls".to_string(),
-            Vec::new(),
-            dir.path().to_path_buf(),
-        )
-        .await;
+        let result = exec_shell_impl("ls".to_string(), Vec::new(), dir.path().to_path_buf()).await;
         assert!(result.is_ok());
         let output = result.unwrap();
@@ -92,13 +87,9 @@ mod tests {
         let dir = tempdir().unwrap();
         std::fs::write(dir.path().join("hello.txt"), "").unwrap();
-        let result = exec_shell_impl(
-            "ls".to_string(),
-            Vec::new(),
-            dir.path().to_path_buf(),
-        )
-        .await
-        .unwrap();
+        let result = exec_shell_impl("ls".to_string(), Vec::new(), dir.path().to_path_buf())
+            .await
+            .unwrap();
         assert!(result.stdout.contains("hello.txt"));
     }
diff --git a/server/src/io/story_metadata.rs b/server/src/io/story_metadata.rs
index f8f1d75e..f2466288 100644
--- a/server/src/io/story_metadata.rs
+++ b/server/src/io/story_metadata.rs
@@ -142,7 +142,10 @@ pub fn write_merge_failure(path: &Path, reason: &str) -> Result<(), String> {
         fs::read_to_string(path).map_err(|e| format!("Failed to read story file: {e}"))?;
     // Produce a YAML-safe inline quoted string: collapse newlines, escape inner quotes.
-    let escaped = reason.replace('"', "\\\"").replace('\n', " ").replace('\r', "");
+    let escaped = reason
+        .replace('"', "\\\"")
+        .replace('\n', " ")
+        .replace('\r', "");
     let yaml_value = format!("\"{escaped}\"");
     let updated = set_front_matter_field(&contents, "merge_failure", &yaml_value);
@@ -288,7 +291,10 @@ pub fn check_unmet_deps(project_root: &Path, stage_dir: &str, story_id: &str) ->
         Ok(c) => c,
         Err(_) => return Vec::new(),
     };
-    let deps = match parse_front_matter(&contents).ok().and_then(|m| m.depends_on) {
+    let deps = match parse_front_matter(&contents)
+        .ok()
+        .and_then(|m| m.depends_on)
+    {
         Some(d) => d,
         None => return Vec::new(),
     };
@@ -333,7 +339,10 @@ fn dep_is_done(project_root: &Path, dep_number: u32) -> bool {
 fn dep_is_archived(project_root: &Path, dep_number: u32) -> bool {
     let prefix = format!("{dep_number}_");
     let exact = dep_number.to_string();
-    let dir = project_root.join(".huskies").join("work").join("6_archived");
+    let dir = project_root
+        .join(".huskies")
+        .join("work")
+        .join("6_archived");
     if let Ok(entries) = fs::read_dir(&dir) {
         for entry in entries.flatten() {
             let path = entry.path();
@@ -365,7 +374,10 @@ pub fn check_archived_deps(project_root: &Path, stage_dir: &str, story_id: &str)
         Ok(c) => c,
         Err(_) => return Vec::new(),
     };
-    let deps = match parse_front_matter(&contents).ok().and_then(|m| m.depends_on) {
+    let deps = match parse_front_matter(&contents)
+        .ok()
+        .and_then(|m| m.depends_on)
+    {
         Some(d) => d,
         None => return Vec::new(),
     };
@@ -434,7 +446,10 @@ pub fn write_blocked_in_content(contents: &str) -> String {
 
 /// Write or update `merge_failure` in story content (pure function).
 pub fn write_merge_failure_in_content(contents: &str, reason: &str) -> String {
-    let escaped = reason.replace('"', "\\\"").replace('\n', " ").replace('\r', "");
+    let escaped = reason
+        .replace('"', "\\\"")
+        .replace('\n', " ")
+        .replace('\r', "");
     let yaml_value = format!("\"{escaped}\"");
     set_front_matter_field(contents, "merge_failure", &yaml_value)
 }
@@ -465,9 +480,7 @@ pub fn parse_unchecked_todos(contents: &str) -> Vec<String> {
         .lines()
         .filter_map(|line| {
             let trimmed = line.trim();
-            trimmed
-                .strip_prefix("- [ ] ")
-                .map(|text| text.to_string())
+            trimmed.strip_prefix("- [ ] ").map(|text| text.to_string())
         })
         .collect()
 }
@@ -486,7 +499,10 @@ workflow: tdd
 "#;
         let meta = parse_front_matter(input).expect("front matter");
-        assert_eq!(meta.name.as_deref(), Some("Establish the TDD Workflow and Gates"));
+        assert_eq!(
+            meta.name.as_deref(),
+            Some("Establish the TDD Workflow and Gates")
+        );
         assert_eq!(meta.coverage_baseline, None);
     }
 
@@ -566,7 +582,11 @@ workflow: tdd
     fn clear_front_matter_field_updates_file() {
         let tmp = tempfile::tempdir().unwrap();
         let path = tmp.path().join("story.md");
-        std::fs::write(&path, "---\nname: Test\nmerge_failure: \"bad\"\n---\n# Story\n").unwrap();
+        std::fs::write(
+            &path,
+            "---\nname: Test\nmerge_failure: \"bad\"\n---\n# Story\n",
+        )
+        .unwrap();
         clear_front_matter_field(&path, "merge_failure").unwrap();
         let contents = std::fs::read_to_string(&path).unwrap();
         assert!(!contents.contains("merge_failure"));
@@ -854,5 +874,4 @@ workflow: tdd
         // 99 doesn't exist anywhere.
         assert!(!dep_is_archived(tmp.path(), 99));
     }
-
 }
diff --git a/server/src/io/watcher.rs b/server/src/io/watcher.rs
index 2ed3d375..4a4d025c 100644
--- a/server/src/io/watcher.rs
+++ b/server/src/io/watcher.rs
@@ -301,10 +301,7 @@ fn flush_pending(
         pending
             .iter()
             .filter(|(path, _)| !path.exists())
-            .find(|(path, _)| {
-                path.file_stem()
-                    .and_then(|s| s.to_str()) == Some(item_id.as_str())
-            })
+            .find(|(path, _)| path.file_stem().and_then(|s| s.to_str()) == Some(item_id.as_str()))
             .map(|(_, stage)| stage.clone())
     } else {
         None
@@ -327,7 +324,7 @@ fn flush_pending(
 /// All state is read from and written to CRDT — no filesystem access.
 /// Worktree pruning is handled separately by the CRDT event subscriber.
 pub(crate) fn sweep_done_to_archived(done_retention: Duration) {
-    use crate::pipeline_state::{PipelineEvent, Stage, stage_dir_name, transition, read_all_typed};
+    use crate::pipeline_state::{PipelineEvent, Stage, read_all_typed, stage_dir_name, transition};
 
     for item in read_all_typed() {
         if let Stage::Done { merged_at, .. } = &item.stage {
@@ -374,10 +371,7 @@ pub(crate) fn sweep_done_to_archived(done_retention: Duration) {
 /// `git_root` — project root (passed to `git` commands and config loading).
 /// `event_tx` — broadcast sender for `ConfigChanged` events.
 /// `watcher_config` — initial sweep configuration loaded from `project.toml`.
-pub fn start_watcher(
-    git_root: PathBuf,
-    event_tx: broadcast::Sender,
-) {
+pub fn start_watcher(git_root: PathBuf, event_tx: broadcast::Sender) {
     std::thread::spawn(move || {
         let (notify_tx, notify_rx) = mpsc::channel::>();
@@ -1080,7 +1074,9 @@ mod tests {
 
         // Verify the item was moved to 6_archived in the CRDT.
         let items = crate::pipeline_state::read_all_typed();
-        let item = items.iter().find(|i| i.story_id.0 == "9880_story_sweep_old");
+        let item = items
+            .iter()
+            .find(|i| i.story_id.0 == "9880_story_sweep_old");
         assert!(
            item.is_some_and(|i| matches!(i.stage, crate::pipeline_state::Stage::Archived { .. })),
            "item should be archived after sweep"
@@ -1100,7 +1096,9 @@ mod tests {
         sweep_done_to_archived(Duration::from_secs(999_999));
 
         let items = crate::pipeline_state::read_all_typed();
-        let item = items.iter().find(|i| i.story_id.0 == "9881_story_sweep_new");
+        let item = items
+            .iter()
+            .find(|i| i.story_id.0 == "9881_story_sweep_new");
         assert!(
             item.is_some_and(|i| matches!(i.stage, crate::pipeline_state::Stage::Done { .. })),
             "item should remain in Done with long retention"
@@ -1120,7 +1118,9 @@ mod tests {
         sweep_done_to_archived(Duration::ZERO);
 
         let items = crate::pipeline_state::read_all_typed();
-        let item = items.iter().find(|i| i.story_id.0 == "9882_story_sweep_custom");
+        let item = items
+            .iter()
+            .find(|i| i.story_id.0 == "9882_story_sweep_custom");
         assert!(
             item.is_some_and(|i| matches!(i.stage, crate::pipeline_state::Stage::Archived { .. })),
             "item should be archived with zero retention"
@@ -1172,8 +1172,7 @@ mod tests {
     fn sweep_keeps_item_newer_than_retention() {
         crate::db::ensure_content_store();
 
-        let one_second_ago =
-            (chrono::Utc::now() - chrono::Duration::seconds(1)).timestamp() as f64;
+        let one_second_ago = (chrono::Utc::now() - chrono::Duration::seconds(1)).timestamp() as f64;
 
         crate::crdt_state::write_item(
             "9884_story_sweep_recent",
diff --git a/server/src/io/wizard.rs b/server/src/io/wizard.rs
index a4cd0006..32b02804 100644
--- a/server/src/io/wizard.rs
+++ b/server/src/io/wizard.rs
@@ -380,10 +380,7 @@ mod tests {
             Some("generated content".to_string()),
         );
         assert_eq!(state.steps[1].status, StepStatus::AwaitingConfirmation);
-        assert_eq!(
-            state.steps[1].content.as_deref(),
-            Some("generated content")
-        );
+        assert_eq!(state.steps[1].content.as_deref(), Some("generated content"));
     }
 
     #[test]
diff --git a/server/src/llm/chat.rs b/server/src/llm/chat.rs
index 0104675c..e3403970 100644
--- a/server/src/llm/chat.rs
+++ b/server/src/llm/chat.rs
@@ -1,9 +1,9 @@
 //! LLM chat — orchestrates multi-turn conversations with tool-calling LLM providers.
-use crate::slog;
 use crate::io::onboarding;
 use crate::llm::prompts::{ONBOARDING_PROMPT, SYSTEM_PROMPT};
 use crate::llm::providers::claude_code::ClaudeCodeResult;
 use crate::llm::types::{Message, Role, ToolCall, ToolDefinition, ToolFunctionDefinition};
+use crate::slog;
 use crate::state::SessionState;
 use crate::store::StoreOps;
 use serde::Deserialize;
@@ -767,10 +767,7 @@ mod tests {
         let store = MockStore::new();
         let result = set_anthropic_api_key_impl(&store, "sk-my-key");
         assert!(result.is_ok());
-        assert_eq!(
-            store.get("anthropic_api_key"),
-            Some(json!("sk-my-key"))
-        );
+        assert_eq!(store.get("anthropic_api_key"), Some(json!("sk-my-key")));
     }
 
     #[test]
@@ -868,8 +865,7 @@ mod tests {
         let required = read_file.function.parameters["required"]
             .as_array()
             .unwrap();
-        let required_names: Vec<&str> =
-            required.iter().map(|v| v.as_str().unwrap()).collect();
+        let required_names: Vec<&str> = required.iter().map(|v| v.as_str().unwrap()).collect();
         assert!(required_names.contains(&"path"));
     }
 
@@ -883,8 +879,7 @@ mod tests {
         let required = exec_shell.function.parameters["required"]
             .as_array()
             .unwrap();
-        let required_names: Vec<&str> =
-            required.iter().map(|v| v.as_str().unwrap()).collect();
+        let required_names: Vec<&str> = required.iter().map(|v| v.as_str().unwrap()).collect();
         assert!(required_names.contains(&"command"));
         assert!(required_names.contains(&"args"));
     }
@@ -936,9 +931,11 @@ mod tests {
             .await;
 
         assert!(result.is_err());
-        assert!(result
-            .unwrap_err()
-            .contains("Unsupported provider: unsupported-provider"));
+        assert!(
+            result
+                .unwrap_err()
+                .contains("Unsupported provider: unsupported-provider")
+        );
     }
 
     // ---------------------------------------------------------------------------
diff --git a/server/src/llm/oauth.rs b/server/src/llm/oauth.rs
index b2848bf7..261ff760 100644
--- a/server/src/llm/oauth.rs
+++ b/server/src/llm/oauth.rs
@@ -49,7 +49,9 @@ struct TokenRefreshError {
 /// Returns the path to `~/.claude/.credentials.json`.
 fn credentials_path() -> Result<PathBuf, String> {
     let home = std::env::var("HOME").map_err(|_| "HOME not set".to_string())?;
-    Ok(PathBuf::from(home).join(".claude").join(".credentials.json"))
+    Ok(PathBuf::from(home)
+        .join(".claude")
+        .join(".credentials.json"))
 }
 
 /// Read OAuth credentials from disk.
@@ -61,12 +63,7 @@ pub fn read_credentials() -> Result<CredentialsFile, String> {
             path.display()
         )
     })?;
-    serde_json::from_str(&data).map_err(|e| {
-        format!(
-            "Failed to parse {}: {e}",
-            path.display()
-        )
-    })
+    serde_json::from_str(&data).map_err(|e| format!("Failed to parse {}: {e}", path.display()))
 }
 
 /// Write updated credentials back to disk with 0600 permissions.
@@ -74,8 +71,7 @@ pub fn write_credentials(creds: &CredentialsFile) -> Result<(), String> {
     let path = credentials_path()?;
     let data = serde_json::to_string_pretty(creds)
         .map_err(|e| format!("Failed to serialize credentials: {e}"))?;
-    std::fs::write(&path, &data)
-        .map_err(|e| format!("Failed to write {}: {e}", path.display()))?;
+    std::fs::write(&path, &data).map_err(|e| format!("Failed to write {}: {e}", path.display()))?;
 
     // Restore 0600 permissions
     #[cfg(unix)]
@@ -102,9 +98,7 @@ pub async fn refresh_access_token() -> Result<(), String> {
     let refresh_token = creds.claude_ai_oauth.refresh_token.clone();
 
     if refresh_token.is_empty() {
-        return Err(
-            "No refresh token found. Run `claude login` to authenticate.".to_string(),
-        );
+        return Err("No refresh token found. Run `claude login` to authenticate.".to_string());
     }
 
     let client = reqwest::Client::new();
@@ -215,7 +209,10 @@ mod tests {
         assert_eq!(creds.claude_ai_oauth.access_token, "sk-ant-oat01-test");
         assert_eq!(creds.claude_ai_oauth.refresh_token, "sk-ant-ort01-test");
         assert_eq!(creds.claude_ai_oauth.expires_at, 1774466144677);
-        assert_eq!(creds.claude_ai_oauth.subscription_type.as_deref(), Some("max"));
+        assert_eq!(
+            creds.claude_ai_oauth.subscription_type.as_deref(),
+            Some("max")
+        );
     }
 
     #[test]
diff --git a/server/src/llm/providers/anthropic.rs b/server/src/llm/providers/anthropic.rs
index ab3408bb..bf69de13 100644
--- a/server/src/llm/providers/anthropic.rs
+++ b/server/src/llm/providers/anthropic.rs
@@ -752,9 +752,8 @@ mod tests {
             "delta": {"type": "input_json_delta", "partial_json": "{}"}
         });
         let stop_event = json!({"type": "content_block_stop"});
-        let body = format!(
-            "data: {start_event}\ndata: {delta_event}\ndata: {stop_event}\ndata: [DONE]\n"
-        );
+        let body =
+            format!("data: {start_event}\ndata: {delta_event}\ndata: {stop_event}\ndata: [DONE]\n");
 
         let _m = server
             .mock("POST", "/v1/messages")
diff --git a/server/src/llm/providers/claude_code.rs b/server/src/llm/providers/claude_code.rs
index 3a9b4ebb..7dc1fea7 100644
--- a/server/src/llm/providers/claude_code.rs
+++ b/server/src/llm/providers/claude_code.rs
@@ -77,12 +77,9 @@ impl ClaudeCodeProvider {
         let auth_failed = Arc::new(AtomicBool::new(false));
         let auth_failed_clone = auth_failed.clone();
 
-        let (token_tx, mut token_rx) =
-            tokio::sync::mpsc::unbounded_channel::();
-        let (thinking_tx, mut thinking_rx) =
-            tokio::sync::mpsc::unbounded_channel::();
-        let (activity_tx, mut activity_rx) =
-            tokio::sync::mpsc::unbounded_channel::();
+        let (token_tx, mut token_rx) = tokio::sync::mpsc::unbounded_channel::();
+        let (thinking_tx, mut thinking_rx) = tokio::sync::mpsc::unbounded_channel::();
+        let (activity_tx, mut activity_rx) = tokio::sync::mpsc::unbounded_channel::();
         let (msg_tx, msg_rx) =
std::sync::mpsc::channel::(); let (sid_tx, sid_rx) = tokio::sync::oneshot::channel::(); @@ -1038,7 +1035,15 @@ mod tests { let (sid_tx, mut sid_rx) = tokio::sync::oneshot::channel::(); let mut sid_tx_opt = Some(sid_tx); let json = json!({"type": "system", "session_id": "sess-abc-123"}); - process_json_event(&json, &tok_tx, &thi_tx, &act_tx, &msg_tx, &mut sid_tx_opt, &AtomicBool::new(false)); + process_json_event( + &json, + &tok_tx, + &thi_tx, + &act_tx, + &msg_tx, + &mut sid_tx_opt, + &AtomicBool::new(false), + ); // sid_tx should have been consumed assert!(sid_tx_opt.is_none()); let received = sid_rx.try_recv().unwrap(); @@ -1051,7 +1056,15 @@ mod tests { let (sid_tx, _sid_rx) = tokio::sync::oneshot::channel::(); let mut sid_tx_opt = Some(sid_tx); let json = json!({"type": "system"}); - process_json_event(&json, &tok_tx, &thi_tx, &act_tx, &msg_tx, &mut sid_tx_opt, &AtomicBool::new(false)); + process_json_event( + &json, + &tok_tx, + &thi_tx, + &act_tx, + &msg_tx, + &mut sid_tx_opt, + &AtomicBool::new(false), + ); // sid_tx should still be present since no session_id in event assert!(sid_tx_opt.is_some()); } diff --git a/server/src/log_buffer.rs b/server/src/log_buffer.rs index 0e0441c3..2e9b25d5 100644 --- a/server/src/log_buffer.rs +++ b/server/src/log_buffer.rs @@ -55,7 +55,12 @@ pub struct LogEntry { impl LogEntry { /// Format the entry as a single log line: `{timestamp} [{LEVEL}] {message}`. pub fn formatted(&self) -> String { - format!("{} [{}] {}", self.timestamp, self.level.as_str(), self.message) + format!( + "{} [{}] {}", + self.timestamp, + self.level.as_str(), + self.message + ) } /// Format with ANSI color codes for terminal output. 
@@ -146,7 +151,8 @@ impl LogBuffer { .iter() .filter(|entry| { severity.is_none_or(|s| &entry.level == s) - && filter.is_none_or(|f| entry.message.contains(f) || entry.formatted().contains(f)) + && filter + .is_none_or(|f| entry.message.contains(f) || entry.formatted().contains(f)) }) .map(|entry| entry.formatted()) .collect(); @@ -171,7 +177,8 @@ impl LogBuffer { .iter() .filter(|entry| { severity.is_none_or(|s| &entry.level == s) - && filter.is_none_or(|f| entry.message.contains(f) || entry.formatted().contains(f)) + && filter + .is_none_or(|f| entry.message.contains(f) || entry.formatted().contains(f)) }) .cloned() .collect(); @@ -255,11 +262,17 @@ mod tests { // Should have exactly CAPACITY lines assert_eq!(recent.len(), CAPACITY); // The oldest (line 0) should have been evicted - assert!(!recent.iter().any(|l| l.contains("line 0") && !l.contains("line 10"))); + assert!( + !recent + .iter() + .any(|l| l.contains("line 0") && !l.contains("line 10")) + ); // The newest should be present - assert!(recent - .iter() - .any(|l| l.contains(&format!("line {CAPACITY}")))); + assert!( + recent + .iter() + .any(|l| l.contains(&format!("line {CAPACITY}"))) + ); } #[test] @@ -303,10 +316,7 @@ mod tests { assert_eq!(recent.len(), 1); // Timestamp format: YYYY-MM-DDTHH:MM:SSZ let line = &recent[0]; - assert!( - line.len() > 20, - "Line should have timestamp prefix: {line}" - ); + assert!(line.len() > 20, "Line should have timestamp prefix: {line}"); // Check it starts with a 4-digit year assert!(line.chars().next().unwrap().is_ascii_digit()); assert!(line.contains('T')); @@ -372,8 +382,14 @@ mod tests { message: "test warning".into(), }; let colored = entry.colored_formatted(); - assert!(colored.starts_with("\x1b[33m"), "WARN should start with yellow ANSI code"); - assert!(colored.ends_with("\x1b[0m"), "WARN should end with ANSI reset"); + assert!( + colored.starts_with("\x1b[33m"), + "WARN should start with yellow ANSI code" + ); + assert!( + colored.ends_with("\x1b[0m"), + 
"WARN should end with ANSI reset" + ); assert!(colored.contains("[WARN]")); assert!(colored.contains("test warning")); } @@ -386,8 +402,14 @@ mod tests { message: "test error".into(), }; let colored = entry.colored_formatted(); - assert!(colored.starts_with("\x1b[31m"), "ERROR should start with red ANSI code"); - assert!(colored.ends_with("\x1b[0m"), "ERROR should end with ANSI reset"); + assert!( + colored.starts_with("\x1b[31m"), + "ERROR should start with red ANSI code" + ); + assert!( + colored.ends_with("\x1b[0m"), + "ERROR should end with ANSI reset" + ); assert!(colored.contains("[ERROR]")); assert!(colored.contains("test error")); } @@ -400,7 +422,10 @@ mod tests { message: "test info".into(), }; let colored = entry.colored_formatted(); - assert!(!colored.contains("\x1b["), "INFO should have no ANSI escape codes"); + assert!( + !colored.contains("\x1b["), + "INFO should have no ANSI escape codes" + ); assert!(colored.contains("[INFO]")); assert!(colored.contains("test info")); } diff --git a/server/src/main.rs b/server/src/main.rs index 92bd0cf6..a3633955 100644 --- a/server/src/main.rs +++ b/server/src/main.rs @@ -18,11 +18,11 @@ mod http; mod io; mod llm; pub mod log_buffer; +pub(crate) mod pipeline_state; pub mod rebuild; mod state; mod store; mod workflow; -pub(crate) mod pipeline_state; mod worktree; use crate::agents::AgentPool; @@ -132,7 +132,14 @@ fn parse_cli_args(args: &[String]) -> Result { return Err("agent mode requires --rendezvous ".to_string()); } - Ok(CliArgs { port, path, init, agent, rendezvous, gateway }) + Ok(CliArgs { + port, + path, + init, + agent, + rendezvous, + gateway, + }) } fn print_help() { @@ -156,11 +163,15 @@ fn print_help() { println!("OPTIONS:"); println!(" -h, --help Print this help and exit"); println!(" -V, --version Print the version and exit"); - println!(" --port Port to listen on (default: 3001). Persisted to project.toml."); + println!( + " --port Port to listen on (default: 3001). Persisted to project.toml." 
+ ); println!(" --rendezvous WebSocket URL of the rendezvous peer (agent mode only)."); println!(" Example: ws://server:3001/crdt-sync"); println!(" --gateway Start in gateway mode. Reads projects.toml from PATH"); - println!(" (or cwd) and proxies MCP calls to per-project containers."); + println!( + " (or cwd) and proxies MCP calls to per-project containers." + ); } /// Resolve the optional positional path argument into an absolute `PathBuf`. @@ -184,8 +195,8 @@ async fn main() -> Result<(), std::io::Error> { }); // Log version and build hash so we can verify what's running. - let build_hash = std::fs::read_to_string(".huskies/build_hash") - .unwrap_or_else(|_| "unknown".to_string()); + let build_hash = + std::fs::read_to_string(".huskies/build_hash").unwrap_or_else(|_| "unknown".to_string()); slog!( "[startup] huskies v{} (build {})", env!("CARGO_PKG_VERSION"), @@ -433,12 +444,8 @@ async fn main() -> Result<(), std::io::Error> { { let story_id = evt.story_id.clone(); tokio::task::spawn_blocking(move || { - if let Err(e) = - crate::worktree::prune_worktree_sync(&root, &story_id) - { - crate::slog!( - "[crdt] worktree prune failed for {story_id}: {e}" - ); + if let Err(e) = crate::worktree::prune_worktree_sync(&root, &story_id) { + crate::slog!("[crdt] worktree prune failed for {story_id}: {e}"); } }); } diff --git a/server/src/pipeline_state.rs b/server/src/pipeline_state.rs index 31f59fc5..b191c229 100644 --- a/server/src/pipeline_state.rs +++ b/server/src/pipeline_state.rs @@ -592,8 +592,7 @@ fn project_stage(view: &PipelineItemView) -> Result { let merged_at = view .merged_at .map(|ts| { - DateTime::from_timestamp(ts as i64, 0) - .unwrap_or(DateTime::::UNIX_EPOCH) + DateTime::from_timestamp(ts as i64, 0).unwrap_or(DateTime::::UNIX_EPOCH) }) .unwrap_or(DateTime::::UNIX_EPOCH); Ok(Stage::Done { diff --git a/server/src/rebuild.rs b/server/src/rebuild.rs index b4f5af0e..c2fafb09 100644 --- a/server/src/rebuild.rs +++ b/server/src/rebuild.rs @@ -1,8 +1,8 @@ //! 
Server rebuild and restart logic shared between the MCP tool and Matrix bot command. use crate::agents::AgentPool; -use crate::slog; use crate::chat::ChatTransport; +use crate::slog; use std::path::Path; use std::sync::Arc; @@ -31,11 +31,7 @@ pub struct BotShutdownNotifier { } impl BotShutdownNotifier { - pub fn new( - transport: Arc, - channels: Vec, - bot_name: String, - ) -> Self { + pub fn new(transport: Arc, channels: Vec, bot_name: String) -> Self { Self { transport, channels, @@ -66,10 +62,7 @@ impl BotShutdownNotifier { format!("{} is going offline (server stopped).", self.bot_name) } ShutdownReason::Rebuild => { - format!( - "{} is going offline to pick up a new build.", - self.bot_name - ) + format!("{} is going offline to pick up a new build.", self.bot_name) } }; for channel in &self.channels { @@ -221,9 +214,7 @@ pub async fn rebuild_and_restart( // Use exec() to replace the current process. // This never returns on success. use std::os::unix::process::CommandExt; - let err = std::process::Command::new(&new_exe) - .args(&args[1..]) - .exec(); + let err = std::process::Command::new(&new_exe).args(&args[1..]).exec(); // If we get here, exec() failed. Err(format!("Failed to exec new binary: {err}")) @@ -234,8 +225,8 @@ pub async fn rebuild_and_restart( #[cfg(test)] mod tests { use super::*; - use async_trait::async_trait; use crate::chat::MessageId; + use async_trait::async_trait; use std::sync::Mutex; /// In-memory transport that records sent messages. 
@@ -366,7 +357,10 @@ mod tests { let manual_msg = &transport_a.messages()[0].1; let rebuild_msg = &transport_b.messages()[0].1; - assert_ne!(manual_msg, rebuild_msg, "manual and rebuild messages must differ"); + assert_ne!( + manual_msg, rebuild_msg, + "manual and rebuild messages must differ" + ); } #[tokio::test] @@ -483,6 +477,9 @@ mod tests { let startup_msg = &transport_start.messages()[0].1; let shutdown_msg = &transport_stop.messages()[0].1; - assert_ne!(startup_msg, shutdown_msg, "startup and shutdown messages must differ"); + assert_ne!( + startup_msg, shutdown_msg, + "startup and shutdown messages must differ" + ); } } diff --git a/server/src/state.rs b/server/src/state.rs index 917e4f9f..61778e84 100644 --- a/server/src/state.rs +++ b/server/src/state.rs @@ -33,5 +33,4 @@ impl SessionState { })?; Ok(root.clone()) } - } diff --git a/server/src/workflow.rs b/server/src/workflow.rs index abf1573f..f084171e 100644 --- a/server/src/workflow.rs +++ b/server/src/workflow.rs @@ -65,7 +65,6 @@ impl WorkflowState { Ok(()) } - } fn summarize_results(results: &StoryTestResults) -> TestRunSummary { @@ -144,13 +143,14 @@ pub fn evaluate_acceptance_with_coverage( )); } if let Some(baseline) = report.baseline_percent - && report.current_percent < baseline { - decision.can_accept = false; - decision.reasons.push(format!( - "Coverage regression: {:.1}% → {:.1}% (threshold: {:.1}%).", - baseline, report.current_percent, report.threshold_percent - )); - } + && report.current_percent < baseline + { + decision.can_accept = false; + decision.reasons.push(format!( + "Coverage regression: {:.1}% → {:.1}% (threshold: {:.1}%).", + baseline, report.current_percent, report.threshold_percent + )); + } } decision @@ -185,7 +185,12 @@ mod tests { let decision = evaluate_acceptance_with_coverage(&results, Some(&coverage)); assert!(!decision.can_accept); - assert!(decision.reasons.iter().any(|r| r.contains("Coverage below threshold"))); + assert!( + decision + .reasons + .iter() + 
.any(|r| r.contains("Coverage below threshold")) + ); } #[test] @@ -211,7 +216,12 @@ mod tests { let decision = evaluate_acceptance_with_coverage(&results, Some(&coverage)); assert!(!decision.can_accept); - assert!(decision.reasons.iter().any(|r| r.contains("Coverage regression"))); + assert!( + decision + .reasons + .iter() + .any(|r| r.contains("Coverage regression")) + ); } #[test] @@ -317,7 +327,12 @@ mod tests { let results = StoryTestResults::default(); let decision = evaluate_acceptance(&results); assert!(!decision.can_accept); - assert!(decision.reasons.iter().any(|r| r.contains("No test results"))); + assert!( + decision + .reasons + .iter() + .any(|r| r.contains("No test results")) + ); } #[test] @@ -386,11 +401,7 @@ mod tests { details: None, }]; - let result = state.record_test_results_validated( - "story-29".to_string(), - unit, - integration, - ); + let result = state.record_test_results_validated("story-29".to_string(), unit, integration); assert!(result.is_ok()); assert!(state.results.contains_key("story-29")); assert_eq!(state.results["story-29"].unit.len(), 1);