fix: add --all to cargo fmt in script/test and autoformat codebase

cargo fmt without --all fails with "Failed to find targets" in
workspace repos. This was blocking every story's gates. Also ran
cargo fmt --all to fix all existing formatting issues.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
dave
2026-04-13 14:07:08 +00:00
parent ed2526ce41
commit 845b85e7a7
128 changed files with 3566 additions and 2395 deletions
@@ -1,70 +0,0 @@
---
name: "Stale 1_backlog filesystem shadows get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current"
---
# Bug 510: Stale 1_backlog filesystem shadows get re-promoted by rate-limit retry timers, yanking successfully-merged stories back into current
## Description
After a story successfully completes the entire pipeline — coder runs, gates pass, mergemaster squashes the feature branch to master, lifecycle moves the story from `4_merge/` to `5_done/` — a stale filesystem shadow of the story's markdown file remains in `.huskies/work/1_backlog/`. This shadow is a leftover from the 491/492 migration: story state moved to the database as the source of truth, but the lifecycle move logic in `lifecycle.rs` is still operating on the filesystem and doesn't fully clean up after successful pipeline completions.
When a rate-limit retry timer subsequently fires for that story (rate limits get scheduled by story 496's auto-retry whenever an agent is hard-blocked, and bug 501 means those timers aren't cancelled on successful completion either), the timer fire path calls `move_story_to_current()`, which uses the **filesystem-only** `move_item` helper. That helper finds the stale `1_backlog/` shadow and "moves" it to `2_current/` — even though the story is correctly in `5_done` in the database.
Net effect: a fully-merged, archived-to-done story suddenly reappears in `current` with a fresh coder spawned on it. The matrix bot sends `Done → Current` notifications. The agent burns tokens working on a story whose work has already shipped to master. The user sees the story flapping and assumes the merge didn't actually happen.
**Observed live on 2026-04-09 against story 503:**
```
18:31:32 [lifecycle] Moved '503_…' from work/4_merge/ to work/5_done/
18:31:32 [bot] Sending stage notification: 🎉 #503 … — Merge → Done
18:32:21 [timer] Timer fired for story 503_…
18:32:21 [lifecycle] Moved '503_…' from work/1_backlog/ to work/2_current/ ← stale shadow!
18:32:21 [auto-assign] Assigning 'coder-1' to '503_…' in 2_current/
```
The merge to master persisted (commit `41515e3b` is on master). Only the *pipeline state* got corrupted by the stale shadow being re-promoted.
This is **distinct from bug 501** (which is about manual `stop_agent` not cancelling timers) but compounds it: 501 is about user-initiated stops, this is about successful pipeline completions. Both share a root cause — the rate-limit retry timer system has no notion of "this story has moved on, cancel any pending retries" — but the *consequences* of this bug are worse because the timer fires successfully and re-creates work that shouldn't exist.
Also distinct from bug 502 (mergemaster stage-mismatch), which has already been fixed.
The deeper architectural problem this exposes: **`lifecycle.rs::move_item` and `move_story_to_current` are still on the legacy filesystem path** while the rest of the pipeline (491/492) has moved to DB-as-source-of-truth. The filesystem shadows in `.huskies/work/N_stage/` are supposed to be a *materialized rendering* of the DB state, not a parallel source of truth — but `move_item` treats them as authoritative.
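The two-store hazard described above can be sketched in a few lines. Everything below is a hypothetical stand-in (the `Pipeline` struct, `move_fs_only`, `move_db_guarded`); it models the behaviour, not the real `lifecycle.rs` API:

```rust
use std::collections::{HashMap, HashSet};

// Minimal model of the two state stores. All names are illustrative.
struct Pipeline {
    db_stage: HashMap<String, String>,  // story_id -> stage dir (source of truth)
    shadows: HashSet<(String, String)>, // (stage dir, story_id) materialized files
}

impl Pipeline {
    // Filesystem-only move, the way `move_item` behaves today: it consults
    // only the shadows, so a stale `1_backlog/` file is treated as authoritative.
    fn move_fs_only(&mut self, id: &str, from: &str, to: &str) -> bool {
        if self.shadows.remove(&(from.to_string(), id.to_string())) {
            self.shadows.insert((to.to_string(), id.to_string()));
            true
        } else {
            false
        }
    }

    // DB-guarded variant: refuse to re-promote a story the DB says is done.
    fn move_db_guarded(&mut self, id: &str, from: &str, to: &str) -> bool {
        if self.db_stage.get(id).map(String::as_str) == Some("5_done") {
            return false; // the story has moved on; ignore stale shadows
        }
        self.move_fs_only(id, from, to)
    }
}
```

With a story that is `5_done` in the DB but still shadowed in `1_backlog/`, the filesystem-only move happily "succeeds" and re-promotes it, while the guarded variant is a no-op.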
## How to Reproduce
1. Take any story through the full pipeline successfully — coder runs, gates pass, mergemaster squashes to master, story moves to `5_done`.
2. Ensure that, while the story was in flight, at least one coder run hit a hard rate limit (so a retry timer was scheduled). Bug 501 means that timer survives the successful completion.
3. Verify post-completion state:
- `SELECT stage FROM pipeline_items WHERE id = 'N_story_X';` returns `5_done`
- `ls .huskies/work/1_backlog/N_story_X.md` shows the file STILL EXISTS (the stale shadow)
- `cat .huskies/timers.json` shows a pending entry for `N_story_X` with a future `scheduled_at`
4. Wait for the timer to fire (default ~5 minutes after the last rate-limit hit).
## Actual Result
When the timer fires:
- The `[timer] Timer fired` log line appears for the already-done story
- `move_story_to_current` is called and finds the stale `1_backlog/N_story_X.md` shadow
- Lifecycle log: `[lifecycle] Moved 'N_…' from work/1_backlog/ to work/2_current/`
- Auto-assign sees the story in `2_current/` and spawns a coder
- Matrix bot sends `Done → Current` (and then later `Current → Current` etc.) stage notifications, spamming the room
- The new coder works on a story whose work is already shipped on master, burning tokens
- The story is now visible in BOTH `5_done` (via DB) AND `2_current` (via filesystem shadow), depending on which view the consumer reads
- The actual master commit is unaffected — the merge that already landed is still there. Only the *pipeline state* is corrupted.
## Expected Result
Successful pipeline completions must fully clean up the story's filesystem shadows. After `move_story_to_done` runs, `.huskies/work/1_backlog/N_story_X.md` (and any other stage shadow) for that story must not exist.
Additionally — and this is the more general fix — the rate-limit retry timer system must cancel any pending timers for a story when that story successfully completes the pipeline. This is a sibling fix to bug 501 (which is about cancelling on manual stop): both manual stop and successful completion should mean "no more retries".
The deepest fix is to migrate `lifecycle.rs::move_item` off the filesystem path and onto the DB path so the shadow files can be torn down entirely (or made strictly read-only renderings). That's a larger change that probably wants its own story, not a bug fix.
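One shape the timer-cancellation half of the fix could take, assuming timers are keyed by story id. `TimerStore` and the `move_story_to_done` body here are illustrative only, not the real huskies code:

```rust
use std::collections::HashMap;

// Hypothetical in-memory view of the retry timers; the real store would be
// backed by `.huskies/timers.json`.
#[derive(Default)]
struct TimerStore {
    pending: HashMap<String, i64>, // story_id -> resume_at (unix seconds)
}

impl TimerStore {
    fn schedule(&mut self, story_id: &str, resume_at: i64) {
        self.pending.insert(story_id.to_string(), resume_at);
    }

    // Drop every pending retry for the story; returns true if any existed.
    fn cancel_for_story(&mut self, story_id: &str) -> bool {
        self.pending.remove(story_id).is_some()
    }
}

fn move_story_to_done(timers: &mut TimerStore, story_id: &str) {
    // 1. Update the DB stage and remove every filesystem shadow (elided).
    // 2. Cancel retries BEFORE persisting the timer file, so a crash between
    //    the two steps cannot resurrect the timer.
    timers.cancel_for_story(story_id);
}
```

The same `cancel_for_story` hook would serve both this path and bug 501's manual-stop path, which is what makes the two fixes siblings.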
## Acceptance Criteria
- [ ] After a story moves to `5_done` via the normal pipeline path (mergemaster success), the filesystem shadow at `.huskies/work/1_backlog/N_story_X.md` is removed (along with any other stage shadows)
- [ ] When a story moves to `5_done`, any pending rate-limit retry timer for that story is cancelled (the entry is removed from `timers.json` before the file is persisted)
- [ ] Regression test: simulate the full repro sequence — run a story through the pipeline with a mid-flight rate limit, complete the merge, fast-forward to the timer fire, assert (a) the story stays in 5_done, (b) no agent is spawned, (c) no Done→Current notification fires
- [ ] No regression in bug 501's fix for manual-stop timer cancellation
- [ ] Filesystem shadow cleanup is symmetric — also runs on delete_story, move_story to backlog, etc., not just the done path
- [ ] The matrix bot does not spam Done→Current notifications for stories whose work has actually completed
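A sketch of what the regression-test criterion above could assert, against a tiny in-memory stand-in (`Sim`) rather than the real pipeline; the real test would drive the actual completion and timer-fire code paths:

```rust
use std::collections::{HashMap, HashSet};

// Illustrative simulation of the repro sequence. Field names are invented.
#[derive(Default)]
struct Sim {
    db_stage: HashMap<String, String>,
    shadows: HashSet<(String, String)>, // (stage dir, story_id)
    timers: HashSet<String>,            // story ids with a pending retry
    spawned: Vec<String>,               // stories a coder was spawned on
}

impl Sim {
    // Fixed completion path: move to done, tear down shadows, cancel timers.
    fn complete(&mut self, id: &str) {
        self.db_stage.insert(id.to_string(), "5_done".to_string());
        self.shadows.retain(|(_, s)| s != id);
        self.timers.remove(id);
    }

    // Timer fire: with the fix, a cancelled timer is simply absent.
    fn fire_timer(&mut self, id: &str) {
        if !self.timers.remove(id) {
            return; // cancelled on completion; nothing to re-promote
        }
        if self.shadows.remove(&("1_backlog".to_string(), id.to_string())) {
            self.shadows.insert(("2_current".to_string(), id.to_string()));
            self.spawned.push(id.to_string());
        }
    }
}
```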
+1 -5
View File
@@ -264,11 +264,7 @@ impl<T: CrdtNode + DebugView> BaseCrdt<T> {
// Bounded queue overflow: evict the oldest op from the largest
// pending bucket before adding the new one. See CAUSAL_QUEUE_MAX.
if self.queue_len >= CAUSAL_QUEUE_MAX {
if let Some(bucket) = self
.message_q
.values_mut()
.max_by_key(|v| v.len())
{
if let Some(bucket) = self.message_q.values_mut().max_by_key(|v| v.len()) {
if !bucket.is_empty() {
bucket.remove(0);
self.queue_len = self.queue_len.saturating_sub(1);
+1 -1
View File
@@ -1,5 +1,5 @@
use crate::debug::DebugView;
use crate::json_crdt::{CrdtNode, OpState, JsonValue};
use crate::json_crdt::{CrdtNode, JsonValue, OpState};
use crate::op::{join_path, print_path, Op, PathSegment, SequenceNumber};
use std::cmp::{max, Ordering};
use std::fmt::Debug;
+1 -1
View File
@@ -16,7 +16,7 @@ fi
echo "=== Checking Rust formatting ==="
if cargo fmt --version &>/dev/null; then
cargo fmt --manifest-path "$PROJECT_ROOT/Cargo.toml" --check
cargo fmt --manifest-path "$PROJECT_ROOT/Cargo.toml" --all --check
else
echo "Skipping Rust formatting check (rustfmt not installed)"
fi
+33 -27
View File
@@ -226,10 +226,7 @@ pub enum PipelineEvent {
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum TransitionError {
/// The current stage doesn't accept this event.
InvalidTransition {
from_stage: String,
event: String,
},
InvalidTransition { from_stage: String, event: String },
}
// ── The transition function ──────────────────────────────────────────────────
@@ -260,11 +257,23 @@ pub fn transition(state: Stage, event: PipelineEvent) -> Result<Stage, Transitio
// ── Forward path: backlog → current → (qa →) merge → done ──────────
(Backlog, DepsMet) => Ok(Coding),
(Coding, GatesStarted) => Ok(Qa),
(Coding, QaSkipped { feature_branch, commits_ahead }) => Ok(Merge {
(
Coding,
QaSkipped {
feature_branch,
commits_ahead,
},
) => Ok(Merge {
feature_branch,
commits_ahead,
}),
(Qa, GatesPassed { feature_branch, commits_ahead }) => Ok(Merge {
(
Qa,
GatesPassed {
feature_branch,
commits_ahead,
},
) => Ok(Merge {
feature_branch,
commits_ahead,
}),
@@ -414,7 +423,9 @@ pub fn execution_transition(
}),
(Running { agent, .. }, HitRateLimit { resume_at })
| (Pending { agent, .. }, HitRateLimit { resume_at }) => Ok(RateLimited { agent, resume_at }),
| (Pending { agent, .. }, HitRateLimit { resume_at }) => {
Ok(RateLimited { agent, resume_at })
}
(RateLimited { agent, .. }, SpawnedSuccessfully) => Ok(Running {
agent,
@@ -747,10 +758,7 @@ mod tests {
assert!(matches!(e, ExecutionState::Running { .. }));
let e = execution_transition(e, ExecutionEvent::Exited { exit_code: 0 }).unwrap();
assert!(matches!(
e,
ExecutionState::Completed { exit_code: 0, .. }
));
assert!(matches!(e, ExecutionState::Completed { exit_code: 0, .. }));
}
#[test]
@@ -800,22 +808,20 @@ fn main() {
// Helper to apply a transition + fire the bus.
let mut current_stage = Stage::Backlog;
let step = |bus: &EventBus,
stage: &mut Stage,
event: PipelineEvent|
-> Result<(), TransitionError> {
let before = stage.clone();
let after = transition(stage.clone(), event.clone())?;
bus.fire(TransitionFired {
story_id: story_id.clone(),
before,
after: after.clone(),
event,
at: Utc::now(),
});
*stage = after;
Ok(())
};
let step =
|bus: &EventBus, stage: &mut Stage, event: PipelineEvent| -> Result<(), TransitionError> {
let before = stage.clone();
let after = transition(stage.clone(), event.clone())?;
bus.fire(TransitionFired {
story_id: story_id.clone(),
before,
after: after.clone(),
event,
at: Utc::now(),
});
*stage = after;
Ok(())
};
println!("Initial: {current_stage:?}\n");
+25 -37
View File
@@ -167,10 +167,9 @@ impl PipelineMachine {
// transitions forward and doesn't read them — but they're available
// to inspect via the State::Merge variant generated by the macro.
match event {
PipelineEvent::MergeSucceeded { merge_commit } => Transition(State::done(
Utc::now(),
merge_commit.clone(),
)),
PipelineEvent::MergeSucceeded { merge_commit } => {
Transition(State::done(Utc::now(), merge_commit.clone()))
}
PipelineEvent::MergeFailedFinal { reason } => Transition(State::archived(
Utc::now(),
ArchiveReason::MergeFailed {
@@ -205,9 +204,7 @@ impl PipelineMachine {
reason: reason.clone(),
},
)),
PipelineEvent::Abandon => {
Transition(State::archived(now, ArchiveReason::Abandoned))
}
PipelineEvent::Abandon => Transition(State::archived(now, ArchiveReason::Abandoned)),
PipelineEvent::Supersede { by } => Transition(State::archived(
now,
ArchiveReason::Superseded { by: by.clone() },
@@ -230,12 +227,8 @@ impl PipelineMachine {
let _ = merged_at; // currently unused; available for queries
let _ = merge_commit;
match event {
PipelineEvent::Accepted => {
Transition(State::archived(now, ArchiveReason::Completed))
}
PipelineEvent::Abandon => {
Transition(State::archived(now, ArchiveReason::Abandoned))
}
PipelineEvent::Accepted => Transition(State::archived(now, ArchiveReason::Completed)),
PipelineEvent::Abandon => Transition(State::archived(now, ArchiveReason::Abandoned)),
PipelineEvent::Supersede { by } => Transition(State::archived(
now,
ArchiveReason::Superseded { by: by.clone() },
@@ -294,10 +287,7 @@ pub mod execution {
#[derive(Default)]
pub struct ExecutionMachine;
#[state_machine(
initial = "State::idle()",
state(derive(Debug, Clone, PartialEq, Eq))
)]
#[state_machine(initial = "State::idle()", state(derive(Debug, Clone, PartialEq, Eq)))]
impl ExecutionMachine {
// ── Idle: no agent on this node is working on this story ──────────
@@ -327,11 +317,9 @@ pub mod execution {
ExecutionEvent::HitRateLimit { resume_at } => {
Transition(State::rate_limited(agent.clone(), *resume_at))
}
ExecutionEvent::Exited { exit_code } => Transition(State::completed(
agent.clone(),
*exit_code,
Utc::now(),
)),
ExecutionEvent::Exited { exit_code } => {
Transition(State::completed(agent.clone(), *exit_code, Utc::now()))
}
_ => Super,
}
}
@@ -358,11 +346,9 @@ pub mod execution {
ExecutionEvent::HitRateLimit { resume_at } => {
Transition(State::rate_limited(agent.clone(), *resume_at))
}
ExecutionEvent::Exited { exit_code } => Transition(State::completed(
agent.clone(),
*exit_code,
Utc::now(),
)),
ExecutionEvent::Exited { exit_code } => {
Transition(State::completed(agent.clone(), *exit_code, Utc::now()))
}
_ => Super,
}
}
@@ -380,11 +366,9 @@ pub mod execution {
let now = Utc::now();
Transition(State::running(agent.clone(), now, now))
}
ExecutionEvent::Exited { exit_code } => Transition(State::completed(
agent.clone(),
*exit_code,
Utc::now(),
)),
ExecutionEvent::Exited { exit_code } => {
Transition(State::completed(agent.clone(), *exit_code, Utc::now()))
}
_ => Super,
}
}
@@ -411,9 +395,7 @@ pub mod execution {
#[superstate]
fn any(event: &ExecutionEvent) -> Response<State> {
match event {
ExecutionEvent::Stopped | ExecutionEvent::Reset => {
Transition(State::idle())
}
ExecutionEvent::Stopped | ExecutionEvent::Reset => Transition(State::idle()),
_ => Handled,
}
}
@@ -677,7 +659,10 @@ mod tests {
assert!(matches!(em.state(), ExecState::Running { .. }));
em.handle(&ExecutionEvent::Exited { exit_code: 0 });
assert!(matches!(em.state(), ExecState::Completed { exit_code: 0, .. }));
assert!(matches!(
em.state(),
ExecState::Completed { exit_code: 0, .. }
));
}
#[test]
@@ -781,5 +766,8 @@ fn main() {
});
println!(" before Unblock: {:?}", sm2.state());
sm2.handle(&PipelineEvent::Unblock); // silently ignored — no transition
println!(" after Unblock: {:?} (no change — Unblock is a no-op from Done)", sm2.state());
println!(
" after Unblock: {:?} (no change — Unblock is a no-op from Done)",
sm2.state()
);
}
+45 -58
View File
@@ -6,7 +6,6 @@ use std::fs::{self, File, OpenOptions};
use std::io::{BufRead, BufReader, Write};
use std::path::{Path, PathBuf};
/// A single line in the agent log file (JSONL format).
#[derive(Debug, Serialize, Deserialize)]
pub struct LogEntry {
@@ -72,10 +71,7 @@ impl AgentLogWriter {
/// Return the log directory for a story.
fn log_dir(project_root: &Path, story_id: &str) -> PathBuf {
project_root
.join(".huskies")
.join("logs")
.join(story_id)
project_root.join(".huskies").join("logs").join(story_id)
}
/// Return the path to a specific log file.
@@ -102,8 +98,8 @@ pub fn read_log(path: &Path) -> Result<Vec<LogEntry>, String> {
if trimmed.is_empty() {
continue;
}
let entry: LogEntry = serde_json::from_str(trimmed)
.map_err(|e| format!("Failed to parse log entry: {e}"))?;
let entry: LogEntry =
serde_json::from_str(trimmed).map_err(|e| format!("Failed to parse log entry: {e}"))?;
entries.push(entry);
}
@@ -197,10 +193,7 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O
Some("done") => Some(format!("{pfx} DONE")),
Some("status") => {
// Skip trivial running/started noise
let status = event
.get("status")
.and_then(|v| v.as_str())
.unwrap_or("?");
let status = event.get("status").and_then(|v| v.as_str()).unwrap_or("?");
match status {
"running" | "started" => None,
_ => Some(format!("{pfx} STATUS: {status}")),
@@ -211,10 +204,7 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O
match data.get("type").and_then(|v| v.as_str()) {
Some("assistant") => {
let mut parts: Vec<String> = Vec::new();
if let Some(arr) = data
.pointer("/message/content")
.and_then(|v| v.as_array())
{
if let Some(arr) = data.pointer("/message/content").and_then(|v| v.as_array()) {
for item in arr {
match item.get("type").and_then(|v| v.as_str()) {
Some("text") => {
@@ -228,15 +218,11 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O
}
}
Some("tool_use") => {
let name = item
.get("name")
.and_then(|v| v.as_str())
.unwrap_or("?");
let name =
item.get("name").and_then(|v| v.as_str()).unwrap_or("?");
let input = item
.get("input")
.map(|v| {
serde_json::to_string(v).unwrap_or_default()
})
.map(|v| serde_json::to_string(v).unwrap_or_default())
.unwrap_or_default();
let display = if input.len() > 200 {
format!("{}...", &input[..200])
@@ -257,14 +243,9 @@ pub fn format_log_entry_as_text(timestamp: &str, event: &serde_json::Value) -> O
}
Some("user") => {
let mut parts: Vec<String> = Vec::new();
if let Some(arr) = data
.pointer("/message/content")
.and_then(|v| v.as_array())
{
if let Some(arr) = data.pointer("/message/content").and_then(|v| v.as_array()) {
for item in arr {
if item.get("type").and_then(|v| v.as_str())
!= Some("tool_result")
{
if item.get("type").and_then(|v| v.as_str()) != Some("tool_result") {
continue;
}
let content_str = match item.get("content") {
@@ -316,11 +297,7 @@ pub fn read_log_as_readable_lines(path: &Path) -> Result<Vec<String>, String> {
///
/// Scans `.huskies/logs/{story_id}/` for files matching `{agent_name}-*.log`
/// and returns the one with the most recent modification time.
pub fn find_latest_log(
project_root: &Path,
story_id: &str,
agent_name: &str,
) -> Option<PathBuf> {
pub fn find_latest_log(project_root: &Path, story_id: &str, agent_name: &str) -> Option<PathBuf> {
let dir = log_dir(project_root, story_id);
if !dir.is_dir() {
return None;
@@ -362,8 +339,7 @@ mod tests {
let tmp = tempdir().unwrap();
let root = tmp.path();
let _writer =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-abc123").unwrap();
let _writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-abc123").unwrap();
let expected_path = root
.join(".huskies")
@@ -378,8 +354,7 @@ mod tests {
let tmp = tempdir().unwrap();
let root = tmp.path();
let mut writer =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-001").unwrap();
let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-001").unwrap();
let event = AgentEvent::Status {
story_id: "42_story_foo".to_string(),
@@ -426,8 +401,7 @@ mod tests {
let tmp = tempdir().unwrap();
let root = tmp.path();
let mut writer =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-002").unwrap();
let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-002").unwrap();
let events = vec![
AgentEvent::Status {
@@ -472,10 +446,8 @@ mod tests {
let tmp = tempdir().unwrap();
let root = tmp.path();
let mut writer1 =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-aaa").unwrap();
let mut writer2 =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-bbb").unwrap();
let mut writer1 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-aaa").unwrap();
let mut writer2 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-bbb").unwrap();
writer1
.write_event(&AgentEvent::Output {
@@ -496,7 +468,10 @@ mod tests {
let path1 = log_file_path(root, "42_story_foo", "coder-1", "sess-aaa");
let path2 = log_file_path(root, "42_story_foo", "coder-1", "sess-bbb");
assert_ne!(path1, path2, "Different sessions should use different files");
assert_ne!(
path1, path2,
"Different sessions should use different files"
);
let entries1 = read_log(&path1).unwrap();
let entries2 = read_log(&path2).unwrap();
@@ -513,8 +488,7 @@ mod tests {
let root = tmp.path();
// Create two log files with a small delay
let mut writer1 =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-old").unwrap();
let mut writer1 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-old").unwrap();
writer1
.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
@@ -527,8 +501,7 @@ mod tests {
// Touch the second file to ensure it's newer
std::thread::sleep(std::time::Duration::from_millis(50));
let mut writer2 =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-new").unwrap();
let mut writer2 = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-new").unwrap();
writer2
.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
@@ -568,8 +541,7 @@ mod tests {
drop(w1);
std::thread::sleep(std::time::Duration::from_millis(10));
let mut w2 =
AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-bbb").unwrap();
let mut w2 = AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-bbb").unwrap();
w2.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
agent_name: "mergemaster".to_string(),
@@ -601,8 +573,7 @@ mod tests {
.unwrap();
drop(w1);
let mut w2 =
AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-b").unwrap();
let mut w2 = AgentLogWriter::new(root, "42_story_foo", "mergemaster", "sess-b").unwrap();
w2.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
agent_name: "mergemaster".to_string(),
@@ -704,7 +675,10 @@ mod tests {
}
});
let result = format_log_entry_as_text(ts, &event).unwrap();
assert!(result.contains("TOOL: Read"), "should show tool call: {result}");
assert!(
result.contains("TOOL: Read"),
"should show tool call: {result}"
);
assert!(result.contains("file_path"), "should show input: {result}");
}
@@ -728,7 +702,10 @@ mod tests {
}
});
let result = format_log_entry_as_text(ts, &event).unwrap();
assert!(result.contains("Now I will read the file."), "should show text: {result}");
assert!(
result.contains("Now I will read the file."),
"should show text: {result}"
);
}
#[test]
@@ -743,7 +720,10 @@ mod tests {
"event": {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "chunk"}}
}
});
assert!(format_log_entry_as_text(ts, &event).is_none(), "stream events should be skipped");
assert!(
format_log_entry_as_text(ts, &event).is_none(),
"stream events should be skipped"
);
}
#[test]
@@ -771,7 +751,11 @@ mod tests {
let path = log_file_path(root, "42_story_foo", "coder-1", "sess-readable");
let lines = read_log_as_readable_lines(&path).unwrap();
assert_eq!(lines.len(), 2, "Should produce 2 readable lines");
assert!(lines[0].contains("Let me read the file"), "first line: {}", lines[0]);
assert!(
lines[0].contains("Let me read the file"),
"first line: {}",
lines[0]
);
assert!(lines[1].contains("DONE"), "second line: {}", lines[1]);
}
@@ -802,7 +786,10 @@ mod tests {
};
// File should still exist and be readable
assert!(path.exists(), "Log file should persist after writer is dropped");
assert!(
path.exists(),
"Log file should persist after writer is dropped"
);
let entries = read_log(&path).unwrap();
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].event["type"], "status");
+10 -8
View File
@@ -51,14 +51,20 @@ pub async fn run(
println!("\x1b[96;1m[agent-mode]\x1b[0m Starting headless build agent");
println!("\x1b[96;1m[agent-mode]\x1b[0m Rendezvous: {rendezvous_url}");
println!("\x1b[96;1m[agent-mode]\x1b[0m Project: {}", project_root.display());
println!(
"\x1b[96;1m[agent-mode]\x1b[0m Project: {}",
project_root.display()
);
// Validate project config.
let config = ProjectConfig::load(&project_root).unwrap_or_else(|e| {
eprintln!("error: invalid project config: {e}");
std::process::exit(1);
});
slog!("[agent-mode] Loaded config with {} agents", config.agent.len());
slog!(
"[agent-mode] Loaded config with {} agents",
config.agent.len()
);
// Event bus for pipeline lifecycle events.
let (watcher_tx, _) = broadcast::channel::<watcher::WatcherEvent>(1024);
@@ -79,9 +85,7 @@ pub async fn run(
{
let story_id = evt.story_id.clone();
tokio::task::spawn_blocking(move || {
if let Err(e) =
crate::worktree::prune_worktree_sync(&root, &story_id)
{
if let Err(e) = crate::worktree::prune_worktree_sync(&root, &story_id) {
slog!("[agent-mode] worktree prune failed for {story_id}: {e}");
}
});
@@ -113,9 +117,7 @@ pub async fn run(
if let watcher::WatcherEvent::WorkItem { ref stage, .. } = event
&& matches!(stage.as_str(), "2_current" | "3_qa" | "4_merge")
{
slog!(
"[agent-mode] CRDT transition in {stage}/; triggering auto-assign."
);
slog!("[agent-mode] CRDT transition in {stage}/; triggering auto-assign.");
auto_agents.auto_assign_available_work(&auto_root).await;
}
}
+18 -7
View File
@@ -36,9 +36,7 @@ pub(crate) fn worktree_has_committed_work(wt_path: &Path) -> bool {
.current_dir(wt_path)
.output();
match output {
Ok(out) if out.status.success() => {
!String::from_utf8_lossy(&out.stdout).trim().is_empty()
}
Ok(out) if out.status.success() => !String::from_utf8_lossy(&out.stdout).trim().is_empty(),
_ => false,
}
}
@@ -258,14 +256,21 @@ mod tests {
let script_dir = path.join("script");
fs::create_dir_all(&script_dir).unwrap();
let script_test = script_dir.join("test");
fs::write(&script_test, "#!/usr/bin/env bash\necho 'all tests passed'\nexit 0\n").unwrap();
fs::write(
&script_test,
"#!/usr/bin/env bash\necho 'all tests passed'\nexit 0\n",
)
.unwrap();
let mut perms = fs::metadata(&script_test).unwrap().permissions();
perms.set_mode(0o755);
fs::set_permissions(&script_test, perms).unwrap();
let (passed, output) = run_project_tests(path).unwrap();
assert!(passed, "script/test exiting 0 should pass");
assert!(output.contains("script/test"), "output should mention script/test");
assert!(
output.contains("script/test"),
"output should mention script/test"
);
}
#[cfg(unix)]
@@ -286,7 +291,10 @@ mod tests {
let (passed, output) = run_project_tests(path).unwrap();
assert!(!passed, "script/test exiting 1 should fail");
assert!(output.contains("script/test"), "output should mention script/test");
assert!(
output.contains("script/test"),
"output should mention script/test"
);
}
// ── run_coverage_gate tests ───────────────────────────────────────────────
@@ -347,7 +355,10 @@ mod tests {
let script = script_dir.join("test_coverage");
{
let mut f = fs::File::create(&script).unwrap();
f.write_all(b"#!/usr/bin/env bash\necho 'FAIL: Coverage 40% is below threshold 80%'\nexit 1\n").unwrap();
f.write_all(
b"#!/usr/bin/env bash\necho 'FAIL: Coverage 40% is below threshold 80%'\nexit 1\n",
)
.unwrap();
f.sync_all().unwrap();
}
let mut perms = fs::metadata(&script).unwrap().permissions();
+38 -14
View File
@@ -37,9 +37,7 @@ fn move_item<'a>(
// Use the typed projection for compile-safe stage comparison.
if let Ok(Some(typed_item)) = crate::pipeline_state::read_typed(story_id) {
let current_dir = typed_item.stage.dir_name();
if current_dir == target_dir
|| extra_done_dirs.contains(&current_dir)
{
if current_dir == target_dir || extra_done_dirs.contains(&current_dir) {
return Ok(None); // Idempotent: already there.
}
@@ -77,11 +75,7 @@ fn move_item<'a>(
}))
};
crate::db::move_item_stage(
story_id,
target_dir,
transform.as_ref().map(|f| f.as_ref()),
);
crate::db::move_item_stage(story_id, target_dir, transform.as_ref().map(|f| f.as_ref()));
slog!("[lifecycle] Moved '{story_id}' from work/{src_dir}/ to work/{target_dir}/");
return Ok(Some(src_dir));
@@ -121,7 +115,16 @@ fn move_item<'a>(
/// that has already advanced past the coding stage.
/// Idempotent: if already in `2_current/`, returns Ok. If not found, logs and returns Ok.
pub fn move_story_to_current(project_root: &Path, story_id: &str) -> Result<(), String> {
move_item(project_root, story_id, &["1_backlog"], "2_current", &[], true, &[]).map(|_| ())
move_item(
project_root,
story_id,
&["1_backlog"],
"2_current",
&[],
true,
&[],
)
.map(|_| ())
}
/// Check whether a feature branch `feature/story-{story_id}` exists and has
@@ -205,12 +208,25 @@ pub fn move_story_to_qa(project_root: &Path, story_id: &str) -> Result<(), Strin
}
/// Move a story from `work/3_qa/` back to `work/2_current/`, clearing `review_hold` and writing notes.
pub fn reject_story_from_qa(project_root: &Path, story_id: &str, notes: &str) -> Result<(), String> {
let moved = move_item(project_root, story_id, &["3_qa"], "2_current", &[], false, &["review_hold"])?;
pub fn reject_story_from_qa(
project_root: &Path,
story_id: &str,
notes: &str,
) -> Result<(), String> {
let moved = move_item(
project_root,
story_id,
&["3_qa"],
"2_current",
&[],
false,
&["review_hold"],
)?;
if moved.is_some() && !notes.is_empty() {
// Append rejection notes to the stored content.
if let Some(content) = crate::db::read_content(story_id) {
let updated = crate::io::story_metadata::write_rejection_notes_to_content(&content, notes);
let updated =
crate::io::story_metadata::write_rejection_notes_to_content(&content, notes);
crate::db::write_content(story_id, &updated);
// Re-sync to DB.
crate::db::write_item_with_content(story_id, "2_current", &updated);
@@ -251,8 +267,16 @@ pub fn move_story_to_stage(
let all_dirs: Vec<&str> = STAGES.iter().map(|(_, dir)| *dir).collect();
match move_item(project_root, story_id, &all_dirs, target_dir, &[], false, &[])
.map_err(|_| format!("Work item '{story_id}' not found in any pipeline stage."))?
match move_item(
project_root,
story_id,
&all_dirs,
target_dir,
&[],
false,
&[],
)
.map_err(|_| format!("Work item '{story_id}' not found in any pipeline stage."))?
{
Some(src_dir) => {
let from_stage = STAGES
+11 -2
View File
@@ -248,7 +248,9 @@ pub(crate) fn run_squash_merge(
.output()
.map_err(|e| format!("Failed to check merge diff: {e}"))?;
let changed_files = String::from_utf8_lossy(&diff_check.stdout);
let has_code_changes = changed_files.lines().any(|f| !f.starts_with(".huskies/work/"));
let has_code_changes = changed_files
.lines()
.any(|f| !f.starts_with(".huskies/work/"));
if !has_code_changes {
all_output.push_str(
"=== Merge commit contains only .huskies/ file moves, no code changes ===\n",
@@ -423,7 +425,14 @@ pub(crate) fn run_squash_merge(
// Exclude .huskies/work/ (pipeline file moves) but keep .huskies/project.toml
// and other config files which are legitimate deliverables.
let diff_stat = Command::new("git")
.args(["diff", "--stat", "HEAD~1..HEAD", "--", ".", ":(exclude).huskies/work"])
.args([
"diff",
"--stat",
"HEAD~1..HEAD",
"--",
".",
":(exclude).huskies/work",
])
.current_dir(project_root)
.output()
.map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
@@ -64,8 +64,7 @@ impl AgentPool {
}
// All deps met — promote from backlog to current.
slog!("[auto-assign] Story '{story_id}' deps met; promoting from backlog to current.");
if let Err(e) =
crate::agents::lifecycle::move_story_to_current(project_root, story_id)
if let Err(e) = crate::agents::lifecycle::move_story_to_current(project_root, story_id)
{
slog!("[auto-assign] Failed to promote '{story_id}' to current: {e}");
}
@@ -160,10 +159,12 @@ impl AgentPool {
);
let _ = crate::io::story_metadata::write_blocked(&story_path);
}
let _ = self.watcher_tx.send(crate::io::watcher::WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
reason: empty_diff_reason.to_string(),
});
let _ = self
.watcher_tx
.send(crate::io::watcher::WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
reason: empty_diff_reason.to_string(),
});
continue;
}
@@ -570,9 +571,12 @@ mod tests {
pool.auto_assign_available_work(root).await;
let agents = pool.agents.lock().unwrap();
let has_pending = agents
.values()
.any(|a| matches!(a.status, crate::agents::AgentStatus::Pending | crate::agents::AgentStatus::Running));
let has_pending = agents.values().any(|a| {
matches!(
a.status,
crate::agents::AgentStatus::Pending | crate::agents::AgentStatus::Running
)
});
assert!(
has_pending,
"story with all deps done should be auto-assigned"
+19 -17
View File
@@ -161,17 +161,19 @@ impl AgentPool {
match qa_mode {
crate::io::story_metadata::QaMode::Server => {
if let Err(e) =
crate::agents::move_story_to_merge(project_root, story_id)
{
eprintln!("[startup:reconcile] Failed to move '{story_id}' to 4_merge/: {e}");
if let Err(e) = crate::agents::move_story_to_merge(project_root, story_id) {
eprintln!(
"[startup:reconcile] Failed to move '{story_id}' to 4_merge/: {e}"
);
let _ = progress_tx.send(ReconciliationEvent {
story_id: story_id.clone(),
status: "failed".to_string(),
message: format!("Failed to advance to merge: {e}"),
});
} else {
eprintln!("[startup:reconcile] Moved '{story_id}' → 4_merge/ (qa: server).");
eprintln!(
"[startup:reconcile] Moved '{story_id}' → 4_merge/ (qa: server)."
);
let _ = progress_tx.send(ReconciliationEvent {
story_id: story_id.clone(),
status: "advanced".to_string(),
@@ -180,10 +182,10 @@ impl AgentPool {
}
}
crate::io::story_metadata::QaMode::Agent => {
if let Err(e) =
crate::agents::move_story_to_qa(project_root, story_id)
{
eprintln!("[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}");
if let Err(e) = crate::agents::move_story_to_qa(project_root, story_id) {
eprintln!(
"[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}"
);
let _ = progress_tx.send(ReconciliationEvent {
story_id: story_id.clone(),
status: "failed".to_string(),
@@ -199,10 +201,10 @@ impl AgentPool {
}
}
crate::io::story_metadata::QaMode::Human => {
if let Err(e) =
crate::agents::move_story_to_qa(project_root, story_id)
{
eprintln!("[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}");
if let Err(e) = crate::agents::move_story_to_qa(project_root, story_id) {
eprintln!(
"[startup:reconcile] Failed to move '{story_id}' to 3_qa/: {e}"
);
let _ = progress_tx.send(ReconciliationEvent {
story_id: story_id.clone(),
status: "failed".to_string(),
@@ -219,7 +221,9 @@ impl AgentPool {
"[startup:reconcile] Failed to set review_hold on '{story_id}': {e}"
);
}
eprintln!("[startup:reconcile] Moved '{story_id}' → 3_qa/ (qa: human — holding for review).");
eprintln!(
"[startup:reconcile] Moved '{story_id}' → 3_qa/ (qa: human — holding for review)."
);
let _ = progress_tx.send(ReconciliationEvent {
story_id: story_id.clone(),
status: "review_hold".to_string(),
@@ -284,9 +288,7 @@ impl AgentPool {
let story_path = project_root
.join(".huskies/work/3_qa")
.join(format!("{story_id}.md"));
if let Err(e) =
crate::io::story_metadata::write_review_hold(&story_path)
{
if let Err(e) = crate::io::story_metadata::write_review_hold(&story_path) {
eprintln!(
"[startup:reconcile] Failed to set review_hold on '{story_id}': {e}"
);
+7 -3
@@ -31,7 +31,9 @@ pub(super) fn scan_stage_items(project_root: &Path, stage_dir: &str) -> Vec<Stri
// Also include filesystem items (backwards compat / migration fallback).
let dir = project_root.join(".huskies").join("work").join(stage_dir);
if dir.is_dir() && let Ok(entries) = std::fs::read_dir(&dir) {
if dir.is_dir()
&& let Ok(entries) = std::fs::read_dir(&dir)
{
for entry in entries.flatten() {
let path = entry.path();
if path.extension().and_then(|e| e.to_str()) == Some("md")
@@ -576,7 +578,9 @@ stage = "coder"
);
let count = count_active_agents_for_stage(&config, &agents, &PipelineStage::Coder);
assert_eq!(count, 1, "Only Running coder should be counted, not Completed");
assert_eq!(
count, 1,
"Only Running coder should be counted, not Completed"
);
}
}
@@ -52,18 +52,18 @@ pub(super) fn is_story_blocked(project_root: &Path, _stage_dir: &str, story_id:
///
/// Reads dependency state from the CRDT document first. Falls back to the
/// filesystem when the CRDT layer is not initialised.
pub(super) fn has_unmet_dependencies(
project_root: &Path,
stage_dir: &str,
story_id: &str,
) -> bool {
pub(super) fn has_unmet_dependencies(project_root: &Path, stage_dir: &str, story_id: &str) -> bool {
// Prefer CRDT-based check.
let crdt_deps = crate::crdt_state::check_unmet_deps_crdt(story_id);
if !crdt_deps.is_empty() {
return true;
}
// If the CRDT had the item and returned empty deps, it means all are met.
if crate::pipeline_state::read_typed(story_id).ok().flatten().is_some() {
if crate::pipeline_state::read_typed(story_id)
.ok()
.flatten()
.is_some()
{
return false;
}
// Fallback: filesystem check (CRDT not initialised or item not yet in CRDT).
@@ -82,7 +82,11 @@ pub(super) fn check_archived_dependencies(
story_id: &str,
) -> Vec<u32> {
// Prefer CRDT-based check when the item is known to CRDT.
if crate::pipeline_state::read_typed(story_id).ok().flatten().is_some() {
if crate::pipeline_state::read_typed(story_id)
.ok()
.flatten()
.is_some()
{
return crate::crdt_state::check_archived_deps_crdt(story_id);
}
// Fallback: filesystem.
@@ -146,7 +150,11 @@ mod tests {
"---\nname: Blocked\ndepends_on: [999]\n---\n",
)
.unwrap();
assert!(has_unmet_dependencies(tmp.path(), "2_current", "10_story_blocked"));
assert!(has_unmet_dependencies(
tmp.path(),
"2_current",
"10_story_blocked"
));
}
#[test]
@@ -162,7 +170,11 @@ mod tests {
"---\nname: Ok\ndepends_on: [999]\n---\n",
)
.unwrap();
assert!(!has_unmet_dependencies(tmp.path(), "2_current", "10_story_ok"));
assert!(!has_unmet_dependencies(
tmp.path(),
"2_current",
"10_story_ok"
));
}
#[test]
@@ -171,7 +183,11 @@ mod tests {
let current = tmp.path().join(".huskies/work/2_current");
std::fs::create_dir_all(&current).unwrap();
std::fs::write(current.join("5_story_free.md"), "---\nname: Free\n---\n").unwrap();
assert!(!has_unmet_dependencies(tmp.path(), "2_current", "5_story_free"));
assert!(!has_unmet_dependencies(
tmp.path(),
"2_current",
"5_story_free"
));
}
// ── Bug 503: archived-dep visibility ─────────────────────────────────────
@@ -184,7 +200,11 @@ mod tests {
let archived = tmp.path().join(".huskies/work/6_archived");
std::fs::create_dir_all(&backlog).unwrap();
std::fs::create_dir_all(&archived).unwrap();
std::fs::write(archived.join("500_spike_crdt.md"), "---\nname: CRDT Spike\n---\n").unwrap();
std::fs::write(
archived.join("500_spike_crdt.md"),
"---\nname: CRDT Spike\n---\n",
)
.unwrap();
std::fs::write(
backlog.join("503_story_dependent.md"),
"---\nname: Dependent\ndepends_on: [500]\n---\n",
@@ -84,8 +84,8 @@ impl AgentPool {
#[cfg(test)]
mod tests {
use super::*;
use super::super::super::{AgentPool, composite_key};
use super::*;
// ── check_orphaned_agents return value tests (bug 161) ──────────────────
+11 -6
@@ -1,12 +1,12 @@
//! Agent pool — manages the set of active agents across all pipeline stages.
mod auto_assign;
mod pipeline;
mod start;
mod stop;
mod wait;
mod process;
mod query;
mod start;
mod stop;
mod types;
mod wait;
mod worktree;
#[cfg(test)]
@@ -68,10 +68,15 @@ impl AgentPool {
Err(broadcast::error::RecvError::Lagged(_)) => continue,
};
let (story_id, agent_name) = match &event {
WatcherEvent::RateLimitWarning { story_id, agent_name }
| WatcherEvent::RateLimitHardBlock { story_id, agent_name, .. } => {
(story_id.clone(), agent_name.clone())
WatcherEvent::RateLimitWarning {
story_id,
agent_name,
}
| WatcherEvent::RateLimitHardBlock {
story_id,
agent_name,
..
} => (story_id.clone(), agent_name.clone()),
_ => continue,
};
let key = composite_key(&story_id, &agent_name);
+100 -47
@@ -1,18 +1,15 @@
//! Pipeline advance — moves stories forward through pipeline stages after agent completion.
use crate::config::ProjectConfig;
use crate::io::watcher::WatcherEvent;
use crate::slog;
use crate::slog_error;
use crate::slog_warn;
use crate::io::watcher::WatcherEvent;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
use super::super::super::{
CompletionReport, PipelineStage,
agent_config_stage, pipeline_stage,
};
use super::super::super::{CompletionReport, PipelineStage, agent_config_stage, pipeline_stage};
use super::super::{AgentPool, StoryAgent};
impl AgentPool {
@@ -66,14 +63,16 @@ impl AgentPool {
"[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \
qa: server moving directly to merge."
);
if let Err(e) =
crate::agents::lifecycle::move_story_to_merge(&project_root, story_id)
{
if let Err(e) = crate::agents::lifecycle::move_story_to_merge(
&project_root,
story_id,
) {
slog_error!(
"[pipeline] Failed to move '{story_id}' to 4_merge/: {e}"
);
} else {
self.start_mergemaster_or_block(&project_root, story_id).await;
self.start_mergemaster_or_block(&project_root, story_id)
.await;
}
}
crate::io::story_metadata::QaMode::Agent => {
@@ -81,13 +80,17 @@ impl AgentPool {
"[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \
qa: agent moving to QA."
);
if let Err(e) = crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) {
if let Err(e) =
crate::agents::lifecycle::move_story_to_qa(&project_root, story_id)
{
slog_error!("[pipeline] Failed to move '{story_id}' to 3_qa/: {e}");
} else if let Err(e) = self
.start_agent(&project_root, story_id, Some("qa"), None, None)
.await
{
slog_error!("[pipeline] Failed to start qa agent for '{story_id}': {e}");
slog_error!(
"[pipeline] Failed to start qa agent for '{story_id}': {e}"
);
}
}
crate::io::story_metadata::QaMode::Human => {
@@ -95,7 +98,9 @@ impl AgentPool {
"[pipeline] Coder '{agent_name}' passed gates for '{story_id}'. \
qa: human holding for human review."
);
if let Err(e) = crate::agents::lifecycle::move_story_to_qa(&project_root, story_id) {
if let Err(e) =
crate::agents::lifecycle::move_story_to_qa(&project_root, story_id)
{
slog_error!("[pipeline] Failed to move '{story_id}' to 3_qa/: {e}");
} else {
write_review_hold_to_store(story_id);
@@ -104,7 +109,8 @@ impl AgentPool {
}
} else {
// Increment retry count and check if blocked.
if let Some(reason) = should_block_story(story_id, config.max_retries, "coder") {
if let Some(reason) = should_block_story(story_id, config.max_retries, "coder")
{
// Story has exceeded retry limit — do not restart.
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
@@ -144,13 +150,14 @@ impl AgentPool {
.clone()
.unwrap_or_else(|| project_root.clone());
let cp = coverage_path.clone();
let coverage_result =
tokio::task::spawn_blocking(move || crate::agents::gates::run_coverage_gate(&cp))
.await
.unwrap_or_else(|e| {
slog_warn!("[pipeline] Coverage gate task panicked: {e}");
Ok((false, format!("Coverage gate task panicked: {e}")))
});
let coverage_result = tokio::task::spawn_blocking(move || {
crate::agents::gates::run_coverage_gate(&cp)
})
.await
.unwrap_or_else(|e| {
slog_warn!("[pipeline] Coverage gate task panicked: {e}");
Ok((false, format!("Coverage gate task panicked: {e}")))
});
let (coverage_passed, coverage_output) = match coverage_result {
Ok(pair) => pair,
Err(e) => (false, e),
@@ -184,17 +191,21 @@ impl AgentPool {
"[pipeline] QA passed gates and coverage for '{story_id}'. \
Moving directly to merge."
);
if let Err(e) =
crate::agents::lifecycle::move_story_to_merge(&project_root, story_id)
{
if let Err(e) = crate::agents::lifecycle::move_story_to_merge(
&project_root,
story_id,
) {
slog_error!(
"[pipeline] Failed to move '{story_id}' to 4_merge/: {e}"
);
} else {
self.start_mergemaster_or_block(&project_root, story_id).await;
self.start_mergemaster_or_block(&project_root, story_id)
.await;
}
}
} else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa-coverage") {
} else if let Some(reason) =
should_block_story(story_id, config.max_retries, "qa-coverage")
{
// Story has exceeded retry limit — do not restart.
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
@@ -217,7 +228,8 @@ impl AgentPool {
slog_error!("[pipeline] Failed to restart qa for '{story_id}': {e}");
}
}
} else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa") {
} else if let Some(reason) = should_block_story(story_id, config.max_retries, "qa")
{
// Story has exceeded retry limit — do not restart.
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
@@ -272,13 +284,14 @@ impl AgentPool {
"[pipeline] Mergemaster completed for '{story_id}'. Running post-merge tests on master."
);
let root = project_root.clone();
let test_result =
tokio::task::spawn_blocking(move || crate::agents::gates::run_project_tests(&root))
.await
.unwrap_or_else(|e| {
slog_warn!("[pipeline] Post-merge test task panicked: {e}");
Ok((false, format!("Test task panicked: {e}")))
});
let test_result = tokio::task::spawn_blocking(move || {
crate::agents::gates::run_project_tests(&root)
})
.await
.unwrap_or_else(|e| {
slog_warn!("[pipeline] Post-merge test task panicked: {e}");
Ok((false, format!("Test task panicked: {e}")))
});
let (passed, output) = match test_result {
Ok(pair) => pair,
Err(e) => (false, e),
@@ -309,7 +322,9 @@ impl AgentPool {
slog!(
"[pipeline] Story '{story_id}' done. Worktree preserved for inspection."
);
} else if let Some(reason) = should_block_story(story_id, config.max_retries, "mergemaster") {
} else if let Some(reason) =
should_block_story(story_id, config.max_retries, "mergemaster")
{
// Story has exceeded retry limit — do not restart.
let _ = self.watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: story_id.to_string(),
@@ -564,7 +579,10 @@ mod tests {
)
.unwrap();
crate::db::ensure_content_store();
crate::db::write_content("9909_story_agent_qa", "---\nname: Test\nqa: agent\n---\ntest");
crate::db::write_content(
"9909_story_agent_qa",
"---\nname: Test\nqa: agent\n---\ntest",
);
let pool = AgentPool::new_test(3001);
pool.run_pipeline_advance(
@@ -758,10 +776,26 @@ stage = "qa"
let root = tmp.path();
// Init a bare git repo on master with one empty commit.
Command::new("git").args(["init"]).current_dir(root).output().unwrap();
Command::new("git").args(["config", "user.email", "test@test.com"]).current_dir(root).output().unwrap();
Command::new("git").args(["config", "user.name", "Test"]).current_dir(root).output().unwrap();
Command::new("git").args(["commit", "--allow-empty", "-m", "init"]).current_dir(root).output().unwrap();
Command::new("git")
.args(["init"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["config", "user.email", "test@test.com"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["config", "user.name", "Test"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["commit", "--allow-empty", "-m", "init"])
.current_dir(root)
.output()
.unwrap();
// Create a feature branch that points at master HEAD (zero commits ahead).
// This replicates the incident where the worktree was reset to master.
@@ -775,7 +809,11 @@ stage = "qa"
let current = root.join(".huskies/work/2_current");
fs::create_dir_all(&current).unwrap();
fs::create_dir_all(root.join(".huskies/work/4_merge")).unwrap();
fs::write(current.join("9919_story_no_commits.md"), "---\nname: Test\n---\n").unwrap();
fs::write(
current.join("9919_story_no_commits.md"),
"---\nname: Test\n---\n",
)
.unwrap();
crate::db::ensure_content_store();
crate::db::write_content("9919_story_no_commits", "---\nname: Test\n---\n");
@@ -835,8 +873,8 @@ stage = "qa"
#[tokio::test]
async fn pipeline_advance_picks_up_waiting_qa_stories_after_completion() {
use std::fs;
use super::super::super::auto_assign::is_agent_free;
use std::fs;
let tmp = tempfile::tempdir().unwrap();
let root = tmp.path();
@@ -908,8 +946,7 @@ stage = "qa"
// After pipeline advance, auto_assign should have started QA on story 293.
let agents = pool.agents.lock().unwrap();
let qa_on_293 = agents.values().any(|a| {
a.agent_name == "qa"
&& matches!(a.status, AgentStatus::Pending | AgentStatus::Running)
a.agent_name == "qa" && matches!(a.status, AgentStatus::Pending | AgentStatus::Running)
});
assert!(
qa_on_293,
@@ -940,10 +977,26 @@ stage = "qa"
let root = tmp.path();
// Init a git repo so post-merge tests would pass if they ran.
Command::new("git").args(["init"]).current_dir(root).output().unwrap();
Command::new("git").args(["config", "user.email", "test@test.com"]).current_dir(root).output().unwrap();
Command::new("git").args(["config", "user.name", "Test"]).current_dir(root).output().unwrap();
Command::new("git").args(["commit", "--allow-empty", "-m", "init"]).current_dir(root).output().unwrap();
Command::new("git")
.args(["init"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["config", "user.email", "test@test.com"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["config", "user.name", "Test"])
.current_dir(root)
.output()
.unwrap();
Command::new("git")
.args(["commit", "--allow-empty", "-m", "init"])
.current_dir(root)
.output()
.unwrap();
// Set up pipeline dirs.
fs::create_dir_all(root.join(".huskies/work/5_done")).unwrap();
+12 -5
@@ -1,11 +1,13 @@
//! Agent completion handling — processes exit results and triggers pipeline advancement.
use crate::slog;
use crate::io::watcher::WatcherEvent;
use crate::slog;
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
use super::super::super::{AgentEvent, AgentStatus, CompletionReport, PipelineStage, pipeline_stage};
use super::super::super::{
AgentEvent, AgentStatus, CompletionReport, PipelineStage, pipeline_stage,
};
use super::super::{AgentPool, StoryAgent, composite_key};
use super::advance::spawn_pipeline_advance;
@@ -207,7 +209,10 @@ pub(in crate::agents::pool) async fn run_server_owned_completion(
// hold the build lock while gates try to run.
if let Some(wt_path) = worktree_path.as_ref()
&& let Ok(output) = std::process::Command::new("pgrep")
.args(["-f", &format!("--manifest-path {}/Cargo.toml", wt_path.display())])
.args([
"-f",
&format!("--manifest-path {}/Cargo.toml", wt_path.display()),
])
.output()
{
let pids = String::from_utf8_lossy(&output.stdout);
@@ -216,7 +221,9 @@ pub(in crate::agents::pool) async fn run_server_owned_completion(
crate::slog!(
"[agents] Killing stale cargo process (pid {pid}) for '{story_id}' before running gates"
);
unsafe { libc::kill(pid, libc::SIGKILL); }
unsafe {
libc::kill(pid, libc::SIGKILL);
}
}
}
}
@@ -311,8 +318,8 @@ pub(in crate::agents::pool) async fn run_server_owned_completion(
#[cfg(test)]
mod tests {
use super::*;
use super::super::super::AgentPool;
use super::*;
use crate::agents::{AgentEvent, AgentStatus, CompletionReport};
use std::path::PathBuf;
use std::process::Command;
+6 -5
@@ -85,10 +85,11 @@ impl AgentPool {
let sid = story_id.to_string();
let br = branch.clone();
let merge_result =
tokio::task::spawn_blocking(move || crate::agents::merge::run_squash_merge(&root, &br, &sid))
.await
.map_err(|e| format!("Merge task panicked: {e}"))??;
let merge_result = tokio::task::spawn_blocking(move || {
crate::agents::merge::run_squash_merge(&root, &br, &sid)
})
.await
.map_err(|e| format!("Merge task panicked: {e}"))??;
if !merge_result.success {
return Ok(crate::agents::merge::MergeReport {
@@ -185,8 +186,8 @@ impl AgentPool {
#[cfg(test)]
mod tests {
use super::*;
use super::super::super::AgentPool;
use super::*;
use crate::agents::merge::{MergeJob, MergeJobStatus};
use std::process::Command;
+5 -1
@@ -34,7 +34,11 @@ impl AgentPool {
/// Test helper: inject a child killer into the registry.
#[cfg(test)]
pub fn inject_child_killer(&self, key: &str, killer: Box<dyn portable_pty::ChildKiller + Send + Sync>) {
pub fn inject_child_killer(
&self,
key: &str,
killer: Box<dyn portable_pty::ChildKiller + Send + Sync>,
) {
let mut killers = self.child_killers.lock().unwrap();
killers.insert(key.to_string(), killer);
}
+1 -1
@@ -4,8 +4,8 @@ use std::path::PathBuf;
use tokio::sync::broadcast;
use super::super::{AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage};
use super::types::{agent_info_from_entry, composite_key};
use super::AgentPool;
use super::types::{agent_info_from_entry, composite_key};
impl AgentPool {
/// Return the names of configured agents for `stage` that are not currently
+61 -41
@@ -6,14 +6,15 @@ use std::path::Path;
use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
use super::super::runtime::{
AgentRuntime, ClaudeCodeRuntime, GeminiRuntime, OpenAiRuntime, RuntimeContext,
};
use super::super::{
AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage,
pipeline_stage,
AgentEvent, AgentInfo, AgentStatus, PipelineStage, agent_config_stage, pipeline_stage,
};
use super::types::{PendingGuard, StoryAgent, composite_key};
use super::{AgentPool, auto_assign};
use super::worktree::find_active_story_stage;
use super::super::runtime::{AgentRuntime, ClaudeCodeRuntime, GeminiRuntime, OpenAiRuntime, RuntimeContext};
use super::{AgentPool, auto_assign};
impl AgentPool {
/// Start an agent for a story: load config, create worktree, spawn agent.
@@ -102,7 +103,9 @@ impl AgentPool {
// the auto_assign path (bug 379).
let front_matter_agent: Option<String> = if agent_name.is_none() {
crate::db::read_content(story_id).and_then(|contents| {
crate::io::story_metadata::parse_front_matter(&contents).ok()?.agent
crate::io::story_metadata::parse_front_matter(&contents)
.ok()?
.agent
})
} else {
None
@@ -446,7 +449,10 @@ impl AgentPool {
let run_result = match runtime_name {
"claude-code" => {
let runtime = ClaudeCodeRuntime::new(child_killers_clone.clone(), watcher_tx_clone.clone());
let runtime = ClaudeCodeRuntime::new(
child_killers_clone.clone(),
watcher_tx_clone.clone(),
);
let ctx = RuntimeContext {
story_id: sid.clone(),
agent_name: aname.clone(),
@@ -514,7 +520,10 @@ impl AgentPool {
.find_agent(&aname)
.and_then(|a| a.model.clone());
let record = crate::agents::token_usage::build_record(
&sid, &aname, model, usage.clone(),
&sid,
&aname,
model,
usage.clone(),
);
if let Err(e) = crate::agents::token_usage::append_record(pr, &record) {
slog_error!(
@@ -568,15 +577,13 @@ impl AgentPool {
// re-dispatches a new mergemaster if the story still needs
// merging. This avoids an async call to start_agent inside
// a tokio::spawn (which would require Send).
let _ = watcher_tx_clone.send(
crate::io::watcher::WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: sid.clone(),
action: "reassign".to_string(),
commit_msg: String::new(),
from_stage: None,
},
);
let _ = watcher_tx_clone.send(crate::io::watcher::WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: sid.clone(),
action: "reassign".to_string(),
commit_msg: String::new(),
from_stage: None,
});
} else {
// Server-owned completion: run acceptance gates automatically
// when the agent process exits normally.
@@ -712,7 +719,9 @@ stage = "coder"
pool.inject_test_agent("story-1", "coder-1", AgentStatus::Running);
pool.inject_test_agent("story-2", "coder-2", AgentStatus::Pending);
let result = pool.start_agent(tmp.path(), "story-3", None, None, None).await;
let result = pool
.start_agent(tmp.path(), "story-3", None, None, None)
.await;
assert!(result.is_err());
let err = result.unwrap_err();
assert!(
@@ -744,7 +753,9 @@ stage = "coder"
let pool = AgentPool::new_test(3001);
pool.inject_test_agent("story-1", "coder-1", AgentStatus::Running);
let result = pool.start_agent(tmp.path(), "story-3", None, None, None).await;
let result = pool
.start_agent(tmp.path(), "story-3", None, None, None)
.await;
assert!(result.is_err());
let err = result.unwrap_err();
@@ -782,7 +793,9 @@ stage = "coder"
let pool = AgentPool::new_test(3001);
let result = pool.start_agent(tmp.path(), "story-5", None, None, None).await;
let result = pool
.start_agent(tmp.path(), "story-5", None, None, None)
.await;
match result {
Ok(_) => {}
Err(e) => {
@@ -843,7 +856,9 @@ stage = "coder"
let pool = AgentPool::new_test(3001);
pool.inject_test_agent("story-a", "qa", AgentStatus::Running);
let result = pool.start_agent(root, "story-b", Some("qa"), None, None).await;
let result = pool
.start_agent(root, "story-b", Some("qa"), None, None)
.await;
assert!(
result.is_err(),
@@ -870,7 +885,9 @@ stage = "coder"
let pool = AgentPool::new_test(3001);
pool.inject_test_agent("story-a", "qa", AgentStatus::Completed);
let result = pool.start_agent(root, "story-b", Some("qa"), None, None).await;
let result = pool
.start_agent(root, "story-b", Some("qa"), None, None)
.await;
if let Err(ref e) = result {
assert!(
@@ -962,7 +979,9 @@ stage = "coder"
let pool = AgentPool::new_test(3099);
pool.inject_test_agent("story-x", "qa", AgentStatus::Running);
let result = pool.start_agent(root, "story-y", Some("qa"), None, None).await;
let result = pool
.start_agent(root, "story-y", Some("qa"), None, None)
.await;
assert!(result.is_err());
let err = result.unwrap_err();
@@ -1247,11 +1266,7 @@ stage = "coder"
)
.unwrap();
crate::db::ensure_content_store();
crate::db::write_item_with_content(
"310_story_foo",
"2_current",
"---\nname: Foo\n---\n",
);
crate::db::write_item_with_content("310_story_foo", "2_current", "---\nname: Foo\n---\n");
let pool = AgentPool::new_test(3099);
let result = pool
@@ -1323,11 +1338,7 @@ stage = "coder"
)
.unwrap();
crate::db::ensure_content_store();
crate::db::write_item_with_content(
"55_story_baz",
"4_merge",
"---\nname: Baz\n---\n",
);
crate::db::write_item_with_content("55_story_baz", "4_merge", "---\nname: Baz\n---\n");
let pool = AgentPool::new_test(3099);
let result = pool
@@ -1459,7 +1470,13 @@ stage = "coder"
let pool = AgentPool::new_test(3098);
let result = pool
.start_agent(root, "502_story_split_brain", Some("mergemaster"), None, None)
.start_agent(
root,
"502_story_split_brain",
Some("mergemaster"),
None,
None,
)
.await;
// Stage check must not reject mergemaster.
@@ -1475,11 +1492,15 @@ stage = "coder"
// Before the fix, line 53 of start.rs would have demoted it to
// 2_current/ via move_story_to_current finding the 1_backlog shadow.
assert!(
sk_dir.join("work/4_merge/502_story_split_brain.md").exists(),
sk_dir
.join("work/4_merge/502_story_split_brain.md")
.exists(),
"story must still be in 4_merge/ after start_agent(mergemaster, ...)"
);
assert!(
!sk_dir.join("work/2_current/502_story_split_brain.md").exists(),
!sk_dir
.join("work/2_current/502_story_split_brain.md")
.exists(),
"story must NOT have been demoted to 2_current/ — that's bug 502"
);
}
@@ -1564,11 +1585,7 @@ stage = "coder"
)
.unwrap();
let story_content = "---\nname: Test Story\nagent: coder-opus\n---\n# Story 368\n";
std::fs::write(
backlog.join("368_story_test.md"),
story_content,
)
.unwrap();
std::fs::write(backlog.join("368_story_test.md"), story_content).unwrap();
// Also write to the filesystem current dir and content store so that
// start_agent reads the correct front matter even when another test has
// left a stale entry for "368_story_test" in the global CRDT.
@@ -1583,7 +1600,10 @@ stage = "coder"
let result = pool
.start_agent(tmp.path(), "368_story_test", None, None, None)
.await;
assert!(result.is_err(), "expected error when preferred agent is busy");
assert!(
result.is_err(),
"expected error when preferred agent is busy"
);
let err = result.unwrap_err();
assert!(
err.contains("coder-opus"),
+1 -1
@@ -4,8 +4,8 @@ use crate::slog_error;
use std::path::Path;
use super::super::{AgentEvent, AgentStatus};
use super::types::composite_key;
use super::AgentPool;
use super::types::composite_key;
impl AgentPool {
/// Stop a running agent. Worktree is preserved for inspection.
+1 -1
@@ -5,8 +5,8 @@ use std::sync::{Arc, Mutex};
use tokio::sync::broadcast;
use super::super::{AgentEvent, AgentStatus, CompletionReport};
use super::types::{StoryAgent, composite_key};
use super::AgentPool;
use super::types::{StoryAgent, composite_key};
impl AgentPool {
/// Test helper: inject a pre-built agent entry so unit tests can exercise
+1 -1
@@ -1,7 +1,7 @@
//! Agent wait — blocks until an agent reaches a terminal state with optional timeout.
use super::super::{AgentEvent, AgentInfo, AgentStatus};
use super::types::{agent_info_from_entry, composite_key};
use super::AgentPool;
use super::types::{agent_info_from_entry, composite_key};
use tokio::sync::broadcast;
+11 -17
@@ -23,7 +23,10 @@ impl AgentPool {
/// Return the active pipeline stage directory name for `story_id`, or `None` if the
/// story is not in any active stage (`2_current/`, `3_qa/`, `4_merge/`).
pub(super) fn find_active_story_stage(_project_root: &Path, story_id: &str) -> Option<&'static str> {
pub(super) fn find_active_story_stage(
_project_root: &Path,
story_id: &str,
) -> Option<&'static str> {
if let Ok(Some(item)) = crate::pipeline_state::read_typed(story_id)
&& item.stage.is_active()
{
@@ -39,11 +42,7 @@ mod tests {
#[test]
fn find_active_story_stage_detects_current() {
crate::db::ensure_content_store();
crate::db::write_item_with_content(
"10_story_test",
"2_current",
"---\nname: Test\n---\n",
);
crate::db::write_item_with_content("10_story_test", "2_current", "---\nname: Test\n---\n");
let tmp = tempfile::tempdir().unwrap();
assert_eq!(
find_active_story_stage(tmp.path(), "10_story_test"),
@@ -54,23 +53,18 @@ mod tests {
#[test]
fn find_active_story_stage_detects_qa() {
crate::db::ensure_content_store();
crate::db::write_item_with_content(
"11_story_test",
"3_qa",
"---\nname: Test\n---\n",
);
crate::db::write_item_with_content("11_story_test", "3_qa", "---\nname: Test\n---\n");
let tmp = tempfile::tempdir().unwrap();
assert_eq!(find_active_story_stage(tmp.path(), "11_story_test"), Some("3_qa"));
assert_eq!(
find_active_story_stage(tmp.path(), "11_story_test"),
Some("3_qa")
);
}
#[test]
fn find_active_story_stage_detects_merge() {
crate::db::ensure_content_store();
crate::db::write_item_with_content(
"12_story_test",
"4_merge",
"---\nname: Test\n---\n",
);
crate::db::write_item_with_content("12_story_test", "4_merge", "---\nname: Test\n---\n");
let tmp = tempfile::tempdir().unwrap();
assert_eq!(
find_active_story_stage(tmp.path(), "12_story_test"),
+20 -18
@@ -237,10 +237,23 @@ fn run_agent_pty_blocking(
story_id.replace(['_', '.'], "-")
);
let session_count = std::fs::read_dir(&session_dir)
.map(|d| d.filter(|e| e.as_ref().map(|e| e.path().extension().is_some_and(|ext| ext == "jsonl")).unwrap_or(false)).count())
.map(|d| {
d.filter(|e| {
e.as_ref()
.map(|e| e.path().extension().is_some_and(|ext| ext == "jsonl"))
.unwrap_or(false)
})
.count()
})
.unwrap_or(0);
let session_bytes: u64 = std::fs::read_dir(&session_dir)
.map(|d| d.filter_map(|e| e.ok()).filter(|e| e.path().extension().is_some_and(|ext| ext == "jsonl")).filter_map(|e| e.metadata().ok()).map(|m| m.len()).sum())
.map(|d| {
d.filter_map(|e| e.ok())
.filter(|e| e.path().extension().is_some_and(|ext| ext == "jsonl"))
.filter_map(|e| e.metadata().ok())
.map(|m| m.len())
.sum()
})
.unwrap_or(0);
slog!(
@@ -373,12 +386,7 @@ fn run_agent_pty_blocking(
"stream_event" => {
if let Some(event) = json.get("event") {
handle_agent_stream_event(
event,
story_id,
agent_name,
tx,
event_log,
log_writer,
event, story_id, agent_name, tx, event_log, log_writer,
);
}
}
@@ -409,8 +417,7 @@ fn run_agent_pty_blocking(
t
}
None => {
let default = chrono::Utc::now()
+ chrono::Duration::minutes(5);
let default = chrono::Utc::now() + chrono::Duration::minutes(5);
slog!(
"[agent:{story_id}:{agent_name}] API rate limit hard block \
(status={status}); no reset_at in rate_limit_info, \
@@ -469,14 +476,10 @@ fn run_agent_pty_blocking(
let wait_result = child.wait();
match &wait_result {
Ok(status) => {
slog!(
"[agent:{story_id}:{agent_name}] Child exited: {status:?}"
);
slog!("[agent:{story_id}:{agent_name}] Child exited: {status:?}");
}
Err(e) => {
slog!(
"[agent:{story_id}:{agent_name}] Child wait error: {e}"
);
slog!("[agent:{story_id}:{agent_name}] Child wait error: {e}");
}
}
@@ -709,8 +712,7 @@ mod tests {
let tmp = tempfile::tempdir().unwrap();
let root = tmp.path();
let log_writer =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-emit").unwrap();
let log_writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-emit").unwrap();
let log_mutex = Mutex::new(log_writer);
let (tx, _rx) = broadcast::channel::<AgentEvent>(64);
+20 -21
@@ -4,7 +4,7 @@ use std::sync::{Arc, Mutex};
use reqwest::Client;
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use serde_json::{Value, json};
use tokio::sync::broadcast;
use crate::agent_log::AgentLogWriter;
@@ -135,14 +135,15 @@ impl AgentRuntime for GeminiRuntime {
});
}
slog!("[gemini] Turn {turn} for {}:{}", ctx.story_id, ctx.agent_name);
let request_body = build_generate_content_request(
&system_instruction,
&contents,
&gemini_tools,
slog!(
"[gemini] Turn {turn} for {}:{}",
ctx.story_id,
ctx.agent_name
);
let request_body =
build_generate_content_request(&system_instruction, &contents, &gemini_tools);
let url = format!(
"https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}"
);
@@ -201,8 +202,7 @@ impl AgentRuntime for GeminiRuntime {
text_parts.push(text.to_string());
}
if let Some(fc) = part.get("functionCall")
&& let (Some(name), Some(args)) =
(fc["name"].as_str(), fc.get("args"))
&& let (Some(name), Some(args)) = (fc["name"].as_str(), fc.get("args"))
{
function_calls.push(GeminiFunctionCall {
name: name.to_string(),
@@ -263,18 +263,14 @@ impl AgentRuntime for GeminiRuntime {
text: format!("\n[Tool call: {}]\n", fc.name),
});
let tool_result =
call_mcp_tool(&client, &mcp_base, &fc.name, &fc.args).await;
let tool_result = call_mcp_tool(&client, &mcp_base, &fc.name, &fc.args).await;
let response_value = match &tool_result {
Ok(result) => {
emit(AgentEvent::Output {
story_id: ctx.story_id.clone(),
agent_name: ctx.agent_name.clone(),
text: format!(
"[Tool result: {} chars]\n",
result.len()
),
text: format!("[Tool result: {} chars]\n", result.len()),
});
json!({ "result": result })
}
@@ -453,7 +449,10 @@ async fn fetch_and_convert_mcp_tools(
});
}
slog!("[gemini] Loaded {} MCP tools as function declarations", declarations.len());
slog!(
"[gemini] Loaded {} MCP tools as function declarations",
declarations.len()
);
Ok(declarations)
}
@@ -560,10 +559,7 @@ async fn call_mcp_tool(
// MCP tools/call returns { result: { content: [{ type: "text", text: "..." }] } }
let content = &body["result"]["content"];
if let Some(arr) = content.as_array() {
let texts: Vec<&str> = arr
.iter()
.filter_map(|c| c["text"].as_str())
.collect();
let texts: Vec<&str> = arr.iter().filter_map(|c| c["text"].as_str()).collect();
if !texts.is_empty() {
return Ok(texts.join("\n"));
}
@@ -747,7 +743,10 @@ mod tests {
let body = build_generate_content_request(&system, &contents, &tools);
assert!(body["tools"][0]["functionDeclarations"].is_array());
assert_eq!(body["tools"][0]["functionDeclarations"][0]["name"], "my_tool");
assert_eq!(
body["tools"][0]["functionDeclarations"][0]["name"],
"my_tool"
);
}
#[test]
+2 -2
@@ -151,8 +151,8 @@ mod tests {
#[test]
fn claude_code_runtime_get_status_returns_idle() {
use std::collections::HashMap;
use crate::io::watcher::WatcherEvent;
use std::collections::HashMap;
let killers = Arc::new(Mutex::new(HashMap::new()));
let (watcher_tx, _) = broadcast::channel::<WatcherEvent>(16);
let runtime = ClaudeCodeRuntime::new(killers, watcher_tx);
@@ -161,8 +161,8 @@ mod tests {
#[test]
fn claude_code_runtime_stream_events_empty() {
use std::collections::HashMap;
use crate::io::watcher::WatcherEvent;
use std::collections::HashMap;
let killers = Arc::new(Mutex::new(HashMap::new()));
let (watcher_tx, _) = broadcast::channel::<WatcherEvent>(16);
let runtime = ClaudeCodeRuntime::new(killers, watcher_tx);
+2 -5
@@ -3,7 +3,7 @@ use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use reqwest::Client;
use serde_json::{json, Value};
use serde_json::{Value, json};
use tokio::sync::broadcast;
use crate::agent_log::AgentLogWriter;
@@ -471,10 +471,7 @@ async fn call_mcp_tool(
// MCP tools/call returns { result: { content: [{ type: "text", text: "..." }] } }
let content = &body["result"]["content"];
if let Some(arr) = content.as_array() {
let texts: Vec<&str> = arr
.iter()
.filter_map(|c| c["text"].as_str())
.collect();
let texts: Vec<&str> = arr.iter().filter_map(|c| c["text"].as_str()).collect();
if !texts.is_empty() {
return Ok(texts.join("\n"));
}
+13 -3
@@ -69,7 +69,10 @@ mod tests {
// "timmy ambient on" — bot name mentioned but not @-prefixed, so
// is_addressed is false; strip_bot_mention still strips "timmy ".
let result = try_handle_command(&dispatch, "timmy ambient on");
assert!(result.is_some(), "ambient on should fire even when is_addressed=false");
assert!(
result.is_some(),
"ambient on should fire even when is_addressed=false"
);
assert!(
ambient_rooms.lock().unwrap().contains(&room_id),
"room should be in ambient_rooms after ambient on"
@@ -92,7 +95,10 @@ mod tests {
};
// Bare "ambient off" in an ambient room (is_addressed=false).
let result = try_handle_command(&dispatch, "ambient off");
assert!(result.is_some(), "bare ambient off should be handled without LLM");
assert!(
result.is_some(),
"bare ambient off should be handled without LLM"
);
let output = result.unwrap();
assert!(
output.contains("Ambient mode off"),
@@ -161,7 +167,11 @@ mod tests {
#[test]
fn ambient_invalid_args_returns_usage() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy ambient");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy ambient",
);
let output = result.unwrap();
assert!(
output.contains("Usage"),
+45 -14
@@ -1,8 +1,8 @@
//! Handler for the `backlog` command — shows only Stage::Backlog items.
use crate::pipeline_state::{PipelineItem, Stage};
use super::CommandContext;
use super::status::{story_short_label, unmet_deps_from_items};
use crate::pipeline_state::{PipelineItem, Stage};
pub(super) fn handle_backlog(_ctx: &CommandContext) -> Option<String> {
Some(build_backlog_output())
@@ -94,16 +94,29 @@ mod tests {
make_item("30_story_in_qa", "In QA", Stage::Qa),
];
let output = build_backlog_from_items(&items);
assert!(output.contains("In Backlog"), "should show backlog item: {output}");
assert!(!output.contains("In Progress"), "should not show coding items: {output}");
assert!(!output.contains("In QA"), "should not show QA items: {output}");
assert!(
output.contains("In Backlog"),
"should show backlog item: {output}"
);
assert!(
!output.contains("In Progress"),
"should not show coding items: {output}"
);
assert!(
!output.contains("In QA"),
"should not show QA items: {output}"
);
}
// -- AC: shows number, type, name -----------------------------------------
#[test]
fn backlog_shows_number_type_and_name() {
let items = vec![make_item("42_story_my_feature", "My Feature", Stage::Backlog)];
let items = vec![make_item(
"42_story_my_feature",
"My Feature",
Stage::Backlog,
)];
let output = build_backlog_from_items(&items);
assert!(
output.contains("42 [story] — My Feature"),
@@ -116,7 +129,12 @@ mod tests {
#[test]
fn backlog_shows_waiting_on_for_unmet_deps() {
let items = vec![
make_item_with_deps("10_story_waiting", "Waiting Story", Stage::Backlog, vec![999]),
make_item_with_deps(
"10_story_waiting",
"Waiting Story",
Stage::Backlog,
vec![999],
),
make_item("999_story_dep", "Dep Story", Stage::Backlog),
];
let output = build_backlog_from_items(&items);
@@ -150,16 +168,17 @@ mod tests {
fn backlog_no_waiting_on_when_no_deps() {
let items = vec![make_item("5_story_nodeps", "No Deps", Stage::Backlog)];
let output = build_backlog_from_items(&items);
assert!(!output.contains("waiting on"), "no dep suffix when no deps: {output}");
assert!(
!output.contains("waiting on"),
"no dep suffix when no deps: {output}"
);
}
// -- AC: command is registered in the registry ----------------------------
#[test]
fn backlog_command_in_registry() {
let found = super::super::commands()
.iter()
.any(|c| c.name == "backlog");
let found = super::super::commands().iter().any(|c| c.name == "backlog");
assert!(found, "backlog must be registered in commands()");
}
@@ -171,7 +190,10 @@ mod tests {
"@timmy help",
);
let output = result.unwrap_or_default();
assert!(output.contains("backlog"), "backlog should appear in help output: {output}");
assert!(
output.contains("backlog"),
"backlog should appear in help output: {output}"
);
}
#[test]
@@ -181,7 +203,10 @@ mod tests {
"@timmy:homeserver.local",
"@timmy backlog",
);
assert!(result.is_some(), "backlog command should match and return Some");
assert!(
result.is_some(),
"backlog command should match and return Some"
);
}
#[test]
@@ -192,7 +217,10 @@ mod tests {
"@timmy backlog",
);
let output = result.unwrap_or_default();
assert!(output.contains("Backlog"), "backlog output should contain Backlog header: {output}");
assert!(
output.contains("Backlog"),
"backlog output should contain Backlog header: {output}"
);
}
// -- empty backlog --------------------------------------------------------
@@ -201,6 +229,9 @@ mod tests {
fn backlog_shows_none_when_empty() {
let items = vec![make_item("1_story_done", "Done", Stage::Coding)];
let output = build_backlog_from_items(&items);
assert!(output.contains("*(none)*"), "should show none when no backlog items: {output}");
assert!(
output.contains("*(none)*"),
"should show none when no backlog items: {output}"
);
}
}
+112 -40
@@ -2,8 +2,8 @@
use std::collections::HashMap;
use super::status::story_short_label;
use super::CommandContext;
use super::status::story_short_label;
/// Show token spend: 24h total, top 5 stories, agent-type breakdown, and
/// all-time total.
@@ -102,7 +102,10 @@ mod tests {
use crate::agents::AgentPool;
use std::sync::Arc;
fn write_token_records(root: &std::path::Path, records: &[crate::agents::token_usage::TokenUsageRecord]) {
fn write_token_records(
root: &std::path::Path,
records: &[crate::agents::token_usage::TokenUsageRecord],
) {
for r in records {
crate::agents::token_usage::append_record(root, r).unwrap();
}
@@ -118,7 +121,12 @@ mod tests {
}
}
fn make_record(story_id: &str, agent_name: &str, cost: f64, hours_ago: i64) -> crate::agents::token_usage::TokenUsageRecord {
fn make_record(
story_id: &str,
agent_name: &str,
cost: f64,
hours_ago: i64,
) -> crate::agents::token_usage::TokenUsageRecord {
let ts = (chrono::Utc::now() - chrono::Duration::hours(hours_ago)).to_rfc3339();
crate::agents::token_usage::TokenUsageRecord {
story_id: story_id.to_string(),
@@ -157,55 +165,89 @@ mod tests {
#[test]
fn cost_command_appears_in_help() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy help",
);
let output = result.unwrap();
assert!(output.contains("cost"), "help should list cost command: {output}");
assert!(
output.contains("cost"),
"help should list cost command: {output}"
);
}
#[test]
fn cost_command_no_records() {
let tmp = tempfile::TempDir::new().unwrap();
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("No usage records found"), "should show empty message: {output}");
assert!(
output.contains("No usage records found"),
"should show empty message: {output}"
);
}
#[test]
fn cost_command_shows_24h_total() {
let tmp = tempfile::TempDir::new().unwrap();
write_token_records(tmp.path(), &[
make_record("42_story_foo", "coder-1", 1.50, 2),
make_record("42_story_foo", "coder-1", 0.50, 5),
]);
write_token_records(
tmp.path(),
&[
make_record("42_story_foo", "coder-1", 1.50, 2),
make_record("42_story_foo", "coder-1", 0.50, 5),
],
);
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("**Last 24h:** $2.00"), "should show 24h total: {output}");
assert!(
output.contains("**Last 24h:** $2.00"),
"should show 24h total: {output}"
);
}
#[test]
fn cost_command_excludes_old_from_24h() {
let tmp = tempfile::TempDir::new().unwrap();
write_token_records(tmp.path(), &[
make_record("42_story_foo", "coder-1", 1.00, 2), // within 24h
make_record("43_story_bar", "coder-1", 5.00, 48), // older
]);
write_token_records(
tmp.path(),
&[
make_record("42_story_foo", "coder-1", 1.00, 2), // within 24h
make_record("43_story_bar", "coder-1", 5.00, 48), // older
],
);
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("**Last 24h:** $1.00"), "should only count recent: {output}");
assert!(output.contains("**All-time:** $6.00"), "all-time should include everything: {output}");
assert!(
output.contains("**Last 24h:** $1.00"),
"should only count recent: {output}"
);
assert!(
output.contains("**All-time:** $6.00"),
"all-time should include everything: {output}"
);
}
#[test]
fn cost_command_shows_top_stories() {
let tmp = tempfile::TempDir::new().unwrap();
write_token_records(tmp.path(), &[
make_record("42_story_foo", "coder-1", 3.00, 1),
make_record("43_story_bar", "coder-1", 1.00, 1),
make_record("42_story_foo", "qa-1", 2.00, 1),
]);
write_token_records(
tmp.path(),
&[
make_record("42_story_foo", "coder-1", 3.00, 1),
make_record("43_story_bar", "coder-1", 1.00, 1),
make_record("42_story_foo", "qa-1", 2.00, 1),
],
);
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("Top Stories"), "should have top stories section: {output}");
assert!(
output.contains("Top Stories"),
"should have top stories section: {output}"
);
// Story 42 ($5.00) should appear before story 43 ($1.00)
let pos_42 = output.find("42").unwrap();
let pos_43 = output.find("43").unwrap();
assert!(pos_42 < pos_43, "story 42 should appear before 43 (sorted by cost): {output}");
assert!(
pos_42 < pos_43,
"story 42 should appear before 43 (sorted by cost): {output}"
);
}
#[test]
@@ -213,45 +255,75 @@ mod tests {
let tmp = tempfile::TempDir::new().unwrap();
let mut records = Vec::new();
for i in 1..=7 {
records.push(make_record(&format!("{i}_story_s{i}"), "coder-1", i as f64, 1));
records.push(make_record(
&format!("{i}_story_s{i}"),
"coder-1",
i as f64,
1,
));
}
write_token_records(tmp.path(), &records);
let output = cost_cmd_with_root(tmp.path()).unwrap();
// The top 5 most expensive are stories 7,6,5,4,3. Stories 1 and 2 should be excluded.
let top_section = output.split("**By Agent Type").next().unwrap();
assert!(!top_section.contains("• 1 —"), "story 1 should not be in top 5: {output}");
assert!(!top_section.contains("2"), "story 2 should not be in top 5: {output}");
assert!(
!top_section.contains("1"),
"story 1 should not be in top 5: {output}"
);
assert!(
!top_section.contains("• 2 —"),
"story 2 should not be in top 5: {output}"
);
}
#[test]
fn cost_command_shows_agent_type_breakdown() {
let tmp = tempfile::TempDir::new().unwrap();
write_token_records(tmp.path(), &[
make_record("42_story_foo", "coder-1", 2.00, 1),
make_record("42_story_foo", "qa-1", 1.50, 1),
make_record("42_story_foo", "mergemaster", 0.50, 1),
]);
write_token_records(
tmp.path(),
&[
make_record("42_story_foo", "coder-1", 2.00, 1),
make_record("42_story_foo", "qa-1", 1.50, 1),
make_record("42_story_foo", "mergemaster", 0.50, 1),
],
);
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("By Agent Type"), "should have agent type section: {output}");
assert!(
output.contains("By Agent Type"),
"should have agent type section: {output}"
);
assert!(output.contains("coder"), "should show coder type: {output}");
assert!(output.contains("qa"), "should show qa type: {output}");
assert!(output.contains("mergemaster"), "should show mergemaster type: {output}");
assert!(
output.contains("mergemaster"),
"should show mergemaster type: {output}"
);
}
#[test]
fn cost_command_shows_all_time_total() {
let tmp = tempfile::TempDir::new().unwrap();
write_token_records(tmp.path(), &[
make_record("42_story_foo", "coder-1", 1.00, 2),
make_record("43_story_bar", "coder-1", 9.00, 100),
]);
write_token_records(
tmp.path(),
&[
make_record("42_story_foo", "coder-1", 1.00, 2),
make_record("43_story_bar", "coder-1", 9.00, 100),
],
);
let output = cost_cmd_with_root(tmp.path()).unwrap();
assert!(output.contains("**All-time:** $10.00"), "should show all-time total: {output}");
assert!(
output.contains("**All-time:** $10.00"),
"should show all-time total: {output}"
);
}
#[test]
fn cost_command_case_insensitive() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy COST");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy COST",
);
assert!(result.is_some(), "COST should match case-insensitively");
}
+72 -18
@@ -59,12 +59,16 @@ fn read_cached_coverage(project_root: &std::path::Path) -> String {
fn read_coverage_report(path: &std::path::Path) -> String {
let content = match std::fs::read_to_string(path) {
Ok(c) => c,
Err(e) => return format!("**Coverage (cached)**\n\nError reading `.coverage_report.json`: {e}"),
Err(e) => {
return format!("**Coverage (cached)**\n\nError reading `.coverage_report.json`: {e}");
}
};
let report: CoverageReport = match serde_json::from_str(&content) {
Ok(r) => r,
Err(e) => return format!("**Coverage (cached)**\n\nFailed to parse `.coverage_report.json`: {e}"),
Err(e) => {
return format!("**Coverage (cached)**\n\nFailed to parse `.coverage_report.json`: {e}");
}
};
format_coverage_report(&report)
@@ -81,13 +85,22 @@ fn format_coverage_report(report: &CoverageReport) -> String {
// Top 5 lowest-covered files (already sorted ascending in the JSON, but sort
// defensively here so the display is correct even if the file was hand-edited).
let mut sorted: Vec<&FileCoverage> = report.files.iter().collect();
sorted.sort_by(|a, b| a.coverage.partial_cmp(&b.coverage).unwrap_or(std::cmp::Ordering::Equal));
sorted.sort_by(|a, b| {
a.coverage
.partial_cmp(&b.coverage)
.unwrap_or(std::cmp::Ordering::Equal)
});
let targets: Vec<&FileCoverage> = sorted.into_iter().take(5).collect();
if !targets.is_empty() {
out.push_str("\n**Top 5 files needing coverage:**\n");
for (i, file) in targets.iter().enumerate() {
out.push_str(&format!("{}. {}{:.1}%\n", i + 1, file.path, file.coverage));
out.push_str(&format!(
"{}. {} — {:.1}%\n",
i + 1,
file.path,
file.coverage
));
}
}
@@ -162,8 +175,13 @@ fn run_coverage(project_root: &std::path::Path) -> String {
// Replace the "cached" label with "fresh".
result = result.replacen("Coverage (cached)", "Coverage (fresh)", 1);
// Replace the cached hint with a pass/fail indicator.
let pass_indicator = if out.status.success() { "PASS" } else { "FAIL: coverage below threshold" };
result = result.replacen("*Run `coverage run` for fresh results.*", pass_indicator, 1);
let pass_indicator = if out.status.success() {
"PASS"
} else {
"FAIL: coverage below threshold"
};
result =
result.replacen("*Run `coverage run` for fresh results.*", pass_indicator, 1);
return result;
}
@@ -322,9 +340,18 @@ mod tests {
let output = handle_coverage(&ctx).unwrap();
assert!(output.contains("72.5"), "should include overall: {output}");
assert!(output.contains("60.0"), "should include threshold: {output}");
assert!(output.contains("15.0"), "should include lowest-covered file pct: {output}");
assert!(output.contains("server/src/low.rs"), "should include lowest-covered file path: {output}");
assert!(
output.contains("60.0"),
"should include threshold: {output}"
);
assert!(
output.contains("15.0"),
"should include lowest-covered file pct: {output}"
);
assert!(
output.contains("server/src/low.rs"),
"should include lowest-covered file path: {output}"
);
}
#[test]
@@ -348,9 +375,18 @@ mod tests {
let output = handle_coverage(&ctx).unwrap();
assert!(output.contains("a.rs"), "should show lowest file: {output}");
assert!(output.contains("e.rs"), "should show 5th lowest file: {output}");
assert!(!output.contains("f.rs"), "should not show 6th file: {output}");
assert!(!output.contains("g.rs"), "should not show 7th file: {output}");
assert!(
output.contains("e.rs"),
"should show 5th lowest file: {output}"
);
assert!(
!output.contains("f.rs"),
"should not show 6th file: {output}"
);
assert!(
!output.contains("g.rs"),
"should not show 7th file: {output}"
);
}
#[test]
@@ -466,15 +502,24 @@ mod tests {
overall: 66.25,
threshold: 60.0,
files: vec![
FileCoverage { path: "a.rs".to_string(), coverage: 10.0 },
FileCoverage { path: "b.rs".to_string(), coverage: 80.0 },
FileCoverage {
path: "a.rs".to_string(),
coverage: 10.0,
},
FileCoverage {
path: "b.rs".to_string(),
coverage: 80.0,
},
],
};
let result = format_coverage_report(&report);
assert!(result.contains("66.2"), "should show overall: {result}");
assert!(result.contains("60.0"), "should show threshold: {result}");
assert!(result.contains("a.rs"), "should show lowest file: {result}");
assert!(result.contains("10.0"), "should show lowest file pct: {result}");
assert!(
result.contains("10.0"),
"should show lowest file pct: {result}"
);
}
#[test]
@@ -490,9 +535,18 @@ Frontend line coverage: 70.0%\n\
PASS: Coverage 66.25% meets threshold 60.00%\n\
";
let result = parse_coverage_output(sample, true);
assert!(result.contains("62.5"), "should include Rust coverage: {result}");
assert!(result.contains("70.0"), "should include Frontend coverage: {result}");
assert!(result.contains("66.25"), "should include Overall coverage: {result}");
assert!(
result.contains("62.5"),
"should include Rust coverage: {result}"
);
assert!(
result.contains("70.0"),
"should include Frontend coverage: {result}"
);
assert!(
result.contains("66.25"),
"should include Overall coverage: {result}"
);
assert!(result.contains("PASS"), "should indicate PASS: {result}");
}
+14 -10
@@ -128,14 +128,20 @@ mod tests {
"@timmy help",
);
let output = result.unwrap();
assert!(output.contains("depends"), "help should list depends command: {output}");
assert!(
output.contains("depends"),
"help should list depends command: {output}"
);
}
#[test]
fn depends_no_args_returns_usage() {
let tmp = tempfile::TempDir::new().unwrap();
let output = depends_cmd_with_root(tmp.path(), "").unwrap();
assert!(output.contains("Usage"), "no args should show usage: {output}");
assert!(
output.contains("Usage"),
"no args should show usage: {output}"
);
}
#[test]
@@ -188,10 +194,9 @@ mod tests {
output.contains("477") && output.contains("478"),
"response should mention dep numbers: {output}"
);
let contents = std::fs::read_to_string(
tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"),
)
.unwrap();
let contents =
std::fs::read_to_string(tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"))
.unwrap();
assert!(
contents.contains("depends_on: [477, 478]"),
"file should have depends_on set: {contents}"
@@ -212,10 +217,9 @@ mod tests {
output.contains("Cleared"),
"should confirm clearing deps: {output}"
);
let contents = std::fs::read_to_string(
tmp.path().join(".huskies/work/2_current/10_story_bar.md"),
)
.unwrap();
let contents =
std::fs::read_to_string(tmp.path().join(".huskies/work/2_current/10_story_bar.md"))
.unwrap();
assert!(
!contents.contains("depends_on"),
"file should have depends_on cleared: {contents}"
+14 -3
@@ -100,9 +100,16 @@ mod tests {
#[test]
fn git_command_appears_in_help() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy help",
);
let output = result.unwrap();
assert!(output.contains("git"), "help should list git command: {output}");
assert!(
output.contains("git"),
"help should list git command: {output}"
);
}
#[test]
@@ -197,7 +204,11 @@ mod tests {
#[test]
fn git_command_case_insensitive() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy GIT");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy GIT",
);
assert!(result.is_some(), "GIT should match case-insensitively");
}
}
+21 -7
@@ -1,6 +1,6 @@
//! Handler for the `help` command.
use super::{commands, CommandContext};
use super::{CommandContext, commands};
pub(super) fn handle_help(ctx: &CommandContext) -> Option<String> {
let mut output = format!("**{} Commands**\n\n", ctx.bot_name);
@@ -14,7 +14,7 @@ pub(super) fn handle_help(ctx: &CommandContext) -> Option<String> {
#[cfg(test)]
mod tests {
use super::super::tests::{try_cmd_addressed, commands};
use super::super::tests::{commands, try_cmd_addressed};
#[test]
fn help_command_matches() {
@@ -74,7 +74,10 @@ mod tests {
fn help_output_includes_status() {
let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let output = result.unwrap();
assert!(output.contains("status"), "help should list status command: {output}");
assert!(
output.contains("status"),
"help should list status command: {output}"
);
}
#[test]
@@ -86,7 +89,9 @@ mod tests {
.iter()
.map(|c| {
let marker = format!("**{}**", c.name);
let pos = output.find(&marker).expect("command must appear in help as **name**");
let pos = output
.find(&marker)
.expect("command must appear in help as **name**");
(pos, c.name)
})
.collect();
@@ -94,20 +99,29 @@ mod tests {
let names_in_order: Vec<&str> = positions.iter().map(|(_, n)| *n).collect();
let mut sorted = names_in_order.clone();
sorted.sort();
assert_eq!(names_in_order, sorted, "commands must appear in alphabetical order");
assert_eq!(
names_in_order, sorted,
"commands must appear in alphabetical order"
);
}
#[test]
fn help_output_includes_ambient() {
let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let output = result.unwrap();
assert!(output.contains("ambient"), "help should list ambient command: {output}");
assert!(
output.contains("ambient"),
"help should list ambient command: {output}"
);
}
#[test]
fn help_output_includes_htop() {
let result = try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let output = result.unwrap();
assert!(output.contains("htop"), "help should list htop command: {output}");
assert!(
output.contains("htop"),
"help should list htop command: {output}"
);
}
}
+63 -9
View File
@@ -152,11 +152,53 @@ fn loc_top_n(project_root: &std::path::Path, top_n: usize) -> String {
fn is_source_extension(ext: &str) -> bool {
matches!(
ext,
"rs" | "ts" | "tsx" | "js" | "jsx" | "py" | "go" | "java" | "c" | "cpp" | "h"
| "hpp" | "cs" | "rb" | "swift" | "kt" | "scala" | "hs" | "ml" | "ex" | "exs"
| "clj" | "lua" | "sh" | "bash" | "zsh" | "fish" | "ps1" | "toml" | "yaml"
| "yml" | "json" | "md" | "html" | "css" | "scss" | "less" | "sql" | "graphql"
| "proto" | "tf" | "hcl" | "nix" | "r" | "jl" | "dart" | "vue" | "svelte"
"rs" | "ts"
| "tsx"
| "js"
| "jsx"
| "py"
| "go"
| "java"
| "c"
| "cpp"
| "h"
| "hpp"
| "cs"
| "rb"
| "swift"
| "kt"
| "scala"
| "hs"
| "ml"
| "ex"
| "exs"
| "clj"
| "lua"
| "sh"
| "bash"
| "zsh"
| "fish"
| "ps1"
| "toml"
| "yaml"
| "yml"
| "json"
| "md"
| "html"
| "css"
| "scss"
| "less"
| "sql"
| "graphql"
| "proto"
| "tf"
| "hcl"
| "nix"
| "r"
| "jl"
| "dart"
| "vue"
| "svelte"
)
}
@@ -202,7 +244,10 @@ mod tests {
"@timmy help",
);
let output = result.unwrap();
assert!(output.contains("loc"), "help should list loc command: {output}");
assert!(
output.contains("loc"),
"help should list loc command: {output}"
);
}
#[test]
@@ -220,7 +265,10 @@ mod tests {
);
// At most 10 entries (numbered lines "1." through "10.")
let count = output.lines().filter(|l| l.contains(". `")).count();
assert!(count <= 10, "default should return at most 10 files, got {count}");
assert!(
count <= 10,
"default should return at most 10 files, got {count}"
);
}
#[test]
@@ -233,7 +281,10 @@ mod tests {
let ctx = make_ctx(&agents, &ambient_rooms, repo_root, "5");
let output = handle_loc(&ctx).unwrap();
let count = output.lines().filter(|l| l.contains(". `")).count();
assert!(count <= 5, "loc 5 should return at most 5 files, got {count}");
assert!(
count <= 5,
"loc 5 should return at most 5 files, got {count}"
);
}
#[test]
@@ -246,7 +297,10 @@ mod tests {
let ctx = make_ctx(&agents, &ambient_rooms, repo_root, "20");
let output = handle_loc(&ctx).unwrap();
let count = output.lines().filter(|l| l.contains(". `")).count();
assert!(count <= 20, "loc 20 should return at most 20 files, got {count}");
assert!(
count <= 20,
"loc 20 should return at most 20 files, got {count}"
);
}
#[test]
+9 -2
@@ -110,7 +110,9 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
// Try content store first.
for id in crate::db::all_content_ids() {
let file_num = id.split('_').next().unwrap_or("");
if file_num == num_str && let Some(c) = crate::db::read_content(&id) {
if file_num == num_str
&& let Some(c) = crate::db::read_content(&id)
{
return crate::io::story_metadata::parse_front_matter(&c)
.ok()
.and_then(|m| m.name);
@@ -119,7 +121,12 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
// Fallback: filesystem scan.
let stages = [
"1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived",
"1_backlog",
"2_current",
"3_qa",
"4_merge",
"5_done",
"6_archived",
];
for stage in &stages {
let dir = root.join(".huskies").join("work").join(stage);
+27 -13
@@ -86,9 +86,7 @@ pub(super) fn handle_test(ctx: &CommandContext) -> Option<String> {
let mut result = format!("**Test: {status}**\n\n");
if tests_passed > 0 || tests_failed > 0 {
result.push_str(&format!(
"{tests_passed} passed, {tests_failed} failed\n\n"
));
result.push_str(&format!("{tests_passed} passed, {tests_failed} failed\n\n"));
}
result.push_str(&format!("```\n{truncated}\n```"));
@@ -128,7 +126,11 @@ fn parse_test_counts(output: &str) -> (u64, u64) {
fn extract_count(line: &str, label: &str) -> Option<u64> {
let pos = line.find(label)?;
let before = line[..pos].trim_end();
let num_str: String = before.chars().rev().take_while(|c| c.is_ascii_digit()).collect();
let num_str: String = before
.chars()
.rev()
.take_while(|c| c.is_ascii_digit())
.collect();
if num_str.is_empty() {
return None;
}
@@ -250,10 +252,7 @@ mod tests {
#[test]
fn test_command_works_via_dispatch() {
let dir = tempfile::tempdir().unwrap();
write_script(
dir.path(),
"#!/usr/bin/env bash\necho 'ok'\nexit 0\n",
);
write_script(dir.path(), "#!/usr/bin/env bash\necho 'ok'\nexit 0\n");
let agents = test_agents();
let ambient = test_ambient();
let room_id = "!test:example.com".to_string();
@@ -317,8 +316,14 @@ mod tests {
let ambient = test_ambient();
let ctx = make_ctx(&agents, &ambient, dir.path(), "");
let output = handle_test(&ctx).unwrap();
assert!(output.contains("PASS"), "no-arg should use project root: {output}");
assert!(output.contains('7'), "should show count from project root script: {output}");
assert!(
output.contains("PASS"),
"no-arg should use project root: {output}"
);
assert!(
output.contains('7'),
"should show count from project root script: {output}"
);
}
#[test]
@@ -329,8 +334,14 @@ mod tests {
let ambient = test_ambient();
let ctx = make_ctx(&agents, &ambient, dir.path(), "541");
let output = handle_test(&ctx).unwrap();
assert!(output.contains("PASS"), "should run tests in worktree: {output}");
assert!(output.contains('2'), "should show count from worktree script: {output}");
assert!(
output.contains("PASS"),
"should run tests in worktree: {output}"
);
assert!(
output.contains('2'),
"should show count from worktree script: {output}"
);
}
#[test]
@@ -382,6 +393,9 @@ mod tests {
"run_tests with story number must respond via dispatch"
);
let output = result.unwrap();
assert!(output.contains("PASS"), "should PASS for valid worktree: {output}");
assert!(
output.contains("PASS"),
"should PASS for valid worktree: {output}"
);
}
}
+12 -16
@@ -12,7 +12,7 @@ use super::CommandContext;
use crate::http::mcp::wizard_tools::{
generation_hint, is_script_step, step_output_path, write_if_missing,
};
use crate::io::wizard::{format_wizard_state, StepStatus, WizardState};
use crate::io::wizard::{StepStatus, WizardState, format_wizard_state};
pub(super) fn handle_setup(ctx: &CommandContext) -> Option<String> {
let sub = ctx.args.trim().to_ascii_lowercase();
@@ -84,17 +84,16 @@ fn wizard_confirm_reply(ctx: &CommandContext) -> String {
let content = state.steps[idx].content.clone();
// Write content to disk (only if a file path exists and the file is absent).
let write_msg =
if let (Some(c), Some(ref path)) = (&content, step_output_path(root, step)) {
let executable = is_script_step(step);
match write_if_missing(path, c, executable) {
Ok(true) => format!(" File written: `{}`.", path.display()),
Ok(false) => format!(" File `{}` already exists — skipped.", path.display()),
Err(e) => return format!("Error: {e}"),
}
} else {
String::new()
};
let write_msg = if let (Some(c), Some(ref path)) = (&content, step_output_path(root, step)) {
let executable = is_script_step(step);
match write_if_missing(path, c, executable) {
Ok(true) => format!(" File written: `{}`.", path.display()),
Ok(false) => format!(" File `{}` already exists — skipped.", path.display()),
Err(e) => return format!("Error: {e}"),
}
} else {
String::new()
};
if let Err(e) = state.confirm_step(step) {
return format!("Cannot confirm step: {e}");
@@ -140,10 +139,7 @@ fn wizard_skip_reply(ctx: &CommandContext) -> String {
}
if state.completed {
format!(
"Step '{}' skipped. Setup wizard complete!",
step.label()
)
format!("Step '{}' skipped. Setup wizard complete!", step.label())
} else {
let next = &state.steps[state.current_step_index()];
format!(
+14 -3
@@ -78,9 +78,16 @@ mod tests {
#[test]
fn show_command_appears_in_help() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy help");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy help",
);
let output = result.unwrap();
assert!(output.contains("show"), "help should list show command: {output}");
assert!(
output.contains("show"),
"help should list show command: {output}"
);
}
#[test]
@@ -167,7 +174,11 @@ mod tests {
#[test]
fn show_command_case_insensitive() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy SHOW 1");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy SHOW 1",
);
assert!(result.is_some(), "SHOW should match case-insensitively");
}
}
+70 -28
@@ -119,14 +119,13 @@ fn build_status_from_items(
.collect();
// Read token usage once for all stories to avoid repeated file I/O.
let cost_by_story: HashMap<String, f64> =
crate::agents::token_usage::read_all(project_root)
.unwrap_or_default()
.into_iter()
.fold(HashMap::new(), |mut map, r| {
*map.entry(r.story_id).or_insert(0.0) += r.usage.total_cost_usd;
map
});
let cost_by_story: HashMap<String, f64> = crate::agents::token_usage::read_all(project_root)
.unwrap_or_default()
.into_iter()
.fold(HashMap::new(), |mut map, r| {
*map.entry(r.story_id).or_insert(0.0) += r.usage.total_cost_usd;
map
});
let config = ProjectConfig::load(project_root).ok();
@@ -165,10 +164,8 @@ fn build_status_from_items(
}
// Blocked items: Archived { reason: Blocked } shown with 🔴 indicator.
let mut blocked_items: Vec<&PipelineItem> = items
.iter()
.filter(|i| i.stage.is_blocked())
.collect();
let mut blocked_items: Vec<&PipelineItem> =
items.iter().filter(|i| i.stage.is_blocked()).collect();
blocked_items.sort_by(|a, b| a.story_id.0.cmp(&b.story_id.0));
if !blocked_items.is_empty() {
out.push_str(&format!("**Blocked** ({})\n", blocked_items.len()));
@@ -294,13 +291,21 @@ mod tests {
#[test]
fn status_command_matches() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy status");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy status",
);
assert!(result.is_some(), "status command should match");
}
#[test]
fn status_command_returns_pipeline_text() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy status");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy status",
);
let output = result.unwrap();
assert!(
output.contains("Pipeline Status"),
@@ -310,7 +315,11 @@ mod tests {
#[test]
fn status_command_case_insensitive() {
let result = super::super::tests::try_cmd_addressed("Timmy", "@timmy:homeserver.local", "@timmy STATUS");
let result = super::super::tests::try_cmd_addressed(
"Timmy",
"@timmy:homeserver.local",
"@timmy STATUS",
);
assert!(result.is_some(), "STATUS should match case-insensitively");
}
@@ -318,7 +327,10 @@ mod tests {
#[test]
fn short_label_extracts_number_and_name() {
let label = story_short_label("293_story_register_all_bot_commands", Some("Register all bot commands"));
let label = story_short_label(
"293_story_register_all_bot_commands",
Some("Register all bot commands"),
);
assert_eq!(label, "293 [story] — Register all bot commands");
}
@@ -336,7 +348,10 @@ mod tests {
#[test]
fn short_label_does_not_include_underscore_slug() {
let label = story_short_label("293_story_register_all_bot_commands_in_the_command_registry", Some("Register all bot commands"));
let label = story_short_label(
"293_story_register_all_bot_commands_in_the_command_registry",
Some("Register all bot commands"),
);
assert!(
!label.contains("story_register"),
"label should not contain the slug portion: {label}"
@@ -345,19 +360,28 @@ mod tests {
#[test]
fn short_label_shows_bug_type() {
let label = story_short_label("375_bug_default_project_toml", Some("Default project.toml issue"));
let label = story_short_label(
"375_bug_default_project_toml",
Some("Default project.toml issue"),
);
assert_eq!(label, "375 [bug] — Default project.toml issue");
}
#[test]
fn short_label_shows_spike_type() {
let label = story_short_label("61_spike_filesystem_watcher_architecture", Some("Filesystem watcher architecture"));
let label = story_short_label(
"61_spike_filesystem_watcher_architecture",
Some("Filesystem watcher architecture"),
);
assert_eq!(label, "61 [spike] — Filesystem watcher architecture");
}
#[test]
fn short_label_shows_refactor_type() {
let label = story_short_label("260_refactor_upgrade_libsqlite3_sys", Some("Upgrade libsqlite3-sys"));
let label = story_short_label(
"260_refactor_upgrade_libsqlite3_sys",
Some("Upgrade libsqlite3-sys"),
);
assert_eq!(label, "260 [refactor] — Upgrade libsqlite3-sys");
}
@@ -506,7 +530,12 @@ mod tests {
// Story 10 depends on story 999, which is NOT in all_items (treated as met)
// OR present in backlog (unmet). Let's add dep 999 in Backlog stage (unmet).
let items = vec![
make_item_with_deps("10_story_waiting", "Waiting Story", Stage::Coding, vec![999]),
make_item_with_deps(
"10_story_waiting",
"Waiting Story",
Stage::Coding,
vec![999],
),
make_item("999_story_dep", "Dep Story", Stage::Backlog),
];
@@ -526,11 +555,20 @@ mod tests {
// Dep 999 is in Done stage — met.
let items = vec![
make_item_with_deps("10_story_unblocked", "Unblocked Story", Stage::Coding, vec![999]),
make_item("999_story_dep", "Dep Story", Stage::Done {
merged_at: Utc::now(),
merge_commit: crate::pipeline_state::GitSha("abc123".to_string()),
}),
make_item_with_deps(
"10_story_unblocked",
"Unblocked Story",
Stage::Coding,
vec![999],
),
make_item(
"999_story_dep",
"Dep Story",
Stage::Done {
merged_at: Utc::now(),
merge_commit: crate::pipeline_state::GitSha("abc123".to_string()),
},
),
];
let agents = AgentPool::new_test(3000);
@@ -678,8 +716,12 @@ mod tests {
// Must appear under Done, not Backlog.
let done_pos = output.find("**Done**").expect("Done section must exist");
let backlog_pos = output.find("**Backlog**").expect("Backlog section must exist");
let story_pos = output.find("503 [story]").expect("story must appear in output");
let backlog_pos = output
.find("**Backlog**")
.expect("Backlog section must exist");
let story_pos = output
.find("503 [story]")
.expect("story must appear in output");
assert!(
story_pos > done_pos,
+31 -25
@@ -33,17 +33,13 @@ pub(super) fn handle_triage(ctx: &CommandContext) -> Option<String> {
match find_story_by_number(num_str) {
Some((story_id, item)) => Some(build_triage_dump(ctx, &story_id, &item, num_str)),
None => Some(format!(
"Story **{num_str}** not found in the pipeline."
)),
None => Some(format!("Story **{num_str}** not found in the pipeline.")),
}
}
/// Find a pipeline item whose numeric prefix matches `num_str` by querying the
/// CRDT state. Returns `(story_id, PipelineItem)` for the first match.
fn find_story_by_number(
num_str: &str,
) -> Option<(String, crate::pipeline_state::PipelineItem)> {
fn find_story_by_number(num_str: &str) -> Option<(String, crate::pipeline_state::PipelineItem)> {
let items = crate::pipeline_state::read_all_typed();
for item in items {
let file_num = item
@@ -74,7 +70,10 @@ fn build_triage_dump(
};
let meta = crate::io::story_metadata::parse_front_matter(&contents).ok();
let name = meta.as_ref().and_then(|m| m.name.as_deref()).unwrap_or("(unnamed)");
let name = meta
.as_ref()
.and_then(|m| m.name.as_deref())
.unwrap_or("(unnamed)");
let mut out = String::new();
@@ -147,10 +146,7 @@ fn build_triage_dump(
out.push_str(&format!("**Branch:** `{branch}`\n\n"));
// ---- git diff --stat ----
let diff_stat = run_git(
&wt_path,
&["diff", "--stat", "master...HEAD"],
);
let diff_stat = run_git(&wt_path, &["diff", "--stat", "master...HEAD"]);
if !diff_stat.is_empty() {
out.push_str("**Diff stat (vs master):**\n```\n");
out.push_str(&diff_stat);
@@ -162,12 +158,7 @@ fn build_triage_dump(
// ---- Last 5 commits on feature branch ----
let log = run_git(
&wt_path,
&[
"log",
"master..HEAD",
"--pretty=format:%h %s",
"-5",
],
&["log", "master..HEAD", "--pretty=format:%h %s", "-5"],
);
if !log.is_empty() {
out.push_str("**Recent commits (branch only):**\n```\n");
@@ -192,10 +183,15 @@ fn parse_acceptance_criteria(contents: &str) -> Vec<(bool, String)> {
.lines()
.filter_map(|line| {
let trimmed = line.trim();
if let Some(text) = trimmed.strip_prefix("- [x] ").or_else(|| trimmed.strip_prefix("- [X] ")) {
if let Some(text) = trimmed
.strip_prefix("- [x] ")
.or_else(|| trimmed.strip_prefix("- [X] "))
{
Some((true, text.to_string()))
} else {
trimmed.strip_prefix("- [ ] ").map(|text| (false, text.to_string()))
trimmed
.strip_prefix("- [ ] ")
.map(|text| (false, text.to_string()))
}
})
.collect()
@@ -248,7 +244,10 @@ mod tests {
#[test]
fn whatsup_command_is_not_registered() {
let found = super::super::commands().iter().any(|c| c.name == "whatsup");
assert!(!found, "whatsup command must not be in the registry (renamed to status)");
assert!(
!found,
"whatsup command must not be in the registry (renamed to status)"
);
}
#[test]
@@ -340,7 +339,10 @@ mod tests {
"---\nname: Backlog Item\n---\n",
);
let output = status_triage_cmd(tmp.path(), "9901").unwrap();
assert!(output.contains("9901"), "should show story number: {output}");
assert!(
output.contains("9901"),
"should show story number: {output}"
);
assert!(
output.contains("Backlog Item"),
"should show story name: {output}"
@@ -361,7 +363,10 @@ mod tests {
"---\nname: QA Item\n---\n",
);
let output = status_triage_cmd(tmp.path(), "9902").unwrap();
assert!(output.contains("9902"), "should show story number: {output}");
assert!(
output.contains("9902"),
"should show story number: {output}"
);
assert!(
output.contains("QA Item"),
"should show story name: {output}"
@@ -439,7 +444,10 @@ mod tests {
output.contains("depends_on") || output.contains("#477"),
"should show depends_on field: {output}"
);
assert!(output.contains("478"), "should list all dependency numbers: {output}");
assert!(
output.contains("478"),
"should list all dependency numbers: {output}"
);
}
#[test]
@@ -459,7 +467,6 @@ mod tests {
);
}
// -- parse_acceptance_criteria -----------------------------------------
#[test]
@@ -479,5 +486,4 @@ mod tests {
let result = parse_acceptance_criteria(input);
assert!(result.is_empty());
}
}
+29 -17
@@ -5,7 +5,10 @@
//! and returns a confirmation.
use super::CommandContext;
use crate::io::story_metadata::{clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter, set_front_matter_field};
use crate::io::story_metadata::{
clear_front_matter_field, clear_front_matter_field_in_content, parse_front_matter,
set_front_matter_field,
};
use std::path::Path;
/// Handle the `unblock` command.
@@ -37,9 +40,7 @@ pub(crate) fn unblock_by_number(project_root: &Path, story_number: &str) -> Stri
match crate::chat::lookup::find_story_by_number(project_root, story_number) {
Some(found) => found,
None => {
return format!(
"No story, bug, or spike with number **{story_number}** found."
);
return format!("No story, bug, or spike with number **{story_number}** found.");
}
};
@@ -71,9 +72,7 @@ fn unblock_by_story_id(story_id: &str) -> String {
let has_merge_failure = meta.merge_failure.is_some();
if !has_blocked && !has_merge_failure {
return format!(
"**{story_name}** ({story_id}) is not blocked. Nothing to unblock."
);
return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock.");
}
let mut updated = contents;
@@ -94,9 +93,16 @@ fn unblock_by_story_id(story_id: &str) -> String {
crate::db::write_item_with_content(story_id, &stage, &updated);
let mut cleared = Vec::new();
if has_blocked { cleared.push("blocked"); }
if has_merge_failure { cleared.push("merge_failure"); }
format!("Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", cleared.join(", "))
if has_blocked {
cleared.push("blocked");
}
if has_merge_failure {
cleared.push("merge_failure");
}
format!(
"Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.",
cleared.join(", ")
)
}
/// Core unblock logic: reset blocked state for a known story file path.
@@ -121,9 +127,7 @@ pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String {
let has_merge_failure = meta.merge_failure.is_some();
if !has_blocked && !has_merge_failure {
return format!(
"**{story_name}** ({story_id}) is not blocked. Nothing to unblock."
);
return format!("**{story_name}** ({story_id}) is not blocked. Nothing to unblock.");
}
// Clear the blocked flag if present.
@@ -147,9 +151,16 @@ pub(crate) fn unblock_by_path(path: &Path, story_id: &str) -> String {
}
let mut cleared = Vec::new();
if has_blocked { cleared.push("blocked"); }
if has_merge_failure { cleared.push("merge_failure"); }
format!("Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.", cleared.join(", "))
if has_blocked {
cleared.push("blocked");
}
if has_merge_failure {
cleared.push("merge_failure");
}
format!(
"Unblocked **{story_name}** ({story_id}). Cleared: {}. Retry count reset to 0.",
cleared.join(", ")
)
}
// ---------------------------------------------------------------------------
@@ -276,7 +287,8 @@ mod tests {
let contents = crate::db::read_content("9903_story_stuck")
.or_else(|| {
std::fs::read_to_string(
tmp.path().join(".huskies/work/2_current/9903_story_stuck.md"),
tmp.path()
.join(".huskies/work/2_current/9903_story_stuck.md"),
)
.ok()
})
+28 -18
@@ -17,9 +17,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option<String> {
if commits.is_empty() {
let msg = match &tag {
Some(t) => format!(
"No unreleased stories since the last release tag **{t}**."
),
Some(t) => format!("No unreleased stories since the last release tag **{t}**."),
None => "No release tags found and no story merge commits on master.".to_string(),
};
return Some(msg);
@@ -36,9 +34,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option<String> {
if stories.is_empty() {
let msg = match &tag {
Some(t) => format!(
"No unreleased stories since the last release tag **{t}**."
),
Some(t) => format!("No unreleased stories since the last release tag **{t}**."),
None => "No release tags found and no story merge commits on master.".to_string(),
};
return Some(msg);
@@ -50,8 +46,7 @@ pub(super) fn handle_unreleased(ctx: &CommandContext) -> Option<String> {
None => "**Unreleased stories (no prior release tag):**\n\n".to_string(),
};
for (num, slug) in &stories {
let name = find_story_name(root, &num.to_string())
.unwrap_or_else(|| slug_to_name(slug));
let name = find_story_name(root, &num.to_string()).unwrap_or_else(|| slug_to_name(slug));
out.push_str(&format!("- **{num}** — {name}\n"));
}
Some(out)
@@ -79,10 +74,7 @@ fn find_last_release_tag(root: &std::path::Path) -> Option<String> {
/// Return the subjects of all `huskies: merge …` commits reachable from HEAD
/// but not from `since_tag` (or all commits when `since_tag` is `None`).
fn list_merge_commits_since(
root: &std::path::Path,
since_tag: Option<&str>,
) -> Vec<String> {
fn list_merge_commits_since(root: &std::path::Path, since_tag: Option<&str>) -> Vec<String> {
use std::process::Command;
let range = match since_tag {
@@ -153,7 +145,9 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
// Try content store first.
for id in crate::db::all_content_ids() {
let file_num = id.split('_').next().unwrap_or("");
if file_num == num_str && let Some(c) = crate::db::read_content(&id) {
if file_num == num_str
&& let Some(c) = crate::db::read_content(&id)
{
return crate::io::story_metadata::parse_front_matter(&c)
.ok()
.and_then(|m| m.name);
@@ -162,7 +156,12 @@ fn find_story_name(root: &std::path::Path, num_str: &str) -> Option<String> {
// Fallback: filesystem scan.
const STAGES: &[&str] = &[
"1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived",
"1_backlog",
"2_current",
"3_qa",
"4_merge",
"5_done",
"6_archived",
];
for stage in STAGES {
let dir = root.join(".huskies").join("work").join(stage);
@@ -225,7 +224,9 @@ mod tests {
#[test]
fn unreleased_command_is_registered() {
let found = super::super::commands().iter().any(|c| c.name == "unreleased");
let found = super::super::commands()
.iter()
.any(|c| c.name == "unreleased");
assert!(found, "unreleased command must be in the registry");
}
@@ -249,7 +250,10 @@ mod tests {
let tmp = tempfile::TempDir::new().unwrap();
let output = unreleased_cmd_with_root(tmp.path()).unwrap();
// Should return some message (not panic), either about no tags or no commits.
assert!(!output.is_empty(), "should return a non-empty message: {output}");
assert!(
!output.is_empty(),
"should return a non-empty message: {output}"
);
}
#[test]
@@ -261,7 +265,10 @@ mod tests {
let output = unreleased_cmd_with_root(repo_root).unwrap();
// The response should mention "unreleased" or "no unreleased" — just make
// sure it's non-empty and doesn't panic.
assert!(!output.is_empty(), "should return a non-empty message: {output}");
assert!(
!output.is_empty(),
"should return a non-empty message: {output}"
);
}
#[test]
@@ -271,7 +278,10 @@ mod tests {
"@timmy:homeserver.local",
"@timmy UNRELEASED",
);
assert!(result.is_some(), "UNRELEASED should match case-insensitively");
assert!(
result.is_some(),
"UNRELEASED should match case-insensitively"
);
}
// -- parse_story_from_subject ------------------------------------------
+4 -1
@@ -80,7 +80,10 @@ mod tests {
fn not_found_returns_none() {
let tmp = tempfile::TempDir::new().unwrap();
let result = find_story_by_number(tmp.path(), "999");
assert!(result.is_none(), "should return None when story is not found");
assert!(
result.is_none(),
"should return None when story is not found"
);
}
#[test]
+9 -7
@@ -6,11 +6,11 @@
pub mod commands;
pub(crate) mod lookup;
#[cfg(test)]
pub(crate) mod test_helpers;
pub mod timer;
pub mod transport;
pub mod util;
#[cfg(test)]
pub(crate) mod test_helpers;
use async_trait::async_trait;
@@ -96,8 +96,9 @@ mod tests {
fn assert_transport<T: ChatTransport>() {}
assert_transport::<crate::chat::transport::slack::SlackTransport>();
let _: Arc<dyn ChatTransport> =
Arc::new(crate::chat::transport::slack::SlackTransport::new("xoxb-test".to_string()));
let _: Arc<dyn ChatTransport> = Arc::new(
crate::chat::transport::slack::SlackTransport::new("xoxb-test".to_string()),
);
}
/// Verify that TwilioWhatsAppTransport satisfies the ChatTransport trait
@@ -107,11 +108,12 @@ mod tests {
fn assert_transport<T: ChatTransport>() {}
assert_transport::<crate::chat::transport::whatsapp::TwilioWhatsAppTransport>();
let _: Arc<dyn ChatTransport> =
Arc::new(crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new(
let _: Arc<dyn ChatTransport> = Arc::new(
crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new(
"ACtest".to_string(),
"authtoken".to_string(),
"+14155551234".to_string(),
));
),
);
}
}
+48 -67
@@ -161,10 +161,7 @@ pub(crate) async fn tick_once(
}
let remaining = store.list().len();
crate::slog!(
"[timer] Tick: {} due, {remaining} remaining",
due.len()
);
crate::slog!("[timer] Tick: {} due, {remaining} remaining", due.len());
for entry in due {
crate::slog!("[timer] Timer fired for story {}", entry.story_id);
@@ -287,9 +284,7 @@ pub fn spawn_rate_limit_auto_scheduler(
}
Ok(_) => {}
Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
crate::slog!(
"[timer] Rate-limit auto-scheduler lagged, skipped {n} events"
);
crate::slog!("[timer] Rate-limit auto-scheduler lagged, skipped {n} events");
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => {
crate::slog!(
@@ -398,44 +393,43 @@ pub async fn handle_timer_command(
let story_id = match resolve_story_id(&story_number_or_id, project_root) {
Some(id) => id,
None => {
return format!(
"No story with number or ID **{story_number_or_id}** found."
);
return format!("No story with number or ID **{story_number_or_id}** found.");
}
};
// The story must be in backlog or current. When the timer fires,
// backlog stories are moved to current automatically.
// Check CRDT state first, then fall back to filesystem.
let in_valid_stage = if let Ok(Some(item)) = crate::pipeline_state::read_typed(&story_id) {
use crate::pipeline_state::Stage;
matches!(item.stage, Stage::Backlog | Stage::Coding)
} else {
let work_dir = project_root.join(".huskies").join("work");
work_dir.join("1_backlog").join(format!("{story_id}.md")).exists()
|| work_dir.join("2_current").join(format!("{story_id}.md")).exists()
};
let in_valid_stage =
if let Ok(Some(item)) = crate::pipeline_state::read_typed(&story_id) {
use crate::pipeline_state::Stage;
matches!(item.stage, Stage::Backlog | Stage::Coding)
} else {
let work_dir = project_root.join(".huskies").join("work");
work_dir
.join("1_backlog")
.join(format!("{story_id}.md"))
.exists()
|| work_dir
.join("2_current")
.join(format!("{story_id}.md"))
.exists()
};
if !in_valid_stage {
return format!(
"Story **{story_id}** is not in backlog or current."
);
return format!("Story **{story_id}** is not in backlog or current.");
}
let scheduled_at = match next_occurrence_of_hhmm(&hhmm, tz_str) {
Some(t) => t,
None => {
return format!(
"Invalid time **{hhmm}**. Use `HH:MM` format (e.g. `14:30`)."
);
return format!("Invalid time **{hhmm}**. Use `HH:MM` format (e.g. `14:30`).");
}
};
match store.add(story_id.clone(), scheduled_at) {
Ok(()) => {
let (display_time, tz_label) = format_in_timezone(scheduled_at, tz_str);
format!(
"Timer set for **{story_id}** at **{display_time}** ({tz_label})."
)
format!("Timer set for **{story_id}** at **{display_time}** ({tz_label}).")
}
Err(e) => format!("Failed to save timer: {e}"),
}
@@ -448,11 +442,7 @@ pub async fn handle_timer_command(
let mut lines = vec!["**Pending timers:**".to_string()];
for t in &timers {
let (display_time, _) = format_in_timezone(t.scheduled_at, tz_str);
lines.push(format!(
"- **{}** → {}",
t.story_id,
display_time
));
lines.push(format!("- **{}** → {}", t.story_id, display_time));
}
lines.join("\n")
}
@@ -465,13 +455,11 @@ pub async fn handle_timer_command(
format!("No timer found for **{story_id}**.")
}
}
TimerCommand::BadArgs => {
"Usage:\n\
TimerCommand::BadArgs => "Usage:\n\
- `timer <story_id> <HH:MM>` schedule deferred start\n\
- `timer list` show pending timers\n\
- `timer cancel <story_id>` remove a timer"
.to_string()
}
.to_string(),
}
}
@@ -529,10 +517,7 @@ fn format_in_timezone(dt: DateTime<Utc>, timezone: Option<&str>) -> (String, Str
match timezone.and_then(|s| s.parse::<Tz>().ok()) {
Some(tz) => {
let tz_time = dt.with_timezone(&tz);
(
tz_time.format("%Y-%m-%d %H:%M").to_string(),
tz.to_string(),
)
(tz_time.format("%Y-%m-%d %H:%M").to_string(), tz.to_string())
}
None => {
let local_time = dt.with_timezone(&Local);
@@ -571,7 +556,12 @@ fn resolve_story_id(number_or_id: &str, project_root: &Path) -> Option<String> {
// --- DB-first lookup ---
for id in crate::db::all_content_ids() {
let file_num = id.split('_').next().unwrap_or("");
if file_num == number_or_id && crate::pipeline_state::read_typed(&id).ok().flatten().is_some() {
if file_num == number_or_id
&& crate::pipeline_state::read_typed(&id)
.ok()
.flatten()
.is_some()
{
return Some(id);
}
}
@@ -643,14 +633,20 @@ mod tests {
#[test]
fn next_occurrence_with_named_timezone_is_in_the_future() {
let result = next_occurrence_of_hhmm("14:30", Some("Europe/London")).unwrap();
assert!(result > Utc::now(), "next occurrence (Europe/London) must be in the future");
assert!(
result > Utc::now(),
"next occurrence (Europe/London) must be in the future"
);
}
#[test]
fn next_occurrence_with_invalid_timezone_falls_back_to_local() {
// An unrecognised timezone name falls back to chrono::Local (returns Some).
let result = next_occurrence_of_hhmm("14:30", Some("Invalid/Zone"));
assert!(result.is_some(), "invalid timezone should fall back to local and return Some");
assert!(
result.is_some(),
"invalid timezone should fall back to local and return Some"
);
}
// ── extract_timer_command ───────────────────────────────────────────
@@ -679,11 +675,7 @@ mod tests {
#[test]
fn timer_cancel_story_id() {
assert_eq!(
extract_timer_command(
"Timmy timer cancel 421_story_foo",
"Timmy",
"@bot:home"
),
extract_timer_command("Timmy timer cancel 421_story_foo", "Timmy", "@bot:home"),
Some(TimerCommand::Cancel {
story_number_or_id: "421_story_foo".to_string()
})
@@ -701,11 +693,7 @@ mod tests {
#[test]
fn timer_schedule_with_story_id() {
assert_eq!(
extract_timer_command(
"Timmy timer 421_story_foo 14:30",
"Timmy",
"@bot:home"
),
extract_timer_command("Timmy timer 421_story_foo 14:30", "Timmy", "@bot:home"),
Some(TimerCommand::Schedule {
story_number_or_id: "421_story_foo".to_string(),
hhmm: "14:30".to_string(),
@@ -727,11 +715,7 @@ mod tests {
#[test]
fn timer_schedule_missing_time_is_bad_args() {
assert_eq!(
extract_timer_command(
"Timmy timer 421_story_foo",
"Timmy",
"@bot:home"
),
extract_timer_command("Timmy timer 421_story_foo", "Timmy", "@bot:home"),
Some(TimerCommand::BadArgs)
);
}
@@ -944,10 +928,7 @@ mod tests {
dir.path(),
)
.await;
assert!(
result.contains("No timer found"),
"unexpected: {result}"
);
assert!(result.contains("No timer found"), "unexpected: {result}");
}
#[tokio::test]
@@ -1014,10 +995,7 @@ mod tests {
dir.path(),
)
.await;
assert!(
result.contains("Timer set for"),
"unexpected: {result}"
);
assert!(result.contains("Timer set for"), "unexpected: {result}");
assert_eq!(store.list().len(), 1);
}
@@ -1111,7 +1089,10 @@ mod tests {
"story should be in the content store after timer fires"
);
// Timer was consumed.
assert!(store.list().is_empty(), "fired timer should be removed from store");
assert!(
store.list().is_empty(),
"fired timer should be removed from store"
);
}
// ── AC4: tick_once integration test ─────────────────────────────────
+11 -24
@@ -6,9 +6,9 @@ use std::sync::{Arc, Mutex};
use tokio::sync::{Mutex as TokioMutex, oneshot};
use crate::agents::AgentPool;
use crate::chat::ChatTransport;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use crate::chat::util::is_permission_approval;
use crate::chat::ChatTransport;
use crate::http::context::{PermissionDecision, PermissionForward};
use crate::slog;
@@ -42,8 +42,7 @@ pub struct DiscordContext {
/// Permission requests from the MCP `prompt_permission` tool arrive here.
pub perm_rx: Arc<TokioMutex<tokio::sync::mpsc::UnboundedReceiver<PermissionForward>>>,
/// Pending permission replies keyed by channel ID.
pub pending_perm_replies:
Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
pub pending_perm_replies: Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
/// Seconds before an unanswered permission prompt is auto-denied.
pub permission_timeout_secs: u64,
}
@@ -135,16 +134,13 @@ pub(super) async fn handle_incoming_message(
let total_ticks = (duration_secs as usize) / 2;
for tick in 1..=total_ticks {
tokio::time::sleep(interval).await;
let updated =
crate::chat::transport::matrix::htop::build_htop_message(
&agents,
(tick * 2) as u32,
duration_secs,
);
let updated = crate::chat::transport::matrix::htop::build_htop_message(
&agents,
(tick * 2) as u32,
duration_secs,
);
let updated = markdown_to_discord(&updated);
if let Err(e) =
transport.edit_message(&ch, &msg_id, &updated, "").await
{
if let Err(e) = transport.edit_message(&ch, &msg_id, &updated, "").await {
slog!("[discord] Failed to edit htop message: {e}");
break;
}
@@ -320,12 +316,7 @@ pub(super) async fn handle_incoming_message(
}
/// Forward a message to Claude Code and send the response back via Discord.
async fn handle_llm_message(
ctx: &DiscordContext,
channel: &str,
user: &str,
user_message: &str,
) {
async fn handle_llm_message(ctx: &DiscordContext, channel: &str, user: &str, user_message: &str) {
use crate::chat::util::drain_complete_paragraphs;
use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult};
use std::sync::atomic::{AtomicBool, Ordering};
@@ -334,9 +325,7 @@ async fn handle_llm_message(
// Look up existing session ID for this channel.
let resume_session_id: Option<String> = {
let guard = ctx.history.lock().await;
guard
.get(channel)
.and_then(|conv| conv.session_id.clone())
guard.get(channel).and_then(|conv| conv.session_id.clone())
};
let bot_name = &ctx.bot_name;
@@ -446,9 +435,7 @@ async fn handle_llm_message(
let last_text = messages
.iter()
.rev()
.find(|m| {
m.role == crate::llm::types::Role::Assistant && !m.content.is_empty()
})
.find(|m| m.role == crate::llm::types::Role::Assistant && !m.content.is_empty())
.map(|m| m.content.clone())
.unwrap_or_default();
if !last_text.is_empty() {
+10 -25
@@ -150,8 +150,7 @@ async fn run_gateway(ctx: Arc<DiscordContext>) -> Result<(), String> {
.ok_or("Gateway closed before Hello")?
.map_err(|e| format!("Gateway read error: {e}"))?;
let hello_payload: GatewayPayload =
parse_ws_message(&hello).ok_or("Failed to parse Hello")?;
let hello_payload: GatewayPayload = parse_ws_message(&hello).ok_or("Failed to parse Hello")?;
if hello_payload.op != OP_HELLO {
return Err(format!(
@@ -164,8 +163,7 @@ async fn run_gateway(ctx: Arc<DiscordContext>) -> Result<(), String> {
serde_json::from_value(hello_payload.d.ok_or("Hello missing data")?)
.map_err(|e| format!("Failed to parse Hello data: {e}"))?;
let heartbeat_interval =
std::time::Duration::from_millis(hello_data.heartbeat_interval);
let heartbeat_interval = std::time::Duration::from_millis(hello_data.heartbeat_interval);
slog!(
"[discord] Heartbeat interval: {}ms",
hello_data.heartbeat_interval
@@ -258,19 +256,12 @@ async fn run_gateway(ctx: Arc<DiscordContext>) -> Result<(), String> {
&& let Ok(ready) = serde_json::from_value::<ReadyData>(d)
{
bot_user_id = Some(ready.user.id.clone());
slog!(
"[discord] READY — bot user ID: {}",
ready.user.id
);
slog!("[discord] READY — bot user ID: {}", ready.user.id);
}
}
"MESSAGE_CREATE" => {
if let Some(d) = payload.d {
dispatch_message(
Arc::clone(&ctx),
d,
bot_user_id.clone(),
);
dispatch_message(Arc::clone(&ctx), d, bot_user_id.clone());
}
}
_ => {}
@@ -355,15 +346,11 @@ fn dispatch_message(
// Check if the bot was mentioned, or if we respond to all messages in
// configured channels (ambient mode).
let bot_mentioned = bot_user_id.as_ref().is_some_and(|bid| {
msg.mentions.iter().any(|m| m.id == *bid)
});
let bot_mentioned = bot_user_id
.as_ref()
.is_some_and(|bid| msg.mentions.iter().any(|m| m.id == *bid));
let in_ambient = ctx
.ambient_rooms
.lock()
.unwrap()
.contains(&msg.channel_id);
let in_ambient = ctx.ambient_rooms.lock().unwrap().contains(&msg.channel_id);
if !bot_mentioned && !in_ambient {
return;
@@ -392,8 +379,7 @@ fn dispatch_message(
msg.channel_id
);
commands::handle_incoming_message(&ctx, &msg.channel_id, &author.id, &content)
.await;
commands::handle_incoming_message(&ctx, &msg.channel_id, &author.id, &content).await;
});
}
@@ -417,8 +403,7 @@ mod tests {
let json = r#"{"op": 10, "d": {"heartbeat_interval": 41250}}"#;
let payload: GatewayPayload = serde_json::from_str(json).unwrap();
assert_eq!(payload.op, OP_HELLO);
let hello: HelloData =
serde_json::from_value(payload.d.unwrap()).unwrap();
let hello: HelloData = serde_json::from_value(payload.d.unwrap()).unwrap();
assert_eq!(hello.heartbeat_interval, 41250);
}
+8 -17
@@ -181,8 +181,7 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
let result = transport
.send_message("123456", "hello", "<p>hello</p>")
@@ -202,8 +201,7 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
let result = transport.send_message("bad", "hello", "").await;
assert!(result.is_err());
@@ -220,8 +218,7 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
let result = transport
.edit_message("123456", "999888777", "updated", "")
@@ -240,12 +237,9 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
-let result = transport
-.edit_message("123456", "bad", "updated", "")
-.await;
+let result = transport.edit_message("123456", "bad", "updated", "").await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("404"));
}
@@ -259,8 +253,7 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
assert!(transport.send_typing("123456", true).await.is_ok());
}
@@ -281,8 +274,7 @@ mod tests {
.create_async()
.await;
-let transport =
-DiscordTransport::with_api_base("test-token".to_string(), server.url());
+let transport = DiscordTransport::with_api_base("test-token".to_string(), server.url());
let result = transport.send_message("123456", "hello", "").await;
assert!(result.is_err());
@@ -296,7 +288,6 @@ mod tests {
fn assert_transport<T: ChatTransport>() {}
assert_transport::<DiscordTransport>();
-let _: Arc<dyn ChatTransport> =
-Arc::new(DiscordTransport::new("test-token".to_string()));
+let _: Arc<dyn ChatTransport> = Arc::new(DiscordTransport::new("test-token".to_string()));
}
}
+6 -13
@@ -17,10 +17,7 @@ use std::path::Path;
#[derive(Debug, PartialEq)]
pub enum AssignCommand {
/// Assign the story with this number to the given model.
-Assign {
-story_number: String,
-model: String,
-},
+Assign { story_number: String, model: String },
/// The user typed `assign` but without valid arguments.
BadArgs,
}
@@ -96,9 +93,7 @@ pub async fn handle_assign(
match crate::chat::lookup::find_story_by_number(project_root, story_number) {
Some(found) => found,
None => {
-return format!(
-"No story, bug, or spike with number **{story_number}** found."
-);
+return format!("No story, bug, or spike with number **{story_number}** found.");
}
};
@@ -282,11 +277,8 @@ mod tests {
fn extract_assign_command_multibyte_prefix_no_panic() {
// "xxxx⏺ assign 42 opus" — ⏺ (U+23FA) is 3 bytes, starting at byte 4.
// "@timmy" has len 6 so text[..6] lands inside ⏺ — panics without the fix.
-let cmd = extract_assign_command(
-"xxxx\u{23FA} assign 42 opus",
-"Timmy",
-"@timmy:home.local",
-);
+let cmd =
+extract_assign_command("xxxx\u{23FA} assign 42 opus", "Timmy", "@timmy:home.local");
assert_eq!(cmd, None);
}
@@ -453,7 +445,8 @@ mod tests {
);
// Should indicate a restart occurred (not just "will be used when starts")
assert!(
-response.to_lowercase().contains("stop") || response.to_lowercase().contains("reassign"),
+response.to_lowercase().contains("stop")
+|| response.to_lowercase().contains("reassign"),
"response should indicate stop/reassign: {response}"
);
}
@@ -1,7 +1,7 @@
//! Matrix bot context — shared state for the Matrix bot (rooms, history, permissions).
use crate::agents::AgentPool;
-use crate::chat::timer::TimerStore;
use crate::chat::ChatTransport;
+use crate::chat::timer::TimerStore;
use crate::http::context::{PermissionDecision, PermissionForward};
use matrix_sdk::ruma::{OwnedEventId, OwnedRoomId, OwnedUserId};
use std::collections::{HashMap, HashSet};
@@ -104,7 +104,10 @@ mod tests {
#[test]
fn startup_announcement_uses_configured_display_name_not_hardcoded() {
assert_eq!(format_startup_announcement("HAL"), "HAL is online.");
-assert_eq!(format_startup_announcement("Assistant"), "Assistant is online.");
+assert_eq!(
+format_startup_announcement("Assistant"),
+"Assistant is online."
+);
}
#[test]
@@ -71,11 +71,7 @@ pub fn load_history(project_root: &std::path::Path) -> HashMap<OwnedRoomId, Room
persisted
.rooms
.into_iter()
-.filter_map(|(k, v)| {
-k.parse::<OwnedRoomId>()
-.ok()
-.map(|room_id| (room_id, v))
-})
+.filter_map(|(k, v)| k.parse::<OwnedRoomId>().ok().map(|room_id| (room_id, v)))
.collect()
}
@@ -97,9 +97,7 @@ pub fn is_addressed_to_other(body: &str, bot_user_id: &OwnedUserId, bot_name: &s
// Handles both "@localpart" and "@localpart:homeserver" forms.
if let Some(rest) = lower.strip_prefix('@') {
// Extract everything up to the first whitespace character.
-let word_end = rest
-.find(|c: char| c.is_whitespace())
-.unwrap_or(rest.len());
+let word_end = rest.find(|c: char| c.is_whitespace()).unwrap_or(rest.len());
let mention = &rest[..word_end]; // e.g. "sally" or "sally:example.com"
// Strip the homeserver part to get just the localpart.
@@ -82,9 +82,7 @@ pub(super) async fn on_room_message(
// Always let "ambient on" through — it is the one command that must work
// even when the bot is not mentioned and ambient mode is off, otherwise
// there is no way to re-enable ambient mode without an @-mention.
-let is_ambient_on = body
-.to_ascii_lowercase()
-.contains("ambient on");
+let is_ambient_on = body.to_ascii_lowercase().contains("ambient on");
if !is_addressed && !is_ambient && !is_ambient_on {
slog!(
@@ -97,7 +95,9 @@ pub(super) async fn on_room_message(
// In ambient mode, ignore messages that are explicitly addressed to a
// different entity (e.g. "sally: do X" or "@sally do X" when we are stu).
// We still let through messages addressed to us and the "ambient on" command.
-if is_ambient && !is_addressed && !is_ambient_on
+if is_ambient
+&& !is_addressed
+&& !is_ambient_on
&& is_addressed_to_other(&body, &ctx.bot_user_id, &ctx.bot_name)
{
slog!(
@@ -158,7 +158,10 @@ pub(super) async fn on_room_message(
"Permission denied."
};
let html = markdown_to_html(confirmation);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, confirmation, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, confirmation, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -182,9 +185,14 @@ pub(super) async fn on_room_message(
ambient_rooms: &ctx.ambient_rooms,
room_id: &room_id_str,
};
-if let Some((response, response_html)) = super::super::commands::try_handle_command_with_html(&dispatch, &user_message) {
+if let Some((response, response_html)) =
+super::super::commands::try_handle_command_with_html(&dispatch, &user_message)
+{
slog!("[matrix-bot] Handled bot command from {sender}");
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &response_html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &response_html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -224,7 +232,10 @@ pub(super) async fn on_room_message(
}
};
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -272,9 +283,7 @@ pub(super) async fn on_room_message(
) {
let response = match del_cmd {
super::super::delete::DeleteCommand::Delete { story_number } => {
-slog!(
-"[matrix-bot] Handling delete command from {sender}: story {story_number}"
-);
+slog!("[matrix-bot] Handling delete command from {sender}: story {story_number}");
super::super::delete::handle_delete(
&ctx.bot_name,
&story_number,
@@ -288,7 +297,10 @@ pub(super) async fn on_room_message(
}
};
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -305,9 +317,7 @@ pub(super) async fn on_room_message(
) {
let response = match rmtree_cmd {
super::super::rmtree::RmtreeCommand::Rmtree { story_number } => {
-slog!(
-"[matrix-bot] Handling rmtree command from {sender}: story {story_number}"
-);
+slog!("[matrix-bot] Handling rmtree command from {sender}: story {story_number}");
super::super::rmtree::handle_rmtree(
&ctx.bot_name,
&story_number,
@@ -321,7 +331,10 @@ pub(super) async fn on_room_message(
}
};
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -361,7 +374,10 @@ pub(super) async fn on_room_message(
}
};
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -387,7 +403,10 @@ pub(super) async fn on_room_message(
)
.await;
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -408,19 +427,22 @@ pub(super) async fn on_room_message(
// Acknowledge immediately — the rebuild may take a while or re-exec.
let ack = "Rebuilding server… this may take a moment.";
let ack_html = markdown_to_html(ack);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, ack, &ack_html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, ack, &ack_html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
}
-let response = super::super::rebuild::handle_rebuild(
-&ctx.bot_name,
-&ctx.project_root,
-&ctx.agents,
-)
-.await;
+let response =
+super::super::rebuild::handle_rebuild(&ctx.bot_name, &ctx.project_root, &ctx.agents)
+.await;
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -443,7 +465,10 @@ pub(super) async fn on_room_message(
)
.await;
let html = markdown_to_html(&response);
-if let Ok(msg_id) = ctx.transport.send_message(&room_id_str, &response, &html).await
+if let Ok(msg_id) = ctx
+.transport
+.send_message(&room_id_str, &response, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
ctx.bot_sent_event_ids.lock().await.insert(event_id);
@@ -470,9 +495,7 @@ pub(super) async fn handle_message(
// flattening history into a text prefix.
let resume_session_id: Option<String> = {
let guard = ctx.history.lock().await;
-guard
-.get(&room_id)
-.and_then(|conv| conv.session_id.clone())
+guard.get(&room_id).and_then(|conv| conv.session_id.clone())
};
// The prompt is just the current message with sender attribution.
@@ -501,7 +524,9 @@ pub(super) async fn handle_message(
let post_task = tokio::spawn(async move {
while let Some(chunk) = msg_rx.recv().await {
let html = markdown_to_html(&chunk);
-if let Ok(msg_id) = post_transport.send_message(&post_room_id, &chunk, &html).await
+if let Ok(msg_id) = post_transport
+.send_message(&post_room_id, &chunk, &html)
+.await
&& let Ok(event_id) = msg_id.parse()
{
sent_ids_for_post.lock().await.insert(event_id);
@@ -631,9 +656,7 @@ pub(super) async fn handle_message(
Err(e) => {
slog!("[matrix-bot] LLM error: {e}");
let err_msg = if let Some(url) = crate::llm::oauth::extract_login_url_from_error(&e) {
-format!(
-"Authentication required. [Click here to log in to Claude]({url})"
-)
+format!("Authentication required. [Click here to log in to Claude]({url})")
} else {
format!("Error processing your request: {e}")
};
@@ -654,7 +677,11 @@ pub(super) async fn handle_message(
let conv = guard.entry(room_id).or_default();
// Store the session ID so the next turn uses --resume.
-slog!("[matrix-bot] storing session_id: {:?} (was: {:?})", new_session_id, conv.session_id);
+slog!(
+"[matrix-bot] storing session_id: {:?} (was: {:?})",
+new_session_id,
+conv.session_id
+);
if new_session_id.is_some() {
conv.session_id = new_session_id;
}
@@ -713,7 +740,10 @@ mod tests {
let err = "OAuth session expired or credentials missing. Please log in: http://localhost:3001/oauth/authorize";
let url = crate::llm::oauth::extract_login_url_from_error(err);
assert!(url.is_some(), "should extract URL from OAuth error");
-let msg = format!("Authentication required. [Click here to log in to Claude]({})", url.unwrap());
+let msg = format!(
+"Authentication required. [Click here to log in to Claude]({})",
+url.unwrap()
+);
assert!(msg.contains("http://localhost:3001/oauth/authorize"));
assert!(msg.contains("[Click here to log in to Claude]"));
}
+30 -30
@@ -1,12 +1,12 @@
//! Matrix bot run loop — connects to the homeserver and processes sync events.
use crate::agents::AgentPool;
use crate::slog;
-use matrix_sdk::{Client, LoopCtrl, config::SyncSettings};
use matrix_sdk::ruma::OwnedRoomId;
-use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
+use matrix_sdk::{Client, LoopCtrl, config::SyncSettings};
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
use std::sync::Arc;
+use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use tokio::sync::Mutex as TokioMutex;
use tokio::sync::{mpsc, watch};
@@ -73,7 +73,10 @@ pub async fn run_bot(
.ok_or_else(|| "No user ID after login".to_string())?
.to_owned();
-slog!("[matrix-bot] Logged in as {bot_user_id} (device: {})", login_response.device_id);
+slog!(
+"[matrix-bot] Logged in as {bot_user_id} (device: {})",
+login_response.device_id
+);
// Bootstrap cross-signing keys for E2EE verification support.
// Pass the bot's password for UIA (User-Interactive Authentication) —
@@ -81,9 +84,7 @@ pub async fn run_bot(
{
use matrix_sdk::ruma::api::client::uiaa;
let password_auth = uiaa::AuthData::Password(uiaa::Password::new(
-uiaa::UserIdentifier::UserIdOrLocalpart(
-config.username.clone().unwrap_or_default(),
-),
+uiaa::UserIdentifier::UserIdOrLocalpart(config.username.clone().unwrap_or_default()),
config.password.clone().unwrap_or_default(),
));
if let Err(e) = client
@@ -171,11 +172,7 @@ pub async fn run_bot(
);
// Restore persisted ambient rooms from config.
-let persisted_ambient: HashSet<String> = config
-.ambient_rooms
-.iter()
-.cloned()
-.collect();
+let persisted_ambient: HashSet<String> = config.ambient_rooms.iter().cloned().collect();
if !persisted_ambient.is_empty() {
slog!(
"[matrix-bot] Restored ambient mode for {} room(s): {:?}",
@@ -189,11 +186,13 @@ pub async fn run_bot(
"whatsapp" => {
if config.whatsapp_provider == "twilio" {
slog!("[matrix-bot] Using WhatsApp/Twilio transport");
-Arc::new(crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new(
-config.twilio_account_sid.clone().unwrap_or_default(),
-config.twilio_auth_token.clone().unwrap_or_default(),
-config.twilio_whatsapp_number.clone().unwrap_or_default(),
-))
+Arc::new(
+crate::chat::transport::whatsapp::TwilioWhatsAppTransport::new(
+config.twilio_account_sid.clone().unwrap_or_default(),
+config.twilio_auth_token.clone().unwrap_or_default(),
+config.twilio_whatsapp_number.clone().unwrap_or_default(),
+),
+)
} else {
slog!("[matrix-bot] Using WhatsApp/Meta transport");
Arc::new(crate::chat::transport::whatsapp::WhatsAppTransport::new(
@@ -208,7 +207,9 @@ pub async fn run_bot(
}
_ => {
slog!("[matrix-bot] Using Matrix transport");
-Arc::new(super::super::transport_impl::MatrixTransport::new(client.clone()))
+Arc::new(super::super::transport_impl::MatrixTransport::new(
+client.clone(),
+))
}
};
@@ -222,10 +223,7 @@ pub async fn run_bot(
project_root.join(".huskies").join("timers.json"),
));
// Auto-schedule timers when an agent hits a hard rate limit.
-crate::chat::timer::spawn_rate_limit_auto_scheduler(
-Arc::clone(&timer_store),
-watcher_rx_auto,
-);
+crate::chat::timer::spawn_rate_limit_auto_scheduler(Arc::clone(&timer_store), watcher_rx_auto);
let ctx = BotContext {
bot_user_id,
@@ -246,7 +244,9 @@ pub async fn run_bot(
timer_store,
};
-slog!("[matrix-bot] Cryptographic identity verification is always ON — commands from unencrypted rooms or unverified devices are rejected");
+slog!(
+"[matrix-bot] Cryptographic identity verification is always ON — commands from unencrypted rooms or unverified devices are rejected"
+);
// Register event handlers and inject shared context.
client.add_event_handler_context(ctx);
@@ -256,8 +256,7 @@ pub async fn run_bot(
// Spawn the stage-transition notification listener before entering the
// sync loop so it starts receiving watcher events immediately.
-let notif_room_id_strings: Vec<String> =
-notif_room_ids.iter().map(|r| r.to_string()).collect();
+let notif_room_id_strings: Vec<String> = notif_room_ids.iter().map(|r| r.to_string()).collect();
super::super::notifications::spawn_notification_listener(
Arc::clone(&transport),
move || notif_room_id_strings.clone(),
@@ -269,8 +268,7 @@ pub async fn run_bot(
// configured rooms when the server is about to stop (SIGINT/SIGTERM or rebuild).
{
let shutdown_transport = Arc::clone(&transport);
-let shutdown_rooms: Vec<String> =
-announce_room_ids.iter().map(|r| r.to_string()).collect();
+let shutdown_rooms: Vec<String> = announce_room_ids.iter().map(|r| r.to_string()).collect();
let shutdown_bot_name = announce_bot_name.clone();
let mut rx = shutdown_rx;
tokio::spawn(async move {
@@ -400,8 +398,7 @@ mod tests {
#[test]
fn io_error_is_not_fatal() {
let e: matrix_sdk::Error =
-std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "connection refused")
-.into();
+std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "connection refused").into();
assert!(!is_fatal_sync_error(&e));
}
@@ -423,7 +420,11 @@ mod tests {
const MAX_BACKOFF_SECS: u64 = 300;
let steps: Vec<u64> = std::iter::successors(Some(5u64), |&d| {
let next = (d * 2).min(MAX_BACKOFF_SECS);
-if next < MAX_BACKOFF_SECS { Some(next) } else { None }
+if next < MAX_BACKOFF_SECS {
+Some(next)
+} else {
+None
+}
})
.collect();
// First few steps: 5, 10, 20, 40, 80, 160
@@ -433,4 +434,3 @@ mod tests {
assert_eq!(steps[3], 40);
}
}
@@ -84,8 +84,9 @@ pub(super) async fn on_to_device_verification_request(
}
break;
}
-VerificationRequestState::Done
-| VerificationRequestState::Cancelled(_) => break,
+VerificationRequestState::Done | VerificationRequestState::Cancelled(_) => {
+break;
+}
_ => {}
}
}
@@ -100,10 +101,7 @@ pub(super) async fn on_to_device_verification_request(
/// Modern Element sends `m.key.verification.request` as an `m.room.message`
/// event rather than a to-device event. We look for that message type and
/// drive the same SAS flow as the to-device handler.
-pub(super) async fn on_room_verification_request(
-ev: OriginalSyncRoomMessageEvent,
-client: Client,
-) {
+pub(super) async fn on_room_verification_request(ev: OriginalSyncRoomMessageEvent, client: Client) {
// Only act on in-room verification request messages.
if !matches!(ev.content.msgtype, MessageType::VerificationRequest(_)) {
return;
@@ -152,8 +150,9 @@ pub(super) async fn on_room_verification_request(
}
break;
}
-VerificationRequestState::Done
-| VerificationRequestState::Cancelled(_) => break,
+VerificationRequestState::Done | VerificationRequestState::Cancelled(_) => {
+break;
+}
_ => {}
}
}
+53 -46
@@ -77,7 +77,6 @@ pub struct BotConfig {
// ── WhatsApp Business API fields ─────────────────────────────────
// These are only required when `transport = "whatsapp"`.
/// WhatsApp Business phone number ID from the Meta dashboard.
#[serde(default)]
pub whatsapp_phone_number_id: Option<String>,
@@ -105,7 +104,6 @@ pub struct BotConfig {
// ── Twilio WhatsApp fields ─────────────────────────────────────────
// Only required when `transport = "whatsapp"` and `whatsapp_provider = "twilio"`.
/// Twilio Account SID (starts with `AC`).
#[serde(default)]
pub twilio_account_sid: Option<String>,
@@ -126,7 +124,6 @@ pub struct BotConfig {
// ── Slack Bot API fields ─────────────────────────────────────────
// These are only required when `transport = "slack"`.
/// Slack Bot User OAuth Token (starts with `xoxb-`).
#[serde(default)]
pub slack_bot_token: Option<String>,
@@ -139,7 +136,6 @@ pub struct BotConfig {
// ── Discord Bot API fields ──────────────────────────────────────
// These are only required when `transport = "discord"`.
/// Discord bot token from the Discord Developer Portal.
#[serde(default)]
pub discord_bot_token: Option<String>,
@@ -189,21 +185,33 @@ impl BotConfig {
if config.transport == "whatsapp" {
if config.whatsapp_provider == "twilio" {
// Validate Twilio-specific fields.
-if config.twilio_account_sid.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.twilio_account_sid
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: whatsapp_provider=\"twilio\" requires \
twilio_account_sid"
);
return None;
}
-if config.twilio_auth_token.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.twilio_auth_token
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: whatsapp_provider=\"twilio\" requires \
twilio_auth_token"
);
return None;
}
-if config.twilio_whatsapp_number.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.twilio_whatsapp_number
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: whatsapp_provider=\"twilio\" requires \
twilio_whatsapp_number"
@@ -212,21 +220,33 @@ impl BotConfig {
}
} else {
// Validate Meta (default) WhatsApp fields.
-if config.whatsapp_phone_number_id.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.whatsapp_phone_number_id
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: transport=\"whatsapp\" requires \
whatsapp_phone_number_id"
);
return None;
}
-if config.whatsapp_access_token.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.whatsapp_access_token
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: transport=\"whatsapp\" requires \
whatsapp_access_token"
);
return None;
}
-if config.whatsapp_verify_token.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.whatsapp_verify_token
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: transport=\"whatsapp\" requires \
whatsapp_verify_token"
@@ -243,7 +263,11 @@ impl BotConfig {
);
return None;
}
-if config.slack_signing_secret.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.slack_signing_secret
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: transport=\"slack\" requires \
slack_signing_secret"
@@ -259,7 +283,11 @@ impl BotConfig {
}
} else if config.transport == "discord" {
// Validate Discord-specific fields.
-if config.discord_bot_token.as_ref().is_none_or(|s| s.is_empty()) {
+if config
+.discord_bot_token
+.as_ref()
+.is_none_or(|s| s.is_empty())
+{
eprintln!(
"[bot] bot.toml: transport=\"discord\" requires \
discord_bot_token"
@@ -276,21 +304,15 @@ impl BotConfig {
} else {
// Default transport is Matrix — validate Matrix-specific fields.
if config.homeserver.as_ref().is_none_or(|s| s.is_empty()) {
-eprintln!(
-"[bot] bot.toml: transport=\"matrix\" requires homeserver"
-);
+eprintln!("[bot] bot.toml: transport=\"matrix\" requires homeserver");
return None;
}
if config.username.as_ref().is_none_or(|s| s.is_empty()) {
-eprintln!(
-"[bot] bot.toml: transport=\"matrix\" requires username"
-);
+eprintln!("[bot] bot.toml: transport=\"matrix\" requires username");
return None;
}
if config.password.as_ref().is_none_or(|s| s.is_empty()) {
-eprintln!(
-"[bot] bot.toml: transport=\"matrix\" requires password"
-);
+eprintln!("[bot] bot.toml: transport=\"matrix\" requires password");
return None;
}
if config.room_ids.is_empty() {
@@ -402,7 +424,10 @@ enabled = true
let result = BotConfig::load(tmp.path());
assert!(result.is_some());
let config = result.unwrap();
-assert_eq!(config.homeserver.as_deref(), Some("https://matrix.example.com"));
+assert_eq!(
+config.homeserver.as_deref(),
+Some("https://matrix.example.com")
+);
assert_eq!(config.username.as_deref(), Some("@bot:example.com"));
assert_eq!(
config.effective_room_ids(),
@@ -761,18 +786,9 @@ whatsapp_verify_token = "my-verify"
.unwrap();
let config = BotConfig::load(tmp.path()).unwrap();
assert_eq!(config.transport, "whatsapp");
-assert_eq!(
-config.whatsapp_phone_number_id.as_deref(),
-Some("123456")
-);
-assert_eq!(
-config.whatsapp_access_token.as_deref(),
-Some("EAAtoken")
-);
-assert_eq!(
-config.whatsapp_verify_token.as_deref(),
-Some("my-verify")
-);
+assert_eq!(config.whatsapp_phone_number_id.as_deref(), Some("123456"));
+assert_eq!(config.whatsapp_access_token.as_deref(), Some("EAAtoken"));
+assert_eq!(config.whatsapp_verify_token.as_deref(), Some("my-verify"));
}
#[test]
@@ -1106,14 +1122,8 @@ discord_channel_ids = ["123456789012345678"]
.unwrap();
let config = BotConfig::load(tmp.path()).unwrap();
assert_eq!(config.transport, "discord");
-assert_eq!(
-config.discord_bot_token.as_deref(),
-Some("Bot.Token.Here")
-);
-assert_eq!(
-config.discord_channel_ids,
-vec!["123456789012345678"]
-);
+assert_eq!(config.discord_bot_token.as_deref(), Some("Bot.Token.Here"));
+assert_eq!(config.discord_channel_ids, vec!["123456789012345678"]);
}
#[test]
@@ -1176,9 +1186,6 @@ discord_allowed_users = ["111222333", "444555666"]
"#,
)
.unwrap();
-assert_eq!(
-config.discord_allowed_users,
-vec!["111222333", "444555666"]
-);
+assert_eq!(config.discord_allowed_users, vec!["111222333", "444555666"]);
}
}
+1 -3
@@ -65,9 +65,7 @@ pub async fn handle_delete(
match crate::chat::lookup::find_story_by_number(project_root, story_number) {
Some(found) => found,
None => {
-return format!(
-"No story, bug, or spike with number **{story_number}** found."
-);
+return format!("No story, bug, or spike with number **{story_number}** found.");
}
};
+18 -5
@@ -13,9 +13,9 @@ use std::time::Duration;
use tokio::sync::{Mutex as TokioMutex, watch};
use crate::agents::{AgentPool, AgentStatus};
+use crate::chat::ChatTransport;
use crate::chat::util::strip_bot_mention;
use crate::slog;
-use crate::chat::ChatTransport;
use super::bot::markdown_to_html;
@@ -51,7 +51,11 @@ pub type HtopSessions = Arc<TokioMutex<HashMap<String, HtopSession>>>;
/// - `htop stop` → `Stop`
/// - `htop 10m` → `Start { duration_secs: 600 }`
/// - `htop 120` → `Start { duration_secs: 120 }` (bare seconds)
-pub fn extract_htop_command(message: &str, bot_name: &str, bot_user_id: &str) -> Option<HtopCommand> {
+pub fn extract_htop_command(
+message: &str,
+bot_name: &str,
+bot_user_id: &str,
+) -> Option<HtopCommand> {
let stripped = strip_bot_mention(message, bot_name, bot_user_id);
let trimmed = stripped.trim();
@@ -261,7 +265,10 @@ pub async fn run_htop_loop(
let text = build_htop_message(&agents, tick as u32, duration_secs);
let html = markdown_to_html(&text);
-if let Err(e) = transport.edit_message(&room_id, &initial_message_id, &text, &html).await {
+if let Err(e) = transport
+.edit_message(&room_id, &initial_message_id, &text, &html)
+.await
+{
slog!("[htop] Failed to update message: {e}");
return;
}
@@ -274,7 +281,10 @@ pub async fn run_htop_loop(
async fn send_stopped_message(transport: &dyn ChatTransport, room_id: &str, message_id: &str) {
let text = "**htop** — monitoring stopped.";
let html = markdown_to_html(text);
-if let Err(e) = transport.edit_message(room_id, message_id, text, &html).await {
+if let Err(e) = transport
+.edit_message(room_id, message_id, text, &html)
+.await
+{
slog!("[htop] Failed to send stop message: {e}");
}
}
@@ -302,7 +312,10 @@ pub async fn handle_htop_start(
// Send the initial message.
let initial_text = build_htop_message(&agents, 0, duration_secs);
let initial_html = markdown_to_html(&initial_text);
-let message_id = match transport.send_message(room_id, &initial_text, &initial_html).await {
+let message_id = match transport
+.send_message(room_id, &initial_text, &initial_html)
+.await
+{
Ok(id) => id,
Err(e) => {
slog!("[htop] Failed to send initial message: {e}");
+11 -4
@@ -21,11 +21,11 @@ pub mod commands;
pub(crate) mod config;
pub mod delete;
pub mod htop;
+pub mod notifications;
pub mod rebuild;
pub mod reset;
pub mod rmtree;
pub mod start;
-pub mod notifications;
pub mod transport_impl;
pub use bot::{ConversationEntry, ConversationRole, RoomConversation};
@@ -92,9 +92,16 @@ pub fn spawn_bot(
let watcher_rx = watcher_tx.subscribe();
let watcher_rx_auto = watcher_tx.subscribe();
tokio::spawn(async move {
-if let Err(e) =
-bot::run_bot(config, root, watcher_rx, watcher_rx_auto, perm_rx, agents, shutdown_rx)
-.await
+if let Err(e) = bot::run_bot(
+config,
+root,
+watcher_rx,
+watcher_rx_auto,
+perm_rx,
+agents,
+shutdown_rx,
+)
+.await
{
crate::slog!("[matrix-bot] Fatal error: {e}");
}
+236 -214
@@ -3,11 +3,11 @@
//! Subscribes to [`WatcherEvent`] broadcasts and posts a notification to all
//! configured Matrix rooms whenever a work item moves between pipeline stages.
+use crate::chat::ChatTransport;
use crate::config::ProjectConfig;
use crate::io::story_metadata::parse_front_matter;
use crate::io::watcher::WatcherEvent;
use crate::slog;
-use crate::chat::ChatTransport;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::Arc;
@@ -81,9 +81,7 @@ pub fn format_error_notification(
let name = story_name.unwrap_or(item_id);
let plain = format!("\u{274c} #{number} {name} \u{2014} {reason}");
-let html = format!(
-"\u{274c} <strong>#{number}</strong> <em>{name}</em> \u{2014} {reason}"
-);
+let html = format!("\u{274c} <strong>#{number}</strong> <em>{name}</em> \u{2014} {reason}");
(plain, html)
}
@@ -113,9 +111,8 @@ pub fn format_blocked_notification(
let name = story_name.unwrap_or(item_id);
let plain = format!("\u{1f6ab} #{number} {name} \u{2014} BLOCKED: {reason}");
-let html = format!(
-"\u{1f6ab} <strong>#{number}</strong> <em>{name}</em> \u{2014} BLOCKED: {reason}"
-);
+let html =
+format!("\u{1f6ab} <strong>#{number}</strong> <em>{name}</em> \u{2014} BLOCKED: {reason}");
(plain, html)
}
@@ -126,7 +123,6 @@ const RATE_LIMIT_DEBOUNCE: Duration = Duration::from_secs(60);
/// into a single notification (only the final stage is announced).
const STAGE_TRANSITION_DEBOUNCE: Duration = Duration::from_millis(200);
/// Format a rate limit warning notification message.
///
/// Returns `(plain_text, html)` suitable for `ChatTransport::send_message`.
@@ -138,9 +134,8 @@ pub fn format_rate_limit_notification(
let number = extract_story_number(item_id).unwrap_or(item_id);
let name = story_name.unwrap_or(item_id);
-let plain = format!(
-"\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit"
-);
+let plain =
+format!("\u{26a0}\u{fe0f} #{number} {name} \u{2014} {agent_name} hit an API rate limit");
let html = format!(
"\u{26a0}\u{fe0f} <strong>#{number}</strong> <em>{name}</em> \u{2014} \
{agent_name} hit an API rate limit"
@@ -223,9 +218,7 @@ pub fn spawn_notification_listener(
// and must be skipped — the old inferred_from_stage fallback
// produced wrong notifications for stories that skipped stages
// (e.g. "QA → Merge" when QA was never entered).
-let from_display = from_stage
-.as_deref()
-.map(stage_display_name);
+let from_display = from_stage.as_deref().map(stage_display_name);
let Some(from_display) = from_display else {
continue; // creation or unknown transition — skip
};
@@ -246,33 +239,24 @@ pub fn spawn_notification_listener(
e.2 = story_name.clone();
}
})
-.or_insert_with(|| {
-(from_display.to_string(), stage.clone(), story_name)
-});
+.or_insert_with(|| (from_display.to_string(), stage.clone(), story_name));
// Start or extend the debounce window.
flush_deadline =
Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE);
flush_deadline = Some(tokio::time::Instant::now() + STAGE_TRANSITION_DEBOUNCE);
}
Ok(WatcherEvent::MergeFailure {
ref story_id,
ref reason,
}) => {
let story_name =
read_story_name(&project_root, "4_merge", story_id);
let (plain, html) = format_error_notification(
story_id,
story_name.as_deref(),
reason,
);
let story_name = read_story_name(&project_root, "4_merge", story_id);
let (plain, html) =
format_error_notification(story_id, story_name.as_deref(), reason);
slog!("[bot] Sending error notification: {plain}");
for room_id in &get_room_ids() {
if let Err(e) = transport.send_message(room_id, &plain, &html).await {
slog!(
"[bot] Failed to send error notification to {room_id}: {e}"
);
slog!("[bot] Failed to send error notification to {room_id}: {e}");
}
}
}
@@ -303,11 +287,8 @@ pub fn spawn_notification_listener(
rate_limit_last_notified.insert(debounce_key, now);
let story_name = find_story_name_any_stage(&project_root, story_id);
let (plain, html) = format_rate_limit_notification(
story_id,
story_name.as_deref(),
agent_name,
);
let (plain, html) =
format_rate_limit_notification(story_id, story_name.as_deref(), agent_name);
slog!("[bot] Sending rate-limit notification: {plain}");
@@ -325,19 +306,14 @@ pub fn spawn_notification_listener(
ref reason,
}) => {
let story_name = find_story_name_any_stage(&project_root, story_id);
let (plain, html) = format_blocked_notification(
story_id,
story_name.as_deref(),
reason,
);
let (plain, html) =
format_blocked_notification(story_id, story_name.as_deref(), reason);
slog!("[bot] Sending blocked notification: {plain}");
for room_id in &get_room_ids() {
if let Err(e) = transport.send_message(room_id, &plain, &html).await {
slog!(
"[bot] Failed to send blocked notification to {room_id}: {e}"
);
slog!("[bot] Failed to send blocked notification to {room_id}: {e}");
}
}
}
@@ -362,14 +338,10 @@ pub fn spawn_notification_listener(
}
Ok(_) => {} // Ignore other events
Err(broadcast::error::RecvError::Lagged(n)) => {
slog!(
"[bot] Notification listener lagged, skipped {n} events"
);
slog!("[bot] Notification listener lagged, skipped {n} events");
}
Err(broadcast::error::RecvError::Closed) => {
slog!(
"[bot] Watcher channel closed, stopping notification listener"
);
slog!("[bot] Watcher channel closed, stopping notification listener");
// Flush any coalesced transitions that haven't fired yet.
for (item_id, (from_display, to_stage_key, story_name)) in
pending_transitions.drain()
@@ -383,12 +355,8 @@ pub fn spawn_notification_listener(
);
slog!("[bot] Sending stage notification: {plain}");
for room_id in &get_room_ids() {
if let Err(e) =
transport.send_message(room_id, &plain, &html).await
{
slog!(
"[bot] Failed to send notification to {room_id}: {e}"
);
if let Err(e) = transport.send_message(room_id, &plain, &html).await {
slog!("[bot] Failed to send notification to {room_id}: {e}");
}
}
}
@@ -402,8 +370,8 @@ pub fn spawn_notification_listener(
#[cfg(test)]
mod tests {
use super::*;
use async_trait::async_trait;
use crate::chat::MessageId;
use async_trait::async_trait;
// ── MockTransport ───────────────────────────────────────────────────────
@@ -417,18 +385,38 @@ mod tests {
impl MockTransport {
fn new() -> (Arc<Self>, CallLog) {
let calls: CallLog = Arc::new(std::sync::Mutex::new(Vec::new()));
(Arc::new(Self { calls: Arc::clone(&calls) }), calls)
(
Arc::new(Self {
calls: Arc::clone(&calls),
}),
calls,
)
}
}
#[async_trait]
impl crate::chat::ChatTransport for MockTransport {
async fn send_message(&self, room_id: &str, plain: &str, html: &str) -> Result<MessageId, String> {
self.calls.lock().unwrap().push((room_id.to_string(), plain.to_string(), html.to_string()));
async fn send_message(
&self,
room_id: &str,
plain: &str,
html: &str,
) -> Result<MessageId, String> {
self.calls.lock().unwrap().push((
room_id.to_string(),
plain.to_string(),
html.to_string(),
));
Ok("mock-msg-id".to_string())
}
async fn edit_message(&self, _room_id: &str, _id: &str, _plain: &str, _html: &str) -> Result<(), String> {
async fn edit_message(
&self,
_room_id: &str,
_id: &str,
_plain: &str,
_html: &str,
) -> Result<(), String> {
Ok(())
}
@@ -462,10 +450,12 @@ mod tests {
tmp.path().to_path_buf(),
);
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "365_story_rate_limit".to_string(),
agent_name: "coder-1".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "365_story_rate_limit".to_string(),
agent_name: "coder-1".to_string(),
})
.unwrap();
// Give the spawned task time to process the event.
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -475,9 +465,15 @@ mod tests {
let (room_id, plain, _html) = &calls[0];
assert_eq!(room_id, "!room123:example.org");
assert!(plain.contains("365"), "plain should contain story number");
assert!(plain.contains("Rate Limit Test Story"), "plain should contain story name");
assert!(
plain.contains("Rate Limit Test Story"),
"plain should contain story name"
);
assert!(plain.contains("coder-1"), "plain should contain agent name");
assert!(plain.contains("rate limit"), "plain should mention rate limit");
assert!(
plain.contains("rate limit"),
"plain should mention rate limit"
);
}
/// AC4: a second RateLimitWarning for the same agent within the debounce
@@ -498,16 +494,22 @@ mod tests {
// Send the same warning twice in rapid succession.
for _ in 0..2 {
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_debounce".to_string(),
agent_name: "coder-2".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_debounce".to_string(),
agent_name: "coder-2".to_string(),
})
.unwrap();
}
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
let calls = calls.lock().unwrap();
assert_eq!(calls.len(), 1, "Debounce should suppress the second notification");
assert_eq!(
calls.len(),
1,
"Debounce should suppress the second notification"
);
}
/// AC4 (corollary): warnings for different agents are NOT debounced against
@@ -526,19 +528,27 @@ mod tests {
tmp.path().to_path_buf(),
);
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_foo".to_string(),
agent_name: "coder-1".to_string(),
}).unwrap();
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_foo".to_string(),
agent_name: "coder-2".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_foo".to_string(),
agent_name: "coder-1".to_string(),
})
.unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_foo".to_string(),
agent_name: "coder-2".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
let calls = calls.lock().unwrap();
assert_eq!(calls.len(), 2, "Different agents should each trigger a notification");
assert_eq!(
calls.len(),
2,
"Different agents should each trigger a notification"
);
}
// ── dynamic room IDs (WhatsApp ambient_rooms pattern) ───────────────────
@@ -573,25 +583,40 @@ mod tests {
);
// Add a room after the listener is spawned (simulates a user messaging first).
rooms.lock().unwrap().insert("phone:+15551234567".to_string());
rooms
.lock()
.unwrap()
.insert("phone:+15551234567".to_string());
watcher_tx.send(WatcherEvent::WorkItem {
stage: "3_qa".to_string(),
item_id: "10_story_foo".to_string(),
action: "qa".to_string(),
commit_msg: "huskies: qa 10_story_foo".to_string(),
from_stage: Some("2_current".to_string()),
}).unwrap();
watcher_tx
.send(WatcherEvent::WorkItem {
stage: "3_qa".to_string(),
item_id: "10_story_foo".to_string(),
action: "qa".to_string(),
commit_msg: "huskies: qa 10_story_foo".to_string(),
from_stage: Some("2_current".to_string()),
})
.unwrap();
// Wait longer than STAGE_TRANSITION_DEBOUNCE (200ms) so the coalesced
// notification flushes.
tokio::time::sleep(std::time::Duration::from_millis(350)).await;
let calls = calls.lock().unwrap();
assert_eq!(calls.len(), 1, "Should deliver to the dynamically added room");
assert_eq!(
calls.len(),
1,
"Should deliver to the dynamically added room"
);
assert_eq!(calls[0].0, "phone:+15551234567");
assert!(calls[0].1.contains("10"), "plain should contain story number");
assert!(calls[0].1.contains("Foo Story"), "plain should contain story name");
assert!(
calls[0].1.contains("10"),
"plain should contain story number"
);
assert!(
calls[0].1.contains("Foo Story"),
"plain should contain story name"
);
}
/// When no rooms are registered (e.g. no WhatsApp users have messaged yet),
@@ -603,20 +628,17 @@ mod tests {
let (watcher_tx, watcher_rx) = broadcast::channel::<WatcherEvent>(16);
let (transport, calls) = MockTransport::new();
spawn_notification_listener(
transport,
Vec::new,
watcher_rx,
tmp.path().to_path_buf(),
);
spawn_notification_listener(transport, Vec::new, watcher_rx, tmp.path().to_path_buf());
watcher_tx.send(WatcherEvent::WorkItem {
stage: "3_qa".to_string(),
item_id: "10_story_foo".to_string(),
action: "qa".to_string(),
commit_msg: "huskies: qa 10_story_foo".to_string(),
from_stage: Some("2_current".to_string()),
}).unwrap();
watcher_tx
.send(WatcherEvent::WorkItem {
stage: "3_qa".to_string(),
item_id: "10_story_foo".to_string(),
action: "qa".to_string(),
commit_msg: "huskies: qa 10_story_foo".to_string(),
from_stage: Some("2_current".to_string()),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -660,11 +682,7 @@ mod tests {
#[test]
fn read_story_name_reads_from_front_matter() {
let tmp = tempfile::tempdir().unwrap();
let stage_dir = tmp
.path()
.join(".huskies")
.join("work")
.join("2_current");
let stage_dir = tmp.path().join(".huskies").join("work").join("2_current");
std::fs::create_dir_all(&stage_dir).unwrap();
std::fs::write(
stage_dir.join("42_story_my_feature.md"),
@@ -686,11 +704,7 @@ mod tests {
#[test]
fn read_story_name_returns_none_for_missing_name_field() {
let tmp = tempfile::tempdir().unwrap();
let stage_dir = tmp
.path()
.join(".huskies")
.join("work")
.join("2_current");
let stage_dir = tmp.path().join(".huskies").join("work").join("2_current");
std::fs::create_dir_all(&stage_dir).unwrap();
std::fs::write(
stage_dir.join("42_story_no_name.md"),
@@ -706,8 +720,11 @@ mod tests {
#[test]
fn format_error_notification_with_story_name() {
let (plain, html) =
format_error_notification("262_story_bot_errors", Some("Bot error notifications"), "merge conflict in src/main.rs");
let (plain, html) = format_error_notification(
"262_story_bot_errors",
Some("Bot error notifications"),
"merge conflict in src/main.rs",
);
assert_eq!(
plain,
"\u{274c} #262 Bot error notifications \u{2014} merge conflict in src/main.rs"
@@ -720,12 +737,8 @@ mod tests {
#[test]
fn format_error_notification_without_story_name_falls_back_to_item_id() {
let (plain, _html) =
format_error_notification("42_bug_fix_thing", None, "tests failed");
assert_eq!(
plain,
"\u{274c} #42 42_bug_fix_thing \u{2014} tests failed"
);
let (plain, _html) = format_error_notification("42_bug_fix_thing", None, "tests failed");
assert_eq!(plain, "\u{274c} #42 42_bug_fix_thing \u{2014} tests failed");
}
#[test]
@@ -759,8 +772,7 @@ mod tests {
#[test]
fn format_blocked_notification_falls_back_to_item_id() {
let (plain, _html) =
format_blocked_notification("42_story_thing", None, "empty diff");
let (plain, _html) = format_blocked_notification("42_story_thing", None, "empty diff");
assert_eq!(
plain,
"\u{1f6ab} #42 42_story_thing \u{2014} BLOCKED: empty diff"
@@ -792,10 +804,12 @@ mod tests {
tmp.path().to_path_buf(),
);
watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: "425_story_blocking_test".to_string(),
reason: "Retry limit exceeded (3/3) at coder stage".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::StoryBlocked {
story_id: "425_story_blocking_test".to_string(),
reason: "Retry limit exceeded (3/3) at coder stage".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -804,10 +818,22 @@ mod tests {
let (room_id, plain, html) = &calls[0];
assert_eq!(room_id, "!room123:example.org");
assert!(plain.contains("425"), "plain should contain story number");
assert!(plain.contains("Blocking Test Story"), "plain should contain story name");
assert!(plain.contains("BLOCKED"), "plain should contain BLOCKED label");
assert!(plain.contains("Retry limit exceeded"), "plain should contain the reason");
assert!(html.contains("BLOCKED"), "html should contain BLOCKED label");
assert!(
plain.contains("Blocking Test Story"),
"plain should contain story name"
);
assert!(
plain.contains("BLOCKED"),
"plain should contain BLOCKED label"
);
assert!(
plain.contains("Retry limit exceeded"),
"plain should contain the reason"
);
assert!(
html.contains("BLOCKED"),
"html should contain BLOCKED label"
);
}
/// StoryBlocked with no room registered should not panic.
@@ -818,17 +844,14 @@ mod tests {
let (watcher_tx, watcher_rx) = broadcast::channel::<WatcherEvent>(16);
let (transport, calls) = MockTransport::new();
spawn_notification_listener(
transport,
Vec::new,
watcher_rx,
tmp.path().to_path_buf(),
);
spawn_notification_listener(transport, Vec::new, watcher_rx, tmp.path().to_path_buf());
watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: "42_story_no_rooms".to_string(),
reason: "empty diff".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::StoryBlocked {
story_id: "42_story_no_rooms".to_string(),
reason: "empty diff".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -840,11 +863,8 @@ mod tests {
#[test]
fn format_rate_limit_notification_includes_agent_and_story() {
let (plain, html) = format_rate_limit_notification(
"365_story_my_feature",
Some("My Feature"),
"coder-2",
);
let (plain, html) =
format_rate_limit_notification("365_story_my_feature", Some("My Feature"), "coder-2");
assert_eq!(
plain,
"\u{26a0}\u{fe0f} #365 My Feature \u{2014} coder-2 hit an API rate limit"
@@ -857,8 +877,7 @@ mod tests {
#[test]
fn format_rate_limit_notification_falls_back_to_item_id() {
let (plain, _html) =
format_rate_limit_notification("42_story_thing", None, "coder-1");
let (plain, _html) = format_rate_limit_notification("42_story_thing", None, "coder-1");
assert_eq!(
plain,
"\u{26a0}\u{fe0f} #42 42_story_thing \u{2014} coder-1 hit an API rate limit"
@@ -869,12 +888,8 @@ mod tests {
#[test]
fn format_notification_done_stage_includes_party_emoji() {
let (plain, html) = format_stage_notification(
"353_story_done",
Some("Done Story"),
"Merge",
"Done",
);
let (plain, html) =
format_stage_notification("353_story_done", Some("Done Story"), "Merge", "Done");
assert_eq!(
plain,
"\u{1f389} #353 Done Story \u{2014} Merge \u{2192} Done"
@@ -887,12 +902,8 @@ mod tests {
#[test]
fn format_notification_non_done_stage_has_no_emoji() {
let (plain, _html) = format_stage_notification(
"42_story_thing",
Some("Some Story"),
"Backlog",
"Current",
);
let (plain, _html) =
format_stage_notification("42_story_thing", Some("Some Story"), "Backlog", "Current");
assert!(!plain.contains("\u{1f389}"));
}
@@ -916,26 +927,14 @@ mod tests {
#[test]
fn format_notification_without_story_name_falls_back_to_item_id() {
let (plain, _html) = format_stage_notification(
"42_bug_fix_thing",
None,
"Current",
"QA",
);
assert_eq!(
plain,
"#42 42_bug_fix_thing \u{2014} Current \u{2192} QA"
);
let (plain, _html) = format_stage_notification("42_bug_fix_thing", None, "Current", "QA");
assert_eq!(plain, "#42 42_bug_fix_thing \u{2014} Current \u{2192} QA");
}
#[test]
fn format_notification_non_numeric_id_uses_full_id() {
let (plain, _html) = format_stage_notification(
"abc_story_thing",
Some("Some Story"),
"QA",
"Merge",
);
let (plain, _html) =
format_stage_notification("abc_story_thing", Some("Some Story"), "QA", "Merge");
assert_eq!(
plain,
"#abc_story_thing Some Story \u{2014} QA \u{2192} Merge"
@@ -967,15 +966,21 @@ mod tests {
tmp.path().to_path_buf(),
);
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_suppress".to_string(),
agent_name: "coder-1".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_suppress".to_string(),
agent_name: "coder-1".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
let calls = calls.lock().unwrap();
assert_eq!(calls.len(), 0, "RateLimitWarning should be suppressed when rate_limit_notifications = false");
assert_eq!(
calls.len(),
0,
"RateLimitWarning should be suppressed when rate_limit_notifications = false"
);
}
/// RateLimitHardBlock is never posted to Matrix — it is logged server-side only.
@@ -994,11 +999,13 @@ mod tests {
);
let reset_at = chrono::Utc::now() + chrono::Duration::hours(1);
watcher_tx.send(WatcherEvent::RateLimitHardBlock {
story_id: "42_story_hard_block".to_string(),
agent_name: "coder-1".to_string(),
reset_at,
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitHardBlock {
story_id: "42_story_hard_block".to_string(),
agent_name: "coder-1".to_string(),
reset_at,
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -1028,10 +1035,12 @@ mod tests {
tmp.path().to_path_buf(),
);
watcher_tx.send(WatcherEvent::StoryBlocked {
story_id: "42_story_blocked".to_string(),
reason: "retry limit exceeded".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::StoryBlocked {
story_id: "42_story_blocked".to_string(),
reason: "retry limit exceeded".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
@@ -1064,10 +1073,12 @@ mod tests {
);
// First warning is sent.
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_reload".to_string(),
agent_name: "coder-1".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_reload".to_string(),
agent_name: "coder-1".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
// Disable notifications and trigger hot-reload.
@@ -1080,14 +1091,20 @@ mod tests {
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
// Second warning (different agent to bypass debounce) should be suppressed.
watcher_tx.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_reload".to_string(),
agent_name: "coder-2".to_string(),
}).unwrap();
watcher_tx
.send(WatcherEvent::RateLimitWarning {
story_id: "42_story_reload".to_string(),
agent_name: "coder-2".to_string(),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
let calls = calls.lock().unwrap();
assert_eq!(calls.len(), 1, "Only the first warning should be sent; second should be suppressed after hot-reload");
assert_eq!(
calls.len(),
1,
"Only the first warning should be sent; second should be suppressed after hot-reload"
);
}
// ── Bug 549: synthetic events with from_stage=None must not notify ──────
@@ -1111,19 +1128,22 @@ mod tests {
);
// Synthetic reassign event within 4_merge — no actual stage change.
watcher_tx.send(WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: "549_story_skip_qa".to_string(),
action: "reassign".to_string(),
commit_msg: String::new(),
from_stage: None,
}).unwrap();
watcher_tx
.send(WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: "549_story_skip_qa".to_string(),
action: "reassign".to_string(),
commit_msg: String::new(),
from_stage: None,
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(350)).await;
let calls = calls.lock().unwrap();
assert_eq!(
calls.len(), 0,
calls.len(),
0,
"Synthetic events with from_stage=None must not generate notifications"
);
}
@@ -1152,13 +1172,15 @@ mod tests {
);
// Story skips QA: from_stage is 2_current, not 3_qa.
watcher_tx.send(WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: "549_story_skip_qa".to_string(),
action: "merge".to_string(),
commit_msg: "huskies: merge 549_story_skip_qa".to_string(),
from_stage: Some("2_current".to_string()),
}).unwrap();
watcher_tx
.send(WatcherEvent::WorkItem {
stage: "4_merge".to_string(),
item_id: "549_story_skip_qa".to_string(),
action: "merge".to_string(),
commit_msg: "huskies: merge 549_story_skip_qa".to_string(),
from_stage: Some("2_current".to_string()),
})
.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(350)).await;
@@ -73,11 +73,8 @@ mod tests {
#[test]
fn extract_with_full_user_id() {
let cmd = extract_rebuild_command(
"@timmy:home.local rebuild",
"Timmy",
"@timmy:home.local",
);
let cmd =
extract_rebuild_command("@timmy:home.local rebuild", "Timmy", "@timmy:home.local");
assert_eq!(cmd, Some(RebuildCommand));
}
@@ -50,7 +50,9 @@ pub async fn handle_reset(
) -> String {
{
let mut guard = history.lock().await;
let conv = guard.entry(room_id.clone()).or_insert_with(RoomConversation::default);
let conv = guard
.entry(room_id.clone())
.or_insert_with(RoomConversation::default);
conv.session_id = None;
conv.entries.clear();
crate::chat::transport::matrix::bot::save_history(project_root, &guard);
@@ -75,8 +77,7 @@ mod tests {
#[test]
fn extract_with_full_user_id() {
let cmd =
extract_reset_command("@timmy:home.local reset", "Timmy", "@timmy:home.local");
let cmd = extract_reset_command("@timmy:home.local reset", "Timmy", "@timmy:home.local");
assert_eq!(cmd, Some(ResetCommand));
}
@@ -115,21 +116,27 @@ mod tests {
let room_id: OwnedRoomId = "!test:example.com".parse().unwrap();
let history: ConversationHistory = Arc::new(TokioMutex::new({
let mut m = HashMap::new();
m.insert(room_id.clone(), RoomConversation {
session_id: Some("old-session-id".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: "@alice:example.com".to_string(),
content: "previous message".to_string(),
}],
});
m.insert(
room_id.clone(),
RoomConversation {
session_id: Some("old-session-id".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: "@alice:example.com".to_string(),
content: "previous message".to_string(),
}],
},
);
m
}));
let tmp = tempfile::tempdir().unwrap();
let response = handle_reset("Timmy", &room_id, &history, tmp.path()).await;
assert!(response.contains("reset"), "response should mention reset: {response}");
assert!(
response.contains("reset"),
"response should mention reset: {response}"
);
let guard = history.lock().await;
let conv = guard.get(&room_id).unwrap();
@@ -107,9 +107,7 @@ pub async fn handle_rmtree(
return format!("Failed to remove worktree for story {story_number}: {e}");
}
crate::slog!(
"[matrix-bot] rmtree command: removed worktree for {story_id} (bot={bot_name})"
);
crate::slog!("[matrix-bot] rmtree command: removed worktree for {story_id} (bot={bot_name})");
let mut response = format!("Removed worktree for **{story_id}**.");
if !stopped_agents.is_empty() {
@@ -131,11 +129,8 @@ mod tests {
#[test]
fn extract_with_full_user_id() {
let cmd = extract_rmtree_command(
"@timmy:home.local rmtree 42",
"Timmy",
"@timmy:home.local",
);
let cmd =
extract_rmtree_command("@timmy:home.local rmtree 42", "Timmy", "@timmy:home.local");
assert_eq!(
cmd,
Some(RmtreeCommand::Rmtree {
@@ -84,9 +84,7 @@ pub async fn handle_start(
match crate::chat::lookup::find_story_by_number(project_root, story_number) {
Some(found) => found,
None => {
return format!(
"No story, bug, or spike with number **{story_number}** found."
);
return format!("No story, bug, or spike with number **{story_number}** found.");
}
};
@@ -115,7 +113,13 @@ pub async fn handle_start(
);
match agents
.start_agent(project_root, &story_id, resolved_agent.as_deref(), None, None)
.start_agent(
project_root,
&story_id,
resolved_agent.as_deref(),
None,
None,
)
.await
{
Ok(info) => {
@@ -231,7 +235,14 @@ mod tests {
async fn handle_start_returns_not_found_for_unknown_number() {
let tmp = tempfile::tempdir().unwrap();
let project_root = tmp.path();
for stage in &["1_backlog", "2_current", "3_qa", "4_merge", "5_done", "6_archived"] {
for stage in &[
"1_backlog",
"2_current",
"3_qa",
"4_merge",
"5_done",
"6_archived",
] {
std::fs::create_dir_all(project_root.join(".huskies").join("work").join(stage))
.unwrap();
}
@@ -276,7 +287,8 @@ mod tests {
"response must not say 'Failed' when coders are busy: {response}"
);
assert!(
response.to_lowercase().contains("queue") || response.to_lowercase().contains("available"),
response.to_lowercase().contains("queue")
|| response.to_lowercase().contains("available"),
"response must mention queued/available state: {response}"
);
}
@@ -1,21 +1,21 @@
//! Slack incoming message dispatch and slash command handling.
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
use tokio::sync::{Mutex as TokioMutex, oneshot};
use serde::{Deserialize, Serialize};
use super::format::markdown_to_slack;
use super::history::{SlackConversationHistory, save_slack_history};
use super::meta::SlackTransport;
use crate::agents::AgentPool;
use crate::chat::ChatTransport;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use crate::chat::util::is_permission_approval;
use crate::slog;
use crate::chat::ChatTransport;
use crate::http::context::{PermissionDecision, PermissionForward};
use super::meta::SlackTransport;
use super::history::{SlackConversationHistory, save_slack_history};
use super::format::markdown_to_slack;
use crate::slog;
// ── Slash command types ─────────────────────────────────────────────────
@@ -81,8 +81,7 @@ pub struct SlackWebhookContext {
/// Permission requests from the MCP `prompt_permission` tool arrive here.
pub perm_rx: Arc<TokioMutex<tokio::sync::mpsc::UnboundedReceiver<PermissionForward>>>,
/// Pending permission replies keyed by channel ID.
pub pending_perm_replies:
Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
pub pending_perm_replies: Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
/// Seconds before an unanswered permission prompt is auto-denied.
pub permission_timeout_secs: u64,
}
@@ -154,8 +153,11 @@ pub(super) async fn handle_incoming_message(
}
HtopCommand::Start { duration_secs } => {
// On Slack, htop uses native message editing for live updates.
let snapshot =
crate::chat::transport::matrix::htop::build_htop_message(&ctx.agents, 0, duration_secs);
let snapshot = crate::chat::transport::matrix::htop::build_htop_message(
&ctx.agents,
0,
duration_secs,
);
let snapshot = markdown_to_slack(&snapshot);
let msg_id = match ctx.transport.send_message(channel, &snapshot, "").await {
Ok(id) => id,
@@ -179,9 +181,7 @@ pub(super) async fn handle_incoming_message(
duration_secs,
);
let updated = markdown_to_slack(&updated);
if let Err(e) =
transport.edit_message(&ch, &msg_id, &updated, "").await
{
if let Err(e) = transport.edit_message(&ch, &msg_id, &updated, "").await {
slog!("[slack] Failed to edit htop message: {e}");
break;
}
@@ -245,7 +245,9 @@ pub(super) async fn handle_incoming_message(
) {
let response = match rmtree_cmd {
crate::chat::transport::matrix::rmtree::RmtreeCommand::Rmtree { story_number } => {
slog!("[slack] Handling rmtree command from {user} in {channel}: story {story_number}");
slog!(
"[slack] Handling rmtree command from {user} in {channel}: story {story_number}"
);
crate::chat::transport::matrix::rmtree::handle_rmtree(
&ctx.bot_name,
&story_number,
@@ -273,7 +275,9 @@ pub(super) async fn handle_incoming_message(
slog!("[slack] Handling reset command from {user} in {channel}");
{
let mut guard = ctx.history.lock().await;
let conv = guard.entry(channel.to_string()).or_insert_with(RoomConversation::default);
let conv = guard
.entry(channel.to_string())
.or_insert_with(RoomConversation::default);
conv.session_id = None;
conv.entries.clear();
save_slack_history(&ctx.project_root, &guard);
@@ -295,7 +299,9 @@ pub(super) async fn handle_incoming_message(
story_number,
agent_hint,
} => {
slog!("[slack] Handling start command from {user} in {channel}: story {story_number}");
slog!(
"[slack] Handling start command from {user} in {channel}: story {story_number}"
);
crate::chat::transport::matrix::start::handle_start(
&ctx.bot_name,
&story_number,
@@ -320,8 +326,13 @@ pub(super) async fn handle_incoming_message(
&ctx.bot_user_id,
) {
let response = match assign_cmd {
crate::chat::transport::matrix::assign::AssignCommand::Assign { story_number, model } => {
slog!("[slack] Handling assign command from {user} in {channel}: story {story_number} model {model}");
crate::chat::transport::matrix::assign::AssignCommand::Assign {
story_number,
model,
} => {
slog!(
"[slack] Handling assign command from {user} in {channel}: story {story_number} model {model}"
);
crate::chat::transport::matrix::assign::handle_assign(
&ctx.bot_name,
&story_number,
@@ -352,17 +363,15 @@ async fn handle_llm_message(
user: &str,
user_message: &str,
) {
use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult};
use crate::chat::util::drain_complete_paragraphs;
use crate::llm::providers::claude_code::{ClaudeCodeProvider, ClaudeCodeResult};
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::watch;
// Look up existing session ID for this channel.
let resume_session_id: Option<String> = {
let guard = ctx.history.lock().await;
guard
.get(channel)
.and_then(|conv| conv.session_id.clone())
guard.get(channel).and_then(|conv| conv.session_id.clone())
};
let bot_name = &ctx.bot_name;
@@ -383,7 +392,9 @@ async fn handle_llm_message(
let post_task = tokio::spawn(async move {
while let Some(chunk) = msg_rx.recv().await {
let formatted = markdown_to_slack(&chunk);
let _ = post_transport.send_message(&post_channel, &formatted, "").await;
let _ = post_transport
.send_message(&post_channel, &formatted, "")
.await;
}
});
@@ -472,9 +483,7 @@ async fn handle_llm_message(
let last_text = messages
.iter()
.rev()
.find(|m| {
m.role == crate::llm::types::Role::Assistant && !m.content.is_empty()
})
.find(|m| m.role == crate::llm::types::Role::Assistant && !m.content.is_empty())
.map(|m| m.content.clone())
.unwrap_or_default();
if !last_text.is_empty() {
@@ -559,7 +568,10 @@ mod tests {
#[test]
fn slash_command_maps_status() {
assert_eq!(slash_command_to_bot_keyword("/huskies-status"), Some("status"));
assert_eq!(
slash_command_to_bot_keyword("/huskies-status"),
Some("status")
);
}
#[test]
@@ -600,9 +612,8 @@ mod tests {
response_type: "ephemeral",
text: "hello".to_string(),
};
let json: serde_json::Value = serde_json::from_str(
&serde_json::to_string(&resp).unwrap()
).unwrap();
let json: serde_json::Value =
serde_json::from_str(&serde_json::to_string(&resp).unwrap()).unwrap();
assert_eq!(json["response_type"], "ephemeral");
assert_eq!(json["text"], "hello");
}
@@ -642,7 +653,10 @@ mod tests {
};
let result = try_handle_command(&dispatch, &synthetic);
assert!(result.is_some(), "status slash command should produce output via registry");
assert!(
result.is_some(),
"status slash command should produce output via registry"
);
assert!(result.unwrap().contains("Pipeline Status"));
}
@@ -671,7 +685,10 @@ mod tests {
let result = try_handle_command(&dispatch, &synthetic);
assert!(result.is_some(), "show slash command should produce output");
let output = result.unwrap();
assert!(output.contains("999"), "show output should reference the story number: {output}");
assert!(
output.contains("999"),
"show output should reference the story number: {output}"
);
}
// ── rebuild command extraction ─────────────────────────────────────
@@ -704,7 +721,10 @@ mod tests {
"Huskies",
"slack-bot",
);
assert!(result.is_none(), "'status' should not be recognised as rebuild");
assert!(
result.is_none(),
"'status' should not be recognised as rebuild"
);
}
// ── reset command extraction ───────────────────────────────────────
@@ -731,21 +751,26 @@ mod tests {
#[tokio::test]
async fn reset_command_clears_slack_session() {
use crate::chat::transport::matrix::{
ConversationEntry, ConversationRole, RoomConversation,
};
use std::sync::Arc;
use tokio::sync::Mutex as TokioMutex;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
let channel = "C01ABCDEF";
let history: SlackConversationHistory = Arc::new(TokioMutex::new({
let mut m = HashMap::new();
m.insert(channel.to_string(), RoomConversation {
session_id: Some("old-session".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: "U01GHIJKL".to_string(),
content: "previous message".to_string(),
}],
});
m.insert(
channel.to_string(),
RoomConversation {
session_id: Some("old-session".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: "U01GHIJKL".to_string(),
content: "previous message".to_string(),
}],
},
);
m
}));
@@ -755,7 +780,9 @@ mod tests {
{
let mut guard = history.lock().await;
let conv = guard.entry(channel.to_string()).or_insert_with(RoomConversation::default);
let conv = guard
.entry(channel.to_string())
.or_insert_with(RoomConversation::default);
conv.session_id = None;
conv.entries.clear();
save_slack_history(tmp.path(), &guard);
@@ -862,6 +889,9 @@ mod tests {
"Timmy",
"@timmy:home.local",
);
assert!(result.is_none(), "'status' should not be recognised as assign on Slack");
assert!(
result.is_none(),
"'status' should not be recognised as assign on Slack"
);
}
}
+10 -6
@@ -20,10 +20,8 @@ pub fn markdown_to_slack(text: &str) -> String {
LazyLock::new(|| Regex::new(r"(?m)^#{1,6}\s+(.+)$").unwrap());
static RE_BOLD_ITALIC: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\*\*\*(.+?)\*\*\*").unwrap());
static RE_BOLD: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap());
static RE_STRIKETHROUGH: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap());
static RE_BOLD: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap());
static RE_STRIKETHROUGH: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap());
static RE_LINK: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\[([^\]]+)\]\(([^)]+)\)").unwrap());
@@ -105,8 +103,14 @@ mod tests {
fn slack_fenced_code_block_preserved() {
let input = "```rust\nlet x = 1;\n```";
let output = markdown_to_slack(input);
assert!(output.contains("let x = 1;"), "code block content must be preserved");
assert!(output.contains("```"), "fenced code delimiters must be preserved");
assert!(
output.contains("let x = 1;"),
"code block content must be preserved"
);
assert!(
output.contains("```"),
"fenced code delimiters must be preserved"
);
}
#[test]
+8 -26
@@ -104,9 +104,8 @@ impl ChatTransport for SlackTransport {
return Err(format!("Slack API returned {status}: {resp_text}"));
}
let parsed: SlackApiResponse = serde_json::from_str(&resp_text).map_err(|e| {
format!("Failed to parse Slack API response: {e} — body: {resp_text}")
})?;
let parsed: SlackApiResponse = serde_json::from_str(&resp_text)
.map_err(|e| format!("Failed to parse Slack API response: {e} — body: {resp_text}"))?;
if !parsed.ok {
return Err(format!(
@@ -190,10 +189,7 @@ mod tests {
.create_async()
.await;
let transport = SlackTransport::with_api_base(
"xoxb-test-token".to_string(),
server.url(),
);
let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url());
let result = transport
.send_message("C01ABCDEF", "hello", "<p>hello</p>")
@@ -212,14 +208,9 @@ mod tests {
.create_async()
.await;
let transport = SlackTransport::with_api_base(
"xoxb-test-token".to_string(),
server.url(),
);
let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url());
let result = transport
.send_message("C_INVALID", "hello", "")
.await;
let result = transport.send_message("C_INVALID", "hello", "").await;
assert!(result.is_err());
assert!(
result.unwrap_err().contains("channel_not_found"),
@@ -237,10 +228,7 @@ mod tests {
.create_async()
.await;
let transport = SlackTransport::with_api_base(
"xoxb-test-token".to_string(),
server.url(),
);
let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url());
let result = transport
.edit_message("C01ABCDEF", "1234567890.123456", "updated", "")
@@ -258,10 +246,7 @@ mod tests {
.create_async()
.await;
let transport = SlackTransport::with_api_base(
"xoxb-test-token".to_string(),
server.url(),
);
let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url());
let result = transport
.edit_message("C01ABCDEF", "bad-ts", "updated", "")
@@ -287,10 +272,7 @@ mod tests {
.create_async()
.await;
let transport = SlackTransport::with_api_base(
"xoxb-test-token".to_string(),
server.url(),
);
let transport = SlackTransport::with_api_base("xoxb-test-token".to_string(), server.url());
let result = transport.send_message("C01ABCDEF", "hello", "").await;
assert!(result.is_err());
+17 -29
@@ -12,15 +12,15 @@ pub mod history;
pub mod meta;
pub mod verify;
pub use commands::SlackWebhookContext;
pub use format::markdown_to_slack;
pub use history::load_slack_history;
pub use meta::SlackTransport;
pub use format::markdown_to_slack;
pub use commands::SlackWebhookContext;
use serde::Deserialize;
use poem::{Request, Response, handler, http::StatusCode};
use crate::slog;
use poem::{Request, Response, handler, http::StatusCode};
// ── Slack Events API types ──────────────────────────────────────────────
@@ -71,10 +71,7 @@ pub async fn webhook_receive(
.header("X-Slack-Request-Timestamp")
.unwrap_or("")
.to_string();
let signature = req
.header("X-Slack-Signature")
.unwrap_or("")
.to_string();
let signature = req.header("X-Slack-Signature").unwrap_or("").to_string();
let bytes = match body.into_bytes().await {
Ok(b) => b,
@@ -98,9 +95,7 @@ pub async fn webhook_receive(
Ok(e) => e,
Err(e) => {
slog!("[slack] Failed to parse webhook payload: {e}");
return Response::builder()
.status(StatusCode::OK)
.body("ok");
return Response::builder().status(StatusCode::OK).body("ok");
}
};
@@ -124,8 +119,7 @@ pub async fn webhook_receive(
&& event.r#type.as_deref() == Some("message")
&& event.subtype.is_none()
&& event.bot_id.is_none()
&& let (Some(channel), Some(user), Some(text)) =
(event.channel, event.user, event.text)
&& let (Some(channel), Some(user), Some(text)) = (event.channel, event.user, event.text)
&& ctx.channel_ids.contains(&channel)
{
let ctx = Arc::clone(*ctx);
@@ -135,9 +129,7 @@ pub async fn webhook_receive(
});
}
Response::builder()
.status(StatusCode::OK)
.body("ok")
Response::builder().status(StatusCode::OK).body("ok")
}
/// POST /webhook/slack/command — receive incoming Slack slash commands.
@@ -155,10 +147,7 @@ pub async fn slash_command_receive(
.header("X-Slack-Request-Timestamp")
.unwrap_or("")
.to_string();
let signature = req
.header("X-Slack-Signature")
.unwrap_or("")
.to_string();
let signature = req.header("X-Slack-Signature").unwrap_or("").to_string();
let bytes = match body.into_bytes().await {
Ok(b) => b,
@@ -178,16 +167,15 @@ pub async fn slash_command_receive(
.body("Invalid signature");
}
let payload: commands::SlackSlashCommandPayload =
match serde_urlencoded::from_bytes(&bytes) {
Ok(p) => p,
Err(e) => {
slog!("[slack] Failed to parse slash command payload: {e}");
return Response::builder()
.status(StatusCode::BAD_REQUEST)
.body("Bad request");
}
};
let payload: commands::SlackSlashCommandPayload = match serde_urlencoded::from_bytes(&bytes) {
Ok(p) => p,
Err(e) => {
slog!("[slack] Failed to parse slash command payload: {e}");
return Response::builder()
.status(StatusCode::BAD_REQUEST)
.body("Bad request");
}
};
slog!(
"[slack] Slash command from {}: {} {}",
+6 -1
@@ -215,7 +215,12 @@ mod tests {
let body = b"test body";
let sig = compute_test_signature("correct-secret", timestamp, body);
assert!(!verify_slack_signature("wrong-secret", timestamp, body, &sig));
assert!(!verify_slack_signature(
"wrong-secret",
timestamp,
body,
&sig
));
}
/// Helper to compute a test signature using our sha256 + HMAC implementation.
+60 -35
@@ -1,22 +1,24 @@
//! WhatsApp command handling — processes incoming WhatsApp messages as bot commands.
use std::sync::Arc;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use crate::chat::util::is_permission_approval;
use crate::http::context::{PermissionDecision};
use crate::slog;
use super::WhatsAppWebhookContext;
use super::format::{chunk_for_whatsapp, markdown_to_whatsapp};
use super::history::save_whatsapp_history;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use crate::chat::util::is_permission_approval;
use crate::http::context::PermissionDecision;
use crate::slog;
/// Dispatch an incoming WhatsApp message to bot commands.
pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender: &str, message: &str) {
pub(super) async fn handle_incoming_message(
ctx: &WhatsAppWebhookContext,
sender: &str,
message: &str,
) {
use crate::chat::commands::{CommandDispatch, try_handle_command};
// Allowlist check: when configured, silently ignore unauthorized senders.
if !ctx.allowed_phones.is_empty()
&& !ctx.allowed_phones.iter().any(|p| p == sender)
{
if !ctx.allowed_phones.is_empty() && !ctx.allowed_phones.iter().any(|p| p == sender) {
slog!("[whatsapp] Ignoring message from unauthorized sender: {sender}");
return;
}
@@ -173,7 +175,9 @@ pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender
slog!("[whatsapp] Handling reset command from {sender}");
{
let mut guard = ctx.history.lock().await;
let conv = guard.entry(sender.to_string()).or_insert_with(RoomConversation::default);
let conv = guard
.entry(sender.to_string())
.or_insert_with(RoomConversation::default);
conv.session_id = None;
conv.entries.clear();
save_whatsapp_history(&ctx.project_root, &guard);
@@ -219,8 +223,13 @@ pub(super) async fn handle_incoming_message(ctx: &WhatsAppWebhookContext, sender
&ctx.bot_user_id,
) {
let response = match assign_cmd {
crate::chat::transport::matrix::assign::AssignCommand::Assign { story_number, model } => {
slog!("[whatsapp] Handling assign command from {sender}: story {story_number} model {model}");
crate::chat::transport::matrix::assign::AssignCommand::Assign {
story_number,
model,
} => {
slog!(
"[whatsapp] Handling assign command from {sender}: story {story_number} model {model}"
);
crate::chat::transport::matrix::assign::handle_assign(
&ctx.bot_name,
&story_number,
@@ -385,9 +394,7 @@ async fn handle_llm_message(ctx: &WhatsAppWebhookContext, sender: &str, user_mes
Err(e) => {
slog!("[whatsapp] LLM error: {e}");
let err_msg = if let Some(url) = crate::llm::oauth::extract_login_url_from_error(&e) {
format!(
"Authentication required. Log in to Claude here: {url}"
)
format!("Authentication required. Log in to Claude here: {url}")
} else {
format!("Error processing your request: {e}")
};
@@ -434,20 +441,18 @@ async fn handle_llm_message(ctx: &WhatsAppWebhookContext, sender: &str, user_mes
#[cfg(test)]
mod tests {
use crate::agents::AgentPool;
use crate::io::watcher::WatcherEvent;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use super::super::history::{MessagingWindowTracker, WhatsAppConversationHistory};
use super::super::WhatsAppWebhookContext;
use super::super::history::{MessagingWindowTracker, WhatsAppConversationHistory};
use super::*;
use crate::agents::AgentPool;
use crate::chat::transport::matrix::{ConversationEntry, ConversationRole, RoomConversation};
use crate::io::watcher::WatcherEvent;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::Mutex as TokioMutex;
/// Build a minimal WhatsAppWebhookContext for allowlist tests.
fn make_ctx_with_allowlist(
allowed_phones: Vec<String>,
) -> Arc<WhatsAppWebhookContext> {
fn make_ctx_with_allowlist(allowed_phones: Vec<String>) -> Arc<WhatsAppWebhookContext> {
struct NullTransport;
#[async_trait::async_trait]
@@ -505,9 +510,15 @@ mod tests {
let err = "OAuth session expired or credentials missing. Please log in: http://localhost:3001/oauth/authorize";
let url = crate::llm::oauth::extract_login_url_from_error(err);
assert!(url.is_some(), "should extract URL from OAuth error");
let msg = format!("Authentication required. Log in to Claude here: {}", url.unwrap());
let msg = format!(
"Authentication required. Log in to Claude here: {}",
url.unwrap()
);
assert!(msg.contains("http://localhost:3001/oauth/authorize"));
assert!(!msg.contains('['), "WhatsApp message should not use Markdown link syntax");
assert!(
!msg.contains('['),
"WhatsApp message should not use Markdown link syntax"
);
}
#[test]
@@ -594,7 +605,10 @@ mod tests {
"Timmy",
"@timmy:home.local",
);
assert!(result.is_none(), "'status' should not be recognised as rebuild");
assert!(
result.is_none(),
"'status' should not be recognised as rebuild"
);
}
// ── reset command extraction ───────────────────────────────────────
@@ -624,14 +638,17 @@ mod tests {
let sender = "+15555550100";
let history: WhatsAppConversationHistory = Arc::new(TokioMutex::new({
let mut m = HashMap::new();
m.insert(sender.to_string(), RoomConversation {
session_id: Some("old-session".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: sender.to_string(),
content: "previous message".to_string(),
}],
});
m.insert(
sender.to_string(),
RoomConversation {
session_id: Some("old-session".to_string()),
entries: vec![ConversationEntry {
role: ConversationRole::User,
sender: sender.to_string(),
content: "previous message".to_string(),
}],
},
);
m
}));
@@ -641,7 +658,9 @@ mod tests {
{
let mut guard = history.lock().await;
let conv = guard.entry(sender.to_string()).or_insert_with(RoomConversation::default);
let conv = guard
.entry(sender.to_string())
.or_insert_with(RoomConversation::default);
conv.session_id = None;
conv.entries.clear();
save_whatsapp_history(tmp.path(), &guard);
@@ -748,7 +767,10 @@ mod tests {
"Timmy",
"@timmy:home.local",
);
assert!(result.is_none(), "'status' should not be recognised as rmtree");
assert!(
result.is_none(),
"'status' should not be recognised as rmtree"
);
}
// ── assign command extraction ──────────────────────────────────────
@@ -805,6 +827,9 @@ mod tests {
"Timmy",
"@timmy:home.local",
);
assert!(result.is_none(), "'status' should not be recognised as assign");
assert!(
result.is_none(),
"'status' should not be recognised as assign"
);
}
}
+3 -6
@@ -66,14 +66,11 @@ pub fn markdown_to_whatsapp(text: &str) -> String {
LazyLock::new(|| Regex::new(r"(?m)^#{1,6}\s+(.+)$").unwrap());
static RE_BOLD_ITALIC: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\*\*\*(.+?)\*\*\*").unwrap());
static RE_BOLD: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap());
static RE_STRIKETHROUGH: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap());
static RE_BOLD: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"\*\*(.+?)\*\*").unwrap());
static RE_STRIKETHROUGH: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"~~(.+?)~~").unwrap());
static RE_LINK: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"\[([^\]]+)\]\(([^)]+)\)").unwrap());
static RE_HR: LazyLock<Regex> =
LazyLock::new(|| Regex::new(r"(?m)^---+$").unwrap());
static RE_HR: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"(?m)^---+$").unwrap());
// 1. Protect fenced code blocks by replacing them with placeholders.
let mut code_blocks: Vec<String> = Vec::new();
+6 -2
@@ -2,9 +2,9 @@
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use super::history::MessagingWindowTracker;
use crate::chat::{ChatTransport, MessageId};
use crate::slog;
use super::history::MessagingWindowTracker;
// ── API base URLs (overridable for tests) ────────────────────────────────
@@ -55,7 +55,11 @@ impl WhatsAppTransport {
}
#[cfg(test)]
pub(crate) fn with_api_base(phone_number_id: String, access_token: String, api_base: String) -> Self {
pub(crate) fn with_api_base(
phone_number_id: String,
access_token: String,
api_base: String,
) -> Self {
Self {
phone_number_id,
access_token,
+3 -4
@@ -13,9 +13,9 @@ pub mod history;
pub mod meta;
pub mod twilio;
pub use history::{load_whatsapp_history, MessagingWindowTracker, WhatsAppConversationHistory};
pub use history::{MessagingWindowTracker, WhatsAppConversationHistory, load_whatsapp_history};
pub use meta::WhatsAppTransport;
pub use twilio::{extract_twilio_text_messages, TwilioWhatsAppTransport};
pub use twilio::{TwilioWhatsAppTransport, extract_twilio_text_messages};
use serde::Deserialize;
use std::collections::{HashMap, HashSet};
@@ -132,8 +132,7 @@ pub struct WhatsAppWebhookContext {
/// Permission requests from the MCP `prompt_permission` tool arrive here.
pub perm_rx: Arc<TokioMutex<tokio::sync::mpsc::UnboundedReceiver<PermissionForward>>>,
/// Pending permission replies keyed by sender phone number.
pub pending_perm_replies:
Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
pub pending_perm_replies: Arc<TokioMutex<HashMap<String, oneshot::Sender<PermissionDecision>>>>,
/// Seconds before an unanswered permission prompt is auto-denied.
pub permission_timeout_secs: u64,
}
+14 -11
@@ -202,9 +202,7 @@ pub fn normalize_line_breaks(text: &str) -> String {
return true;
}
// Horizontal rules: lines made entirely of -, *, or _ (at least 3 chars).
let all_hr_chars = trimmed
.chars()
.all(|c| matches!(c, '-' | '*' | '_' | ' '));
let all_hr_chars = trimmed.chars().all(|c| matches!(c, '-' | '*' | '_' | ' '));
let hr_char_count = trimmed.chars().filter(|c| !c.is_whitespace()).count();
all_hr_chars && hr_char_count >= 3
}
@@ -389,11 +387,7 @@ mod tests {
#[test]
fn strip_mention_emoji_display_name_no_separator() {
// Display name with emoji, no separator
let rest = strip_bot_mention(
"timmy ⚡️ ambient on",
"timmy ⚡️",
"@timmy:homeserver.local",
);
let rest = strip_bot_mention("timmy ⚡️ ambient on", "timmy ⚡️", "@timmy:homeserver.local");
assert_eq!(rest, "ambient on");
}
@@ -638,9 +632,18 @@ mod tests {
let output = normalize_line_breaks(input);
// Prose sentences before and after the code block get doubled.
// The code block itself is preserved.
assert!(output.contains("First sentence.\n\nSecond sentence."), "prose before code: {output}");
assert!(output.contains("```rust\nlet x = 1;\nlet y = 2;\n```"), "code block preserved: {output}");
assert!(output.contains("Third sentence.\n\nFourth sentence."), "prose after code: {output}");
assert!(
output.contains("First sentence.\n\nSecond sentence."),
"prose before code: {output}"
);
assert!(
output.contains("```rust\nlet x = 1;\nlet y = 2;\n```"),
"code block preserved: {output}"
);
assert!(
output.contains("Third sentence.\n\nFourth sentence."),
"prose after code: {output}"
);
}
#[test]
+45 -42
@@ -101,7 +101,6 @@ fn default_rate_limit_notifications() -> bool {
true
}
#[derive(Debug, Clone, Deserialize)]
#[allow(dead_code)]
pub struct ComponentConfig {
@@ -288,27 +287,28 @@ impl ProjectConfig {
// Parsed successfully but no agents — could be legacy or no agent section.
// Try legacy format.
if let Ok(legacy) = toml::from_str::<LegacyProjectConfig>(content)
&& let Some(agent) = legacy.agent {
slog!(
"[config] Warning: [agent] table is deprecated. \
&& let Some(agent) = legacy.agent
{
slog!(
"[config] Warning: [agent] table is deprecated. \
Use [[agent]] array format instead."
);
let config = ProjectConfig {
component: legacy.component,
agent: vec![agent],
watcher: legacy.watcher,
default_qa: legacy.default_qa,
default_coder_model: legacy.default_coder_model,
max_coders: legacy.max_coders,
max_retries: legacy.max_retries,
base_branch: legacy.base_branch,
rate_limit_notifications: legacy.rate_limit_notifications,
timezone: legacy.timezone,
rendezvous: None,
};
validate_agents(&config.agent)?;
return Ok(config);
}
);
let config = ProjectConfig {
component: legacy.component,
agent: vec![agent],
watcher: legacy.watcher,
default_qa: legacy.default_qa,
default_coder_model: legacy.default_coder_model,
max_coders: legacy.max_coders,
max_retries: legacy.max_retries,
base_branch: legacy.base_branch,
rate_limit_notifications: legacy.rate_limit_notifications,
timezone: legacy.timezone,
rendezvous: None,
};
validate_agents(&config.agent)?;
return Ok(config);
}
// No agent section at all
Ok(config)
}
@@ -411,10 +411,11 @@ impl ProjectConfig {
args.push(model.clone());
}
if let Some(ref tools) = agent.allowed_tools
&& !tools.is_empty() {
args.push("--allowedTools".to_string());
args.push(tools.join(","));
}
&& !tools.is_empty()
{
args.push("--allowedTools".to_string());
args.push(tools.join(","));
}
if let Some(turns) = agent.max_turns {
args.push("--max-turns".to_string());
args.push(turns.to_string());
@@ -443,19 +444,21 @@ fn validate_agents(agents: &[AgentConfig]) -> Result<(), String> {
return Err(format!("Duplicate agent name: '{}'", agent.name));
}
if let Some(budget) = agent.max_budget_usd
&& budget <= 0.0 {
return Err(format!(
"Agent '{}': max_budget_usd must be positive, got {budget}",
agent.name
));
}
&& budget <= 0.0
{
return Err(format!(
"Agent '{}': max_budget_usd must be positive, got {budget}",
agent.name
));
}
if let Some(turns) = agent.max_turns
&& turns == 0 {
return Err(format!(
"Agent '{}': max_turns must be positive, got 0",
agent.name
));
}
&& turns == 0
{
return Err(format!(
"Agent '{}': max_turns must be positive, got 0",
agent.name
));
}
if let Some(ref runtime) = agent.runtime {
match runtime.as_str() {
"claude-code" | "gemini" => {}
@@ -957,10 +960,7 @@ name = "coder"
runtime = "claude-code"
"#;
let config = ProjectConfig::parse(toml_str).unwrap();
assert_eq!(
config.agent[0].runtime,
Some("claude-code".to_string())
);
assert_eq!(config.agent[0].runtime, Some("claude-code".to_string()));
}
#[test]
@@ -1067,7 +1067,10 @@ prompt = "git difftool {{base_branch}}...HEAD"
name = "coder"
"#;
let config = ProjectConfig::parse(toml_str).unwrap();
assert!(config.rate_limit_notifications, "rate_limit_notifications should default to true");
assert!(
config.rate_limit_notifications,
"rate_limit_notifications should default to true"
);
}
#[test]
+75 -60
@@ -20,8 +20,8 @@ use bft_json_crdt::op::ROOT_ID;
use fastcrypto::ed25519::Ed25519KeyPair;
use fastcrypto::traits::ToFromBytes;
use serde_json::json;
use sqlx::sqlite::SqliteConnectOptions;
use sqlx::SqlitePool;
use sqlx::sqlite::SqliteConnectOptions;
use std::path::Path;
use tokio::sync::{broadcast, mpsc};
@@ -218,10 +218,9 @@ pub async fn init(db_path: &Path) -> Result<(), sqlx::Error> {
let mut crdt = BaseCrdt::<PipelineDoc>::new(&keypair);
// Replay persisted ops to reconstruct state.
let rows: Vec<(String,)> =
sqlx::query_as("SELECT op_json FROM crdt_ops ORDER BY rowid ASC")
.fetch_all(&pool)
.await?;
let rows: Vec<(String,)> = sqlx::query_as("SELECT op_json FROM crdt_ops ORDER BY rowid ASC")
.fetch_all(&pool)
.await?;
let mut all_ops_vec = Vec::with_capacity(rows.len());
for (op_json,) in &rows {
@@ -316,7 +315,13 @@ pub fn init_for_test() {
let keypair = make_keypair();
let crdt = BaseCrdt::<PipelineDoc>::new(&keypair);
let (persist_tx, _rx) = mpsc::unbounded_channel();
let state = CrdtState { crdt, keypair, index: HashMap::new(), node_index: HashMap::new(), persist_tx };
let state = CrdtState {
crdt,
keypair,
index: HashMap::new(),
node_index: HashMap::new(),
persist_tx,
};
let _ = lock.set(Mutex::new(state));
}
});
@@ -458,9 +463,7 @@ pub fn write_item(
});
}
if let Some(b) = blocked {
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].blocked.set(b)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].blocked.set(b));
}
if let Some(d) = depends_on {
apply_and_persist(&mut state, |s| {
@@ -473,14 +476,10 @@ pub fn write_item(
});
}
if let Some(ca) = claimed_at {
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].claimed_at.set(ca)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(ca));
}
if let Some(ma) = merged_at {
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].merged_at.set(ma)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].merged_at.set(ma));
}
// Broadcast a CrdtEvent if the stage actually changed.
@@ -514,9 +513,7 @@ pub fn write_item(
})
.into();
apply_and_persist(&mut state, |s| {
s.crdt.doc.items.insert(ROOT_ID, item_json)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items.insert(ROOT_ID, item_json));
// Rebuild index after insertion (indices may shift).
state.index = rebuild_index(&state.crdt);
@@ -561,11 +558,9 @@ pub fn apply_remote_op(op: SignedOp) -> bool {
let pre_stages: HashMap<String, String> = state
.index
.iter()
.filter_map(|(sid, &idx)| {
match state.crdt.doc.items[idx].stage.view() {
JsonValue::String(s) => Some((sid.clone(), s)),
_ => None,
}
.filter_map(|(sid, &idx)| match state.crdt.doc.items[idx].stage.view() {
JsonValue::String(s) => Some((sid.clone(), s)),
_ => None,
})
.collect();
@@ -668,9 +663,7 @@ pub fn write_claim(story_id: &str) -> bool {
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].claimed_by.set(node_id.clone())
});
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].claimed_at.set(now)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(now));
true
}
@@ -690,9 +683,7 @@ pub fn release_claim(story_id: &str) {
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].claimed_by.set(String::new())
});
apply_and_persist(&mut state, |s| {
s.crdt.doc.items[idx].claimed_at.set(0.0)
});
apply_and_persist(&mut state, |s| s.crdt.doc.items[idx].claimed_at.set(0.0));
}
/// Check if this node currently holds the claim on a pipeline item.
@@ -725,9 +716,7 @@ pub fn write_node_presence(node_id: &str, address: &str, last_seen: f64, alive:
apply_and_persist(&mut state, |s| {
s.crdt.doc.nodes[idx].last_seen.set(last_seen)
});
apply_and_persist(&mut state, |s| {
s.crdt.doc.nodes[idx].alive.set(alive)
});
apply_and_persist(&mut state, |s| s.crdt.doc.nodes[idx].alive.set(alive));
apply_and_persist(&mut state, |s| {
s.crdt.doc.nodes[idx].address.set(address.to_string())
});
@@ -741,9 +730,7 @@ pub fn write_node_presence(node_id: &str, address: &str, last_seen: f64, alive:
})
.into();
apply_and_persist(&mut state, |s| {
s.crdt.doc.nodes.insert(ROOT_ID, node_json)
});
apply_and_persist(&mut state, |s| s.crdt.doc.nodes.insert(ROOT_ID, node_json));
// Rebuild node index after insertion.
state.node_index = rebuild_node_index(&state.crdt);
@@ -1019,8 +1006,7 @@ pub fn read_item(story_id: &str) -> Option<PipelineItemView> {
/// or an `Err` if the CRDT layer isn't initialised or the story_id is
/// unknown to the in-memory state.
pub fn evict_item(story_id: &str) -> Result<(), String> {
let state_mutex = get_crdt()
.ok_or_else(|| "CRDT layer not initialised".to_string())?;
let state_mutex = get_crdt().ok_or_else(|| "CRDT layer not initialised".to_string())?;
let mut state = state_mutex
.lock()
.map_err(|e| format!("CRDT lock poisoned: {e}"))?;
@@ -1033,12 +1019,10 @@ pub fn evict_item(story_id: &str) -> Result<(), String> {
// Resolve the item's OpId before the closure (the closure will mutably
// borrow `state`, so we can't access `state.crdt.doc.items` from inside).
let item_id = state
.crdt
.doc
.items
.id_at(idx)
.ok_or_else(|| format!("Item index {idx} for '{story_id}' did not resolve to an OpId"))?;
let item_id =
state.crdt.doc.items.id_at(idx).ok_or_else(|| {
format!("Item index {idx} for '{story_id}' did not resolve to an OpId")
})?;
// Write the delete op via the existing apply_and_persist machinery.
// This signs the op, applies it to the in-memory CRDT (marking the item
@@ -1084,9 +1068,7 @@ fn extract_item_view(item: &PipelineItemCrdt) -> Option<PipelineItemView> {
_ => None,
};
let depends_on = match item.depends_on.view() {
JsonValue::String(s) if !s.is_empty() => {
serde_json::from_str::<Vec<u32>>(&s).ok()
}
JsonValue::String(s) if !s.is_empty() => serde_json::from_str::<Vec<u32>>(&s).ok(),
_ => None,
};
@@ -1142,9 +1124,9 @@ pub fn dep_is_done_crdt(dep_number: u32) -> bool {
pub fn dep_is_archived_crdt(dep_number: u32) -> bool {
let prefix = format!("{dep_number}_");
if let Some(items) = read_all_items() {
items.iter().any(|item| {
item.story_id.starts_with(&prefix) && item.stage == "6_archived"
})
items
.iter()
.any(|item| item.story_id.starts_with(&prefix) && item.stage == "6_archived")
} else {
false
}
@@ -1226,8 +1208,14 @@ mod tests {
assert_eq!(view.len(), 1);
let item = &crdt.doc.items[0];
assert_eq!(item.story_id.view(), JsonValue::String("10_story_test".to_string()));
assert_eq!(item.stage.view(), JsonValue::String("2_current".to_string()));
assert_eq!(
item.story_id.view(),
JsonValue::String("10_story_test".to_string())
);
assert_eq!(
item.stage.view(),
JsonValue::String("2_current".to_string())
);
}
#[test]
@@ -1252,7 +1240,10 @@ mod tests {
crdt.apply(insert_op);
// Update stage
let stage_op = crdt.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
let stage_op = crdt.doc.items[0]
.stage
.set("2_current".to_string())
.sign(&kp);
crdt.apply(stage_op);
assert_eq!(
@@ -1283,10 +1274,16 @@ mod tests {
let op1 = crdt1.doc.items.insert(ROOT_ID, item_json).sign(&kp);
crdt1.apply(op1.clone());
let op2 = crdt1.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
let op2 = crdt1.doc.items[0]
.stage
.set("2_current".to_string())
.sign(&kp);
crdt1.apply(op2.clone());
let op3 = crdt1.doc.items[0].name.set("Updated Name".to_string()).sign(&kp);
let op3 = crdt1.doc.items[0]
.name
.set("Updated Name".to_string())
.sign(&kp);
crdt1.apply(op3.clone());
// Replay ops on a fresh CRDT.
@@ -1568,7 +1565,11 @@ mod tests {
"claimed_at": 0.0,
})
.into();
let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
let op = crdt
.doc
.items
.insert(bft_json_crdt::op::ROOT_ID, item)
.sign(&kp);
// This uses the global state which may not be initialised in tests.
let _ = apply_remote_op(op);
}
@@ -1591,7 +1592,11 @@ mod tests {
"claimed_at": 0.0,
})
.into();
let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
let op = crdt
.doc
.items
.insert(bft_json_crdt::op::ROOT_ID, item)
.sign(&kp);
let json1 = serde_json::to_string(&op).unwrap();
let roundtripped: SignedOp = serde_json::from_str(&json1).unwrap();
@@ -1620,7 +1625,11 @@ mod tests {
"claimed_at": 0.0,
})
.into();
let op = crdt.doc.items.insert(bft_json_crdt::op::ROOT_ID, item).sign(&kp);
let op = crdt
.doc
.items
.insert(bft_json_crdt::op::ROOT_ID, item)
.sign(&kp);
tx.send(op.clone()).unwrap();
let received = rx.try_recv().unwrap();
@@ -1693,7 +1702,10 @@ mod tests {
// Now update the stage. The stage LwwRegisterCrdt for this item starts
// at our_seq=0, so this field op gets seq=1. Crucially: seq=1 < seq=6.
let idx = rebuild_index(&crdt)["511_story_target"];
let stage_op = crdt.doc.items[idx].stage.set("2_current".to_string()).sign(&kp);
let stage_op = crdt.doc.items[idx]
.stage
.set("2_current".to_string())
.sign(&kp);
crdt.apply(stage_op.clone());
// stage_op.inner.seq == 1
@@ -1808,8 +1820,11 @@ mod tests {
apply_and_persist(&mut state, |s| s.crdt.doc.items.insert(ROOT_ID, item_json));
let error_entries = crate::log_buffer::global()
.get_recent_entries(1000, None, Some(&crate::log_buffer::LogLevel::Error));
let error_entries = crate::log_buffer::global().get_recent_entries(
1000,
None,
Some(&crate::log_buffer::LogLevel::Error),
);
assert!(
error_entries.len() > before_errors,
+68 -53
@@ -408,7 +408,9 @@ mod tests {
// Serialise op1 into a SyncMessage::Op.
let op1_json = serde_json::to_string(&op1).unwrap();
let wire_msg = SyncMessage::Op { op: op1_json.clone() };
let wire_msg = SyncMessage::Op {
op: op1_json.clone(),
};
let wire_json = serde_json::to_string(&wire_msg).unwrap();
// ── Node B: receive the op through protocol ──
@@ -517,10 +519,7 @@ mod tests {
.sign(&kp);
crdt_a.apply(op2.clone());
let op3 = crdt_a.doc.items[0]
.stage
.set("3_qa".to_string())
.sign(&kp);
let op3 = crdt_a.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
crdt_a.apply(op3.clone());
// Serialise all ops as a bulk message (simulates partition heal).
@@ -623,7 +622,10 @@ name = "test"
// Simulate a clean reconnect.
consecutive_failures = 0;
assert_eq!(consecutive_failures, 0, "counter must reset to 0 on success");
assert_eq!(
consecutive_failures, 0,
"counter must reset to 0 on success"
);
// Next error is attempt 1 — well below the ERROR threshold.
consecutive_failures += 1;
@@ -685,7 +687,10 @@ name = "test"
assert_eq!(crdt.doc.items.view().len(), 1);
// Stage update also deduplicated correctly.
-let stage_op = crdt.doc.items[0].stage.set("2_current".to_string()).sign(&kp);
+let stage_op = crdt.doc.items[0]
+    .stage
+    .set("2_current".to_string())
+    .sign(&kp);
assert_eq!(crdt.apply(stage_op.clone()), OpState::Ok);
assert_eq!(
crdt.doc.items[0].stage.view(),
@@ -806,10 +811,7 @@ name = "test"
.set("2_current".to_string())
.sign(&kp);
crdt_a.apply(op2.clone());
-let op3 = crdt_a.doc.items[0]
-    .stage
-    .set("3_qa".to_string())
-    .sign(&kp);
+let op3 = crdt_a.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
crdt_a.apply(op3.clone());
// Receiver applies all ops in the correct order.
@@ -830,7 +832,7 @@ name = "test"
/// pending op is evicted (queue never grows beyond the cap).
#[test]
fn causal_queue_overflow_drops_oldest() {
-use bft_json_crdt::json_crdt::{BaseCrdt, OpState, CAUSAL_QUEUE_MAX};
+use bft_json_crdt::json_crdt::{BaseCrdt, CAUSAL_QUEUE_MAX, OpState};
use bft_json_crdt::keypair::make_keypair;
use bft_json_crdt::op::ROOT_ID;
use serde_json::json;
@@ -854,11 +856,7 @@ name = "test"
"claimed_at": 0.0,
})
.into();
-let phantom_op = source
-    .doc
-    .items
-    .insert(ROOT_ID, phantom_item)
-    .sign(&kp);
+let phantom_op = source.doc.items.insert(ROOT_ID, phantom_item).sign(&kp);
// Receiver never sees phantom_op, so any op declaring it as a dep will
// sit in the causal queue forever (until evicted by overflow).
@@ -871,9 +869,7 @@ name = "test"
for i in 0..CAUSAL_QUEUE_MAX + 5 {
let stage_name = format!("stage_{i}");
// Generate from source so seq numbers are valid.
-let op = source
-    .doc
-    .items[0]
+let op = source.doc.items[0]
.stage
.set(stage_name)
.sign_with_dependencies(&kp, vec![&phantom_op]);
@@ -1006,8 +1002,13 @@ name = "test"
.iter()
.filter_map(|item| {
if let JV::Object(m) = CrdtNode::view(item) {
-m.get("story_id")
-    .and_then(|s| if let JV::String(s) = s { Some(s.clone()) } else { None })
+m.get("story_id").and_then(|s| {
+    if let JV::String(s) = s {
+        Some(s.clone())
+    } else {
+        None
+    }
+})
} else {
None
}
@@ -1194,15 +1195,9 @@ name = "test"
.set("2_current".to_string())
.sign(&kp);
crdt.apply(op2.clone());
-let op3 = crdt.doc.items[0]
-    .stage
-    .set("3_qa".to_string())
-    .sign(&kp);
+let op3 = crdt.doc.items[0].stage.set("3_qa".to_string()).sign(&kp);
crdt.apply(op3.clone());
-let op4 = crdt.doc.items[0]
-    .stage
-    .set("4_merge".to_string())
-    .sign(&kp);
+let op4 = crdt.doc.items[0].stage.set("4_merge".to_string()).sign(&kp);
crdt.apply(op4.clone());
// Send more ops than the channel capacity without consuming.
@@ -1245,8 +1240,8 @@ name = "test"
use serde_json::json;
use std::sync::{Arc, Mutex};
use tokio::net::TcpListener;
-use tokio_tungstenite::{accept_async, connect_async};
use tokio_tungstenite::tungstenite::Message as TMsg;
+use tokio_tungstenite::{accept_async, connect_async};
use crate::crdt_state::PipelineDoc;
@@ -1271,7 +1266,9 @@ name = "test"
// Serialise A's full state as a bulk message.
let op1_json = serde_json::to_string(&op1).unwrap();
-let bulk_msg = SyncMessage::Bulk { ops: vec![op1_json] };
+let bulk_msg = SyncMessage::Bulk {
+    ops: vec![op1_json],
+};
let bulk_wire = serde_json::to_string(&bulk_msg).unwrap();
// ── Start Node A's WebSocket server on a random port ───────────────
@@ -1349,11 +1346,17 @@ name = "test"
// ── Assert convergence ─────────────────────────────────────────────
// Node B received Node A's item.
-assert_eq!(crdt_b.doc.items.view().len(), 2,
-    "Node B must see both items after sync");
-let has_a_item = crdt_b.doc.items.view().iter().any(|item| {
-    item.story_id.view() == JV::String("508_e2e_convergence".to_string())
-});
+assert_eq!(
+    crdt_b.doc.items.view().len(),
+    2,
+    "Node B must see both items after sync"
+);
+let has_a_item = crdt_b
+    .doc
+    .items
+    .view()
+    .iter()
+    .any(|item| item.story_id.view() == JV::String("508_e2e_convergence".to_string()));
assert!(has_a_item, "Node B must have Node A's item");
// Node A received Node B's op via the WebSocket.
@@ -1378,8 +1381,8 @@ name = "test"
use futures::{SinkExt, StreamExt};
use serde_json::json;
use tokio::net::TcpListener;
-use tokio_tungstenite::{accept_async, connect_async};
use tokio_tungstenite::tungstenite::Message as TMsg;
+use tokio_tungstenite::{accept_async, connect_async};
use crate::crdt_state::PipelineDoc;
@@ -1482,10 +1485,7 @@ name = "test"
}
// B sends its bulk state to A.
-sink_b
-    .send(TMsg::Text(b_bulk_wire.into()))
-    .await
-    .unwrap();
+sink_b.send(TMsg::Text(b_bulk_wire.into())).await.unwrap();
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
@@ -1504,26 +1504,41 @@ name = "test"
// ── Assert convergence ─────────────────────────────────────────────
// Both nodes must have 2 items.
-assert_eq!(crdt_a.doc.items.view().len(), 2,
-    "A must have 2 items after healing");
-assert_eq!(crdt_b.doc.items.view().len(), 2,
-    "B must have 2 items after healing");
+assert_eq!(
+    crdt_a.doc.items.view().len(),
+    2,
+    "A must have 2 items after healing"
+);
+assert_eq!(
+    crdt_b.doc.items.view().len(),
+    2,
+    "B must have 2 items after healing"
+);
// A must see B's story.
-let b_story_on_a = crdt_a.doc.items.view().iter().any(|item| {
-    item.story_id.view() == JV::String("508_heal_b".to_string())
-});
+let b_story_on_a = crdt_a
+    .doc
+    .items
+    .view()
+    .iter()
+    .any(|item| item.story_id.view() == JV::String("508_heal_b".to_string()));
assert!(b_story_on_a, "A must have B's story after healing");
// B must see A's stage advance.
-let a_story_on_b = crdt_b.doc.items.view().iter().any(|item| {
-    item.story_id.view() == JV::String("508_heal_a".to_string())
-});
+let a_story_on_b = crdt_b
+    .doc
+    .items
+    .view()
+    .iter()
+    .any(|item| item.story_id.view() == JV::String("508_heal_a".to_string()));
assert!(a_story_on_b, "B must have A's story after healing");
// CRDT views must be byte-identical (convergence).
let view_a = serde_json::to_string(&CrdtNode::view(&crdt_a.doc.items)).unwrap();
let view_b = serde_json::to_string(&CrdtNode::view(&crdt_b.doc.items)).unwrap();
-assert_eq!(view_a, view_b, "Both nodes must converge to identical state");
+assert_eq!(
+    view_a, view_b,
+    "Both nodes must converge to identical state"
+);
}
}
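The byte-identical-view assertions above rely on CRDT merge being commutative and idempotent: both replicas reach the same state no matter what order ops arrive in. A minimal std-only sketch of that property, using a toy grow-only set as a stand-in rather than the real `bft_json_crdt` types:

```rust
use std::collections::BTreeSet;

// Toy G-Set CRDT: "apply" is set insertion, which is commutative and
// idempotent, so replicas converge regardless of delivery order.
#[derive(Default)]
struct GSet {
    items: BTreeSet<String>,
}

impl GSet {
    fn apply(&mut self, op: &str) {
        self.items.insert(op.to_string());
    }
    fn view(&self) -> Vec<String> {
        self.items.iter().cloned().collect()
    }
}

fn main() {
    let ops = ["508_heal_a", "508_heal_b"];
    let (mut node_a, mut node_b) = (GSet::default(), GSet::default());
    // Node A applies in order; Node B applies in reverse (partition heal).
    for op in ops {
        node_a.apply(op);
    }
    for op in ops.iter().rev() {
        node_b.apply(op);
    }
    // Same assertion style as the tests above: views must be identical.
    assert_eq!(node_a.view(), node_b.view());
}
```

The real CRDT additionally signs ops and tracks causal dependencies, but the convergence check at the end of each test above is exactly this comparison.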
+6 -6
@@ -121,7 +121,11 @@ mod tests {
// ── helpers ──────────────────────────────────────────────────────────────
/// Build a fresh CRDT and return its keypair along with a signed insert op.
-fn make_insert_op() -> (BaseCrdt<PipelineDoc>, bft_json_crdt::keypair::Ed25519KeyPair, SignedOp) {
+fn make_insert_op() -> (
+    BaseCrdt<PipelineDoc>,
+    bft_json_crdt::keypair::Ed25519KeyPair,
+    SignedOp,
+) {
let kp = make_keypair();
let mut crdt = BaseCrdt::<PipelineDoc>::new(&kp);
let item: JsonValue = json!({
@@ -172,11 +176,7 @@ mod tests {
fn roundtrip_delete_op() {
let (mut crdt, kp, insert_op) = make_insert_op();
// Delete the inserted item.
-let delete_op = crdt
-    .doc
-    .items
-    .delete(insert_op.inner.id)
-    .sign(&kp);
+let delete_op = crdt.doc.items.delete(insert_op.inner.id).sign(&kp);
crdt.apply(delete_op.clone());
let bytes = encode(&delete_op);
+37 -45
@@ -16,8 +16,8 @@
/// no filesystem scan is needed after migration.
use crate::io::story_metadata::parse_front_matter;
use crate::slog;
-use sqlx::sqlite::SqliteConnectOptions;
use sqlx::SqlitePool;
+use sqlx::sqlite::SqliteConnectOptions;
use std::collections::HashMap;
use std::path::Path;
use std::sync::{Mutex, OnceLock};
@@ -86,14 +86,18 @@ pub fn read_content(story_id: &str) -> Option<String> {
///
/// Updates the in-memory store immediately.
pub fn write_content(story_id: &str, content: &str) {
-if let Some(store) = get_content_store() && let Ok(mut map) = store.lock() {
+if let Some(store) = get_content_store()
+    && let Ok(mut map) = store.lock()
+{
map.insert(story_id.to_string(), content.to_string());
}
}
/// Remove a story's content from the in-memory store.
pub fn delete_content(story_id: &str) {
-if let Some(store) = get_content_store() && let Ok(mut map) = store.lock() {
+if let Some(store) = get_content_store()
+    && let Ok(mut map) = store.lock()
+{
map.remove(story_id);
}
}
@@ -103,7 +107,9 @@ pub fn delete_content(story_id: &str) {
/// Safe to call multiple times — the `OnceLock` is set at most once.
pub fn ensure_content_store() {
#[cfg(not(test))]
-{ let _ = CONTENT_STORE.set(Mutex::new(HashMap::new())); }
+{
+    let _ = CONTENT_STORE.set(Mutex::new(HashMap::new()));
+}
#[cfg(test)]
{
@@ -222,11 +228,7 @@ pub async fn init(db_path: &Path) -> Result<(), sqlx::Error> {
///
/// This is the primary write path for the DB-backed pipeline. It updates
/// the CRDT, the in-memory content store, and the SQLite shadow table.
-pub fn write_item_with_content(
-    story_id: &str,
-    stage: &str,
-    content: &str,
-) {
+pub fn write_item_with_content(story_id: &str, stage: &str, content: &str) {
let (name, agent, retry_count, blocked, depends_on) = match parse_front_matter(content) {
Ok(meta) => (
meta.name,
@@ -389,7 +391,9 @@ pub fn next_item_number() -> u32 {
.chars()
.take_while(|c| c.is_ascii_digit())
.collect();
-if let Ok(n) = num_str.parse::<u32>() && n > max_num {
+if let Ok(n) = num_str.parse::<u32>()
+    && n > max_num
+{
max_num = n;
}
}
@@ -397,7 +401,9 @@ pub fn next_item_number() -> u32 {
// Also scan the content store (might have items not yet in CRDT).
for id in all_content_ids() {
let num_str: String = id.chars().take_while(|c| c.is_ascii_digit()).collect();
-if let Ok(n) = num_str.parse::<u32>() && n > max_num {
+if let Ok(n) = num_str.parse::<u32>()
+    && n > max_num
+{
max_num = n;
}
}
@@ -405,7 +411,6 @@ pub fn next_item_number() -> u32 {
max_num + 1
}
#[cfg(test)]
mod tests {
use super::*;
@@ -427,10 +432,7 @@ mod tests {
.filename(&db_path)
.create_if_missing(true);
let pool = SqlitePool::connect_with(options).await.unwrap();
-sqlx::migrate!("./migrations")
-    .run(&pool)
-    .await
-    .unwrap();
+sqlx::migrate!("./migrations").run(&pool).await.unwrap();
// Write a story file in a temp stage dir.
let stage_dir = tmp.path().join("2_current");
@@ -472,13 +474,12 @@ mod tests {
.unwrap();
// Query back and verify.
-let row: (String, Option<String>, String) = sqlx::query_as(
-    "SELECT id, name, stage FROM pipeline_items WHERE id = ?1",
-)
-.bind("10_story_shadow_test")
-.fetch_one(&pool)
-.await
-.unwrap();
+let row: (String, Option<String>, String) =
+    sqlx::query_as("SELECT id, name, stage FROM pipeline_items WHERE id = ?1")
+        .bind("10_story_shadow_test")
+        .fetch_one(&pool)
+        .await
+        .unwrap();
assert_eq!(row.0, "10_story_shadow_test");
assert_eq!(row.1.as_deref(), Some("Shadow Test"));
@@ -512,10 +513,7 @@ mod tests {
.filename(&db_path)
.create_if_missing(true);
let pool = SqlitePool::connect_with(options).await.unwrap();
-sqlx::migrate!("./migrations")
-    .run(&pool)
-    .await
-    .unwrap();
+sqlx::migrate!("./migrations").run(&pool).await.unwrap();
// Verify content column exists by inserting a full row.
let now = chrono::Utc::now().to_rfc3339();
@@ -538,13 +536,12 @@ mod tests {
.await
.unwrap();
-let row: (Option<String>,) = sqlx::query_as(
-    "SELECT content FROM pipeline_items WHERE id = ?1",
-)
-.bind("99_story_col_test")
-.fetch_one(&pool)
-.await
-.unwrap();
+let row: (Option<String>,) =
+    sqlx::query_as("SELECT content FROM pipeline_items WHERE id = ?1")
+        .bind("99_story_col_test")
+        .fetch_one(&pool)
+        .await
+        .unwrap();
assert_eq!(row.0.as_deref(), Some(content));
}
@@ -556,10 +553,7 @@ mod tests {
.filename(&db_path)
.create_if_missing(true);
let pool = SqlitePool::connect_with(options).await.unwrap();
-sqlx::migrate!("./migrations")
-    .run(&pool)
-    .await
-    .unwrap();
+sqlx::migrate!("./migrations").run(&pool).await.unwrap();
let now = chrono::Utc::now().to_rfc3339();
@@ -610,12 +604,11 @@ mod tests {
.await
.unwrap();
-let row: (String,) =
-    sqlx::query_as("SELECT stage FROM pipeline_items WHERE id = ?1")
-        .bind("5_story_move")
-        .fetch_one(&pool)
-        .await
-        .unwrap();
+let row: (String,) = sqlx::query_as("SELECT stage FROM pipeline_items WHERE id = ?1")
+    .bind("5_story_move")
+    .fetch_one(&pool)
+    .await
+    .unwrap();
assert_eq!(row.0, "2_current");
}
@@ -709,5 +702,4 @@ mod tests {
row.map(|r| r.0)
);
}
}
+112 -34
@@ -13,7 +13,7 @@ use poem::web::Data;
use poem::{Body, Request, Response};
use reqwest::Client;
use serde::{Deserialize, Serialize};
-use serde_json::{json, Value};
+use serde_json::{Value, json};
use std::collections::BTreeMap;
use std::path::Path;
use std::sync::Arc;
@@ -41,8 +41,7 @@ impl GatewayConfig {
pub fn load(path: &Path) -> Result<Self, String> {
let contents = std::fs::read_to_string(path)
.map_err(|e| format!("cannot read {}: {e}", path.display()))?;
-toml::from_str(&contents)
-    .map_err(|e| format!("invalid projects.toml: {e}"))
+toml::from_str(&contents).map_err(|e| format!("invalid projects.toml: {e}"))
}
}
@@ -117,11 +116,21 @@ struct JsonRpcError {
impl JsonRpcResponse {
fn success(id: Option<Value>, result: Value) -> Self {
-Self { jsonrpc: "2.0", id, result: Some(result), error: None }
+Self {
+    jsonrpc: "2.0",
+    id,
+    result: Some(result),
+    error: None,
+}
}
fn error(id: Option<Value>, code: i64, message: String) -> Self {
-Self { jsonrpc: "2.0", id, result: None, error: Some(JsonRpcError { code, message }) }
+Self {
+    jsonrpc: "2.0",
+    id,
+    result: None,
+    error: Some(JsonRpcError { code, message }),
+}
}
}
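The constructors above serialise to standard JSON-RPC 2.0 envelopes via serde; a dependency-free sketch of the error shape they produce (field names come from the JSON-RPC 2.0 spec, not from this crate's API):

```rust
// Hand-build a JSON-RPC 2.0 error envelope (std-only illustration; the real
// handler derives Serialize on JsonRpcResponse instead of formatting by hand).
fn error_envelope(id: Option<i64>, code: i64, message: &str) -> String {
    // NOTE: no string escaping here — adequate for the fixed messages above,
    // not for arbitrary input.
    let id = id.map_or_else(|| "null".to_string(), |n| n.to_string());
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"error":{{"code":{code},"message":"{message}"}}}}"#
    )
}

fn main() {
    // -32700 is the spec's "Parse error" code, as returned by the handler above.
    println!("{}", error_envelope(None, -32700, "Parse error"));
}
```

A request with no recoverable `id` answers with `"id": null`, which is why the handler passes `None` on parse failures.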
@@ -147,22 +156,32 @@ pub async fn gateway_mcp_post_handler(
let content_type = req.header("content-type").unwrap_or("");
if !content_type.is_empty() && !content_type.contains("application/json") {
return to_json_response(JsonRpcResponse::error(
-None, -32700, "Unsupported Content-Type; expected application/json".into(),
+None,
+-32700,
+"Unsupported Content-Type; expected application/json".into(),
));
}
let bytes = match body.into_bytes().await {
Ok(b) => b,
-Err(_) => return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into())),
+Err(_) => {
+    return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into()));
+}
};
let rpc: JsonRpcRequest = match serde_json::from_slice(&bytes) {
Ok(r) => r,
-Err(_) => return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into())),
+Err(_) => {
+    return to_json_response(JsonRpcResponse::error(None, -32700, "Parse error".into()));
+}
};
if rpc.jsonrpc != "2.0" {
-return to_json_response(JsonRpcResponse::error(rpc.id, -32600, "Invalid JSON-RPC version".into()));
+return to_json_response(JsonRpcResponse::error(
+    rpc.id,
+    -32600,
+    "Invalid JSON-RPC version".into(),
+));
}
// Accept notifications silently.
@@ -185,7 +204,8 @@ pub async fn gateway_mcp_post_handler(
}
}
"tools/call" => {
-let tool_name = rpc.params
+let tool_name = rpc
+    .params
.get("name")
.and_then(|v| v.as_str())
.unwrap_or("");
@@ -200,7 +220,9 @@ pub async fn gateway_mcp_post_handler(
.header("Content-Type", "application/json")
.body(Body::from(resp_body)),
Err(e) => to_json_response(JsonRpcResponse::error(
-rpc.id, -32603, format!("proxy error: {e}"),
+rpc.id,
+-32603,
+format!("proxy error: {e}"),
)),
}
}
@@ -213,7 +235,9 @@ pub async fn gateway_mcp_post_handler(
.header("Content-Type", "application/json")
.body(Body::from(resp_body)),
Err(e) => to_json_response(JsonRpcResponse::error(
-rpc.id, -32603, format!("proxy error: {e}"),
+rpc.id,
+-32603,
+format!("proxy error: {e}"),
)),
}
}
@@ -295,14 +319,17 @@ async fn merge_tools_list(
"params": {}
});
-let resp = state.client
+let resp = state
+    .client
.post(&mcp_url)
.json(&rpc_body)
.send()
.await
.map_err(|e| format!("failed to reach {mcp_url}: {e}"))?;
-let resp_json: Value = resp.json().await
+let resp_json: Value = resp
+    .json()
+    .await
.map_err(|e| format!("invalid JSON from upstream: {e}"))?;
let mut tools: Vec<Value> = resp_json
@@ -320,14 +347,12 @@ async fn merge_tools_list(
}
/// Proxy a raw MCP request body to the active project's container.
-async fn proxy_mcp_call(
-    state: &GatewayState,
-    request_bytes: &[u8],
-) -> Result<Vec<u8>, String> {
+async fn proxy_mcp_call(state: &GatewayState, request_bytes: &[u8]) -> Result<Vec<u8>, String> {
let url = state.active_url().await?;
let mcp_url = format!("{}/mcp", url.trim_end_matches('/'));
-let resp = state.client
+let resp = state
+    .client
.post(&mcp_url)
.header("Content-Type", "application/json")
.body(request_bytes.to_vec())
@@ -374,8 +399,12 @@ async fn handle_switch_project(params: &Value, state: &GatewayState) -> JsonRpcR
if !state.config.projects.contains_key(project) {
let available: Vec<&str> = state.config.projects.keys().map(|s| s.as_str()).collect();
return JsonRpcResponse::error(
-None, -32602,
-format!("unknown project '{project}'. Available: {}", available.join(", ")),
+None,
+-32602,
+format!(
+    "unknown project '{project}'. Available: {}",
+    available.join(", ")
+),
);
}
@@ -431,7 +460,9 @@ async fn handle_gateway_status(state: &GatewayState) -> JsonRpcResponse {
}),
)
}
-Err(e) => JsonRpcResponse::error(None, -32603, format!("invalid upstream response: {e}")),
+Err(e) => {
+    JsonRpcResponse::error(None, -32603, format!("invalid upstream response: {e}"))
+}
}
}
Err(e) => JsonRpcResponse::error(None, -32603, format!("failed to reach {mcp_url}: {e}")),
@@ -500,7 +531,11 @@ pub async fn gateway_health_handler(state: Data<&Arc<GatewayState>>) -> Response
"projects": statuses,
});
-let status = if all_healthy { StatusCode::OK } else { StatusCode::SERVICE_UNAVAILABLE };
+let status = if all_healthy {
+    StatusCode::OK
+} else {
+    StatusCode::SERVICE_UNAVAILABLE
+};
Response::builder()
.status(status)
.header("Content-Type", "application/json")
@@ -519,7 +554,13 @@ pub async fn run(config_path: &Path, port: u16) -> Result<(), std::io::Error> {
crate::slog!("[gateway] Starting gateway on port {port}, active project: {active}");
crate::slog!(
"[gateway] Registered projects: {}",
-state_arc.config.projects.keys().cloned().collect::<Vec<_>>().join(", ")
+state_arc
+    .config
+    .projects
+    .keys()
+    .cloned()
+    .collect::<Vec<_>>()
+    .join(", ")
);
let route = poem::Route::new()
@@ -569,15 +610,27 @@ url = "http://localhost:3002"
#[test]
fn gateway_state_rejects_empty_config() {
-let config = GatewayConfig { projects: BTreeMap::new() };
+let config = GatewayConfig {
+    projects: BTreeMap::new(),
+};
assert!(GatewayState::new(config).is_err());
}
#[test]
fn gateway_state_sets_first_project_active() {
let mut projects = BTreeMap::new();
-projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() });
-projects.insert("beta".into(), ProjectEntry { url: "http://b:3002".into() });
+projects.insert(
+    "alpha".into(),
+    ProjectEntry {
+        url: "http://a:3001".into(),
+    },
+);
+projects.insert(
+    "beta".into(),
+    ProjectEntry {
+        url: "http://b:3002".into(),
+    },
+);
let config = GatewayConfig { projects };
let state = GatewayState::new(config).unwrap();
let active = state.active_project.blocking_read().clone();
@@ -587,7 +640,8 @@ url = "http://localhost:3002"
#[test]
fn gateway_tool_definitions_has_expected_tools() {
let defs = gateway_tool_definitions();
-let names: Vec<&str> = defs.iter()
+let names: Vec<&str> = defs
+    .iter()
.filter_map(|d| d.get("name").and_then(|n| n.as_str()))
.collect();
assert!(names.contains(&"switch_project"));
@@ -598,8 +652,18 @@ url = "http://localhost:3002"
#[tokio::test]
async fn switch_project_to_known_project() {
let mut projects = BTreeMap::new();
-projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() });
-projects.insert("beta".into(), ProjectEntry { url: "http://b:3002".into() });
+projects.insert(
+    "alpha".into(),
+    ProjectEntry {
+        url: "http://a:3001".into(),
+    },
+);
+projects.insert(
+    "beta".into(),
+    ProjectEntry {
+        url: "http://b:3002".into(),
+    },
+);
let config = GatewayConfig { projects };
let state = GatewayState::new(config).unwrap();
@@ -614,7 +678,12 @@ url = "http://localhost:3002"
#[tokio::test]
async fn switch_project_to_unknown_project_fails() {
let mut projects = BTreeMap::new();
-projects.insert("alpha".into(), ProjectEntry { url: "http://a:3001".into() });
+projects.insert(
+    "alpha".into(),
+    ProjectEntry {
+        url: "http://a:3001".into(),
+    },
+);
let config = GatewayConfig { projects };
let state = GatewayState::new(config).unwrap();
@@ -626,7 +695,12 @@ url = "http://localhost:3002"
#[tokio::test]
async fn active_url_returns_correct_url() {
let mut projects = BTreeMap::new();
-projects.insert("myproj".into(), ProjectEntry { url: "http://my:3001".into() });
+projects.insert(
+    "myproj".into(),
+    ProjectEntry {
+        url: "http://my:3001".into(),
+    },
+);
let config = GatewayConfig { projects };
let state = GatewayState::new(config).unwrap();
@@ -654,10 +728,14 @@ url = "http://localhost:3002"
fn load_config_from_file() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("projects.toml");
-std::fs::write(&path, r#"
+std::fs::write(
+    &path,
+    r#"
[projects.test]
url = "http://localhost:9999"
-"#).unwrap();
+"#,
+)
+.unwrap();
let config = GatewayConfig::load(&path).unwrap();
assert_eq!(config.projects.len(), 1);
+16 -21
@@ -61,9 +61,10 @@ pub async fn agent_stream(
.header("Content-Type", "text/event-stream")
.header("Cache-Control", "no-cache")
.header("Connection", "keep-alive")
-.body(Body::from_bytes_stream(
-    futures::StreamExt::map(stream, |r| r.map(bytes::Bytes::from)),
-))
+.body(Body::from_bytes_stream(futures::StreamExt::map(
+    stream,
+    |r| r.map(bytes::Bytes::from),
+)))
}
#[cfg(test)]
@@ -77,10 +78,7 @@ mod tests {
fn test_app(ctx: Arc<AppContext>) -> impl poem::Endpoint {
Route::new()
-.at(
-    "/agents/:story_id/:agent_name/stream",
-    get(agent_stream),
-)
+.at("/agents/:story_id/:agent_name/stream", get(agent_stream))
.data(ctx)
}
@@ -123,10 +121,7 @@ mod tests {
});
let cli = poem::test::TestClient::new(test_app(ctx));
-let resp = cli
-    .get("/agents/1_story/coder-1/stream")
-    .send()
-    .await;
+let resp = cli.get("/agents/1_story/coder-1/stream").send().await;
let body = resp.0.into_body().into_string().await.unwrap();
@@ -178,15 +173,18 @@ mod tests {
});
let cli = poem::test::TestClient::new(test_app(ctx));
-let resp = cli
-    .get("/agents/2_story/coder-1/stream")
-    .send()
-    .await;
+let resp = cli.get("/agents/2_story/coder-1/stream").send().await;
let body = resp.0.into_body().into_string().await.unwrap();
-assert!(body.contains("step 1 output"), "Output must be forwarded: {body}");
-assert!(body.contains("\"type\":\"done\""), "Done event must be forwarded: {body}");
+assert!(
+    body.contains("step 1 output"),
+    "Output must be forwarded: {body}"
+);
+assert!(
+    body.contains("\"type\":\"done\""),
+    "Done event must be forwarded: {body}"
+);
}
#[tokio::test]
@@ -195,10 +193,7 @@ mod tests {
let ctx = Arc::new(AppContext::new_test(tmp.path().to_path_buf()));
let cli = poem::test::TestClient::new(test_app(ctx));
-let resp = cli
-    .get("/agents/nonexistent/coder-1/stream")
-    .send()
-    .await;
+let resp = cli.get("/agents/nonexistent/coder-1/stream").send().await;
assert_eq!(
resp.0.status(),
+5 -4
@@ -198,7 +198,8 @@ mod tests {
fn get_api_key_returns_key_when_set() {
let dir = TempDir::new().unwrap();
let ctx = test_ctx(dir.path());
-ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123"));
+ctx.store
+    .set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123"));
let result = get_anthropic_api_key(&ctx);
assert_eq!(result.unwrap(), "sk-ant-test123");
}
@@ -217,7 +218,8 @@ mod tests {
async fn key_exists_returns_true_when_set() {
let dir = TempDir::new().unwrap();
let ctx = AppContext::new_test(dir.path().to_path_buf());
-ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123"));
+ctx.store
+    .set(KEY_ANTHROPIC_API_KEY, json!("sk-ant-test123"));
let api = AnthropicApi::new(Arc::new(ctx));
let result = api.get_anthropic_api_key_exists().await.unwrap();
assert!(result.0);
@@ -265,8 +267,7 @@ mod tests {
let dir = TempDir::new().unwrap();
let ctx = AppContext::new_test(dir.path().to_path_buf());
// A header value containing a newline is invalid
-ctx.store
-    .set(KEY_ANTHROPIC_API_KEY, json!("bad\nvalue"));
+ctx.store.set(KEY_ANTHROPIC_API_KEY, json!("bad\nvalue"));
let api = AnthropicApi::new(Arc::new(ctx));
let result = api.list_anthropic_models_from("http://127.0.0.1:1").await;
assert!(result.is_err());
+39 -14
@@ -9,8 +9,8 @@
//! their dedicated async handlers. The `reset` command is handled by the frontend
//! (it clears local session state and message history) and is not routed here.
-use crate::http::context::{AppContext, OpenApiResult};
use crate::chat::commands::CommandDispatch;
+use crate::http::context::{AppContext, OpenApiResult};
use poem::http::StatusCode;
use poem_openapi::{Object, OpenApi, Tags, payload::Json};
use serde::{Deserialize, Serialize};
@@ -55,9 +55,11 @@ impl BotCommandApi {
&self,
body: Json<BotCommandRequest>,
) -> OpenApiResult<Json<BotCommandResponse>> {
-let project_root = self.ctx.state.get_project_root().map_err(|e| {
-    poem::Error::from_string(e, StatusCode::BAD_REQUEST)
-})?;
+let project_root = self
+    .ctx
+    .state
+    .get_project_root()
+    .map_err(|e| poem::Error::from_string(e, StatusCode::BAD_REQUEST))?;
let cmd = body.command.trim().to_ascii_lowercase();
let args = body.args.trim();
@@ -135,12 +137,21 @@ async fn dispatch_assign(
let number_str = parts.next().unwrap_or("").trim();
let model_str = parts.next().unwrap_or("").trim();
-if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) || model_str.is_empty() {
+if number_str.is_empty()
+    || !number_str.chars().all(|c| c.is_ascii_digit())
+    || model_str.is_empty()
+{
return "Usage: `/assign <number> <model>` (e.g. `/assign 42 opus`)".to_string();
}
-crate::chat::transport::matrix::assign::handle_assign("web-ui", number_str, model_str, project_root, agents)
-    .await
+crate::chat::transport::matrix::assign::handle_assign(
+    "web-ui",
+    number_str,
+    model_str,
+    project_root,
+    agents,
+)
+.await
}
async fn dispatch_start(
@@ -164,8 +175,14 @@ async fn dispatch_start(
Some(hint_str)
};
-crate::chat::transport::matrix::start::handle_start("web-ui", number_str, agent_hint, project_root, agents)
-    .await
+crate::chat::transport::matrix::start::handle_start(
+    "web-ui",
+    number_str,
+    agent_hint,
+    project_root,
+    agents,
+)
+.await
}
async fn dispatch_delete(
@@ -177,7 +194,13 @@ async fn dispatch_delete(
if number_str.is_empty() || !number_str.chars().all(|c| c.is_ascii_digit()) {
return "Usage: `/delete <number>` (e.g. `/delete 42`)".to_string();
}
-crate::chat::transport::matrix::delete::handle_delete("web-ui", number_str, project_root, agents).await
+crate::chat::transport::matrix::delete::handle_delete(
+    "web-ui",
+    number_str,
+    project_root,
+    agents,
+)
+.await
}
async fn dispatch_rebuild(
@@ -197,11 +220,13 @@ async fn dispatch_timer(args: &str, project_root: &std::path::Path) -> String {
"@__web_ui__:localhost",
) {
Some(cmd) => cmd,
-None => return "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`".to_string(),
+None => {
+    return "Usage: `/timer list`, `/timer <number> <HH:MM>`, or `/timer cancel <number>`"
+        .to_string();
+}
};
-let store = crate::chat::timer::TimerStore::load(
-    project_root.join(".huskies").join("timers.json"),
-);
+let store =
+    crate::chat::timer::TimerStore::load(project_root.join(".huskies").join("timers.json"));
crate::chat::timer::handle_timer_command(timer_cmd, &store, project_root).await
}
+5 -3
@@ -94,8 +94,7 @@ pub struct AppContext {
///
/// Wrapped in `Arc` so `AppContext` can implement `Clone`.
/// `None` when no Matrix bot is configured.
-pub matrix_shutdown_tx:
-    Option<Arc<tokio::sync::watch::Sender<Option<ShutdownReason>>>>,
+pub matrix_shutdown_tx: Option<Arc<tokio::sync::watch::Sender<Option<ShutdownReason>>>>,
/// Shared rate-limit retry timer store.
///
/// Used by MCP tools (`move_story`, `stop_agent`) to cancel pending timers
@@ -168,7 +167,10 @@ mod tests {
fn permission_decision_equality() {
assert_eq!(PermissionDecision::Deny, PermissionDecision::Deny);
assert_eq!(PermissionDecision::Approve, PermissionDecision::Approve);
-assert_eq!(PermissionDecision::AlwaysAllow, PermissionDecision::AlwaysAllow);
+assert_eq!(
+    PermissionDecision::AlwaysAllow,
+    PermissionDecision::AlwaysAllow
+);
assert_ne!(PermissionDecision::Deny, PermissionDecision::Approve);
assert_ne!(PermissionDecision::Approve, PermissionDecision::AlwaysAllow);
}
+15 -4
@@ -168,8 +168,16 @@ mod tests {
let entries = &result.0;
assert!(entries.len() >= 2);
-assert!(entries.iter().any(|e| e.name == "subdir" && e.kind == "dir"));
-assert!(entries.iter().any(|e| e.name == "file.txt" && e.kind == "file"));
+assert!(
+    entries
+        .iter()
+        .any(|e| e.name == "subdir" && e.kind == "dir")
+);
+assert!(
+    entries
+        .iter()
+        .any(|e| e.name == "file.txt" && e.kind == "file")
+);
}
#[tokio::test]
@@ -390,7 +398,11 @@ mod tests {
let entries = &result.0;
assert!(entries.iter().any(|e| e.name == "adir" && e.kind == "dir"));
-assert!(entries.iter().any(|e| e.name == "bfile.txt" && e.kind == "file"));
+assert!(
+    entries
+        .iter()
+        .any(|e| e.name == "bfile.txt" && e.kind == "file")
+);
}
#[tokio::test]
@@ -403,5 +415,4 @@ mod tests {
let result = api.list_directory(payload).await;
assert!(result.is_err());
}
}
+115 -109
@@ -5,7 +5,7 @@ use crate::http::context::AppContext;
use crate::http::settings::get_editor_command_from_store;
use crate::slog_warn;
use crate::worktree;
-use serde_json::{json, Value};
+use serde_json::{Value, json};
pub(super) async fn tool_start_agent(args: &Value, ctx: &AppContext) -> Result<String, String> {
let story_id = args
@@ -72,28 +72,32 @@ pub(super) async fn tool_stop_agent(args: &Value, ctx: &AppContext) -> Result<St
.stop_agent(&project_root, story_id, agent_name)
.await?;
-Ok(format!("Agent '{agent_name}' for story '{story_id}' stopped."))
+Ok(format!(
+    "Agent '{agent_name}' for story '{story_id}' stopped."
+))
}
pub(super) fn tool_list_agents(ctx: &AppContext) -> Result<String, String> {
let project_root = ctx.agents.get_project_root(&ctx.state).ok();
let agents = ctx.agents.list_agents()?;
-serde_json::to_string_pretty(&json!(agents
-    .iter()
-    .filter(|a| {
-        project_root
-            .as_deref()
-            .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
-            .unwrap_or(true)
-    })
-    .map(|a| json!({
-        "story_id": a.story_id,
-        "agent_name": a.agent_name,
-        "status": a.status.to_string(),
-        "session_id": a.session_id,
-        "worktree_path": a.worktree_path,
-    }))
-    .collect::<Vec<_>>()))
+serde_json::to_string_pretty(&json!(
+    agents
+        .iter()
+        .filter(|a| {
+            project_root
+                .as_deref()
+                .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
+                .unwrap_or(true)
+        })
+        .map(|a| json!({
+            "story_id": a.story_id,
+            "agent_name": a.agent_name,
+            "status": a.status.to_string(),
+            "session_id": a.session_id,
+            "worktree_path": a.worktree_path,
+        }))
+        .collect::<Vec<_>>()
+))
.map_err(|e| format!("Serialization error: {e}"))
}
@@ -124,16 +128,12 @@ pub(super) async fn tool_get_agent_output(
let project_root = ctx.agents.get_project_root(&ctx.state)?;
// Collect all matching log files, oldest first.
-let log_files =
-    agent_log::list_story_log_files(&project_root, story_id, agent_name_filter);
+let log_files = agent_log::list_story_log_files(&project_root, story_id, agent_name_filter);
let mut all_lines: Vec<String> = Vec::new();
for path in &log_files {
-let file_name = path
-    .file_name()
-    .and_then(|n| n.to_str())
-    .unwrap_or("?");
+let file_name = path.file_name().and_then(|n| n.to_str()).unwrap_or("?");
all_lines.push(format!("=== {} ===", file_name.trim_end_matches(".log")));
match agent_log::read_log_as_readable_lines(path) {
Ok(lines) => all_lines.extend(lines),
@@ -156,8 +156,7 @@ pub(super) async fn tool_get_agent_output(
let now = chrono::Utc::now().to_rfc3339();
for event in &live_events {
if let Ok(event_value) = serde_json::to_value(event)
-&& let Some(line) =
-    agent_log::format_log_entry_as_text(&now, &event_value)
+&& let Some(line) = agent_log::format_log_entry_as_text(&now, &event_value)
{
all_lines.push(line);
}
@@ -201,8 +200,7 @@ pub(super) fn tool_get_agent_config(ctx: &AppContext) -> Result<String, String>
// Collect available (idle) agent names across all stages so the caller can
// see at a glance which agents are free to start (story 190).
-let mut available_names: std::collections::HashSet<String> =
-    std::collections::HashSet::new();
+let mut available_names: std::collections::HashSet<String> = std::collections::HashSet::new();
for stage in &[
PipelineStage::Coder,
PipelineStage::Qa,
@@ -214,19 +212,21 @@ pub(super) fn tool_get_agent_config(ctx: &AppContext) -> Result<String, String>
}
}
-serde_json::to_string_pretty(&json!(config
-    .agent
-    .iter()
-    .map(|a| json!({
-        "name": a.name,
-        "role": a.role,
-        "model": a.model,
-        "allowed_tools": a.allowed_tools,
-        "max_turns": a.max_turns,
-        "max_budget_usd": a.max_budget_usd,
-        "available": available_names.contains(&a.name),
-    }))
-    .collect::<Vec<_>>()))
+serde_json::to_string_pretty(&json!(
+    config
+        .agent
+        .iter()
+        .map(|a| json!({
+            "name": a.name,
+            "role": a.role,
+            "model": a.model,
+            "allowed_tools": a.allowed_tools,
+            "max_turns": a.max_turns,
+            "max_budget_usd": a.max_budget_usd,
+            "available": available_names.contains(&a.name),
+        }))
+        .collect::<Vec<_>>()
+))
.map_err(|e| format!("Serialization error: {e}"))
}
@@ -254,11 +254,13 @@ pub(super) async fn tool_wait_for_agent(args: &Value, ctx: &AppContext) -> Resul
_ => None,
};
let completion = info.completion.as_ref().map(|r| json!({
"summary": r.summary,
"gates_passed": r.gates_passed,
"gate_output": r.gate_output,
}));
let completion = info.completion.as_ref().map(|r| {
json!({
"summary": r.summary,
"gates_passed": r.gates_passed,
"gate_output": r.gate_output,
})
});
serde_json::to_string_pretty(&json!({
"story_id": info.story_id,
@@ -295,13 +297,15 @@ pub(super) fn tool_list_worktrees(ctx: &AppContext) -> Result<String, String> {
let project_root = ctx.agents.get_project_root(&ctx.state)?;
let entries = worktree::list_worktrees(&project_root)?;
serde_json::to_string_pretty(&json!(entries
.iter()
.map(|e| json!({
"story_id": e.story_id,
"path": e.path.to_string_lossy(),
}))
.collect::<Vec<_>>()))
serde_json::to_string_pretty(&json!(
entries
.iter()
.map(|e| json!({
"story_id": e.story_id,
"path": e.path.to_string_lossy(),
}))
.collect::<Vec<_>>()
))
.map_err(|e| format!("Serialization error: {e}"))
}
@@ -332,7 +336,10 @@ pub(super) fn tool_get_editor_command(args: &Value, ctx: &AppContext) -> Result<
/// Run `git log <base>..HEAD --oneline` in the worktree and return the commit
/// summaries, or `None` if git is unavailable or there are no new commits.
pub(super) async fn get_worktree_commits(worktree_path: &str, base_branch: &str) -> Option<Vec<String>> {
pub(super) async fn get_worktree_commits(
worktree_path: &str,
base_branch: &str,
) -> Option<Vec<String>> {
let wt = worktree_path.to_string();
let base = base_branch.to_string();
tokio::task::spawn_blocking(move || {
@@ -382,7 +389,11 @@ mod tests {
let result = tool_get_agent_config(&ctx).unwrap();
let parsed: Vec<Value> = serde_json::from_str(&result).unwrap();
// Default config contains one agent entry with default values
assert_eq!(parsed.len(), 1, "default config should have one fallback agent");
assert_eq!(
parsed.len(),
1,
"default config should have one fallback agent"
);
assert!(parsed[0].get("name").is_some());
assert!(parsed[0].get("role").is_some());
}
@@ -401,12 +412,10 @@ mod tests {
let tmp = tempfile::tempdir().unwrap();
let ctx = test_ctx(tmp.path());
// No agent registered, no log file → returns "no log files found" message
let result = tool_get_agent_output(
&json!({"story_id": "99_nope", "agent_name": "bot"}),
&ctx,
)
.await
.unwrap();
let result =
tool_get_agent_output(&json!({"story_id": "99_nope", "agent_name": "bot"}), &ctx)
.await
.unwrap();
assert!(
result.contains("No log files found"),
"expected 'No log files found' message: {result}"
@@ -418,12 +427,9 @@ mod tests {
let tmp = tempfile::tempdir().unwrap();
let ctx = test_ctx(tmp.path());
// No agent_name provided — should succeed (no error)
let result = tool_get_agent_output(
&json!({"story_id": "99_nope"}),
&ctx,
)
.await
.unwrap();
let result = tool_get_agent_output(&json!({"story_id": "99_nope"}), &ctx)
.await
.unwrap();
assert!(result.contains("No log files found"));
}
@@ -440,13 +446,8 @@ mod tests {
.set("project_root", json!(tmp.path().to_string_lossy().as_ref()));
// Write a log file
let mut writer = AgentLogWriter::new(
tmp.path(),
"42_story_foo",
"coder-1",
"sess-test",
)
.unwrap();
let mut writer =
AgentLogWriter::new(tmp.path(), "42_story_foo", "coder-1", "sess-test").unwrap();
writer
.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
@@ -488,13 +489,8 @@ mod tests {
ctx.store
.set("project_root", json!(tmp.path().to_string_lossy().as_ref()));
let mut writer = AgentLogWriter::new(
tmp.path(),
"42_story_bar",
"coder-1",
"sess-tail",
)
.unwrap();
let mut writer =
AgentLogWriter::new(tmp.path(), "42_story_bar", "coder-1", "sess-tail").unwrap();
for i in 0..10 {
writer
.write_event(&AgentEvent::Output {
@@ -514,8 +510,14 @@ mod tests {
.unwrap();
// Should contain "line 7", "line 8", "line 9" but NOT "line 0"
assert!(result.contains("line 9"), "should contain last line: {result}");
assert!(!result.contains("line 0"), "should not contain early lines: {result}");
assert!(
result.contains("line 9"),
"should contain last line: {result}"
);
assert!(
!result.contains("line 0"),
"should not contain early lines: {result}"
);
}
#[tokio::test]
@@ -529,13 +531,8 @@ mod tests {
ctx.store
.set("project_root", json!(tmp.path().to_string_lossy().as_ref()));
let mut writer = AgentLogWriter::new(
tmp.path(),
"42_story_baz",
"coder-1",
"sess-filter",
)
.unwrap();
let mut writer =
AgentLogWriter::new(tmp.path(), "42_story_baz", "coder-1", "sess-filter").unwrap();
writer
.write_event(&AgentEvent::Output {
story_id: "42_story_baz".to_string(),
@@ -559,8 +556,14 @@ mod tests {
.await
.unwrap();
assert!(result.contains("needle"), "filter should keep matching lines: {result}");
assert!(!result.contains("haystack"), "filter should remove non-matching lines: {result}");
assert!(
result.contains("needle"),
"filter should keep matching lines: {result}"
);
assert!(
!result.contains("haystack"),
"filter should remove non-matching lines: {result}"
);
}
#[tokio::test]
@@ -697,10 +700,7 @@ stage = "coder"
fn tool_get_editor_command_no_editor_configured() {
let tmp = tempfile::tempdir().unwrap();
let ctx = test_ctx(tmp.path());
let result = tool_get_editor_command(
&json!({"worktree_path": "/some/path"}),
&ctx,
);
let result = tool_get_editor_command(&json!({"worktree_path": "/some/path"}), &ctx);
assert!(result.is_err());
assert!(result.unwrap_err().contains("No editor configured"));
}
@@ -725,17 +725,14 @@ stage = "coder"
let ctx = test_ctx(tmp.path());
ctx.store.set("editor_command", json!("code"));
let result = tool_get_editor_command(
&json!({"worktree_path": "/path/to/worktree"}),
&ctx,
)
.unwrap();
let result =
tool_get_editor_command(&json!({"worktree_path": "/path/to/worktree"}), &ctx).unwrap();
assert_eq!(result, "code /path/to/worktree");
}
#[test]
fn get_editor_command_in_tools_list() {
use super::super::{handle_tools_list};
use super::super::handle_tools_list;
let resp = handle_tools_list(Some(json!(1)));
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
let tool = tools.iter().find(|t| t["name"] == "get_editor_command");
@@ -769,9 +766,11 @@ stage = "coder"
async fn wait_for_agent_tool_nonexistent_agent_returns_error() {
let tmp = tempfile::tempdir().unwrap();
let ctx = test_ctx(tmp.path());
let result =
tool_wait_for_agent(&json!({"story_id": "99_nope", "agent_name": "bot", "timeout_ms": 50}), &ctx)
.await;
let result = tool_wait_for_agent(
&json!({"story_id": "99_nope", "agent_name": "bot", "timeout_ms": 50}),
&ctx,
)
.await;
// No agent registered — should error
assert!(result.is_err());
}
@@ -802,13 +801,19 @@ stage = "coder"
#[test]
fn wait_for_agent_tool_in_list() {
use super::super::{handle_tools_list};
use super::super::handle_tools_list;
let resp = handle_tools_list(Some(json!(1)));
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
let wait_tool = tools.iter().find(|t| t["name"] == "wait_for_agent");
assert!(wait_tool.is_some(), "wait_for_agent missing from tools list");
assert!(
wait_tool.is_some(),
"wait_for_agent missing from tools list"
);
let t = wait_tool.unwrap();
assert!(t["description"].as_str().unwrap().contains("block") || t["description"].as_str().unwrap().contains("Block"));
assert!(
t["description"].as_str().unwrap().contains("block")
|| t["description"].as_str().unwrap().contains("Block")
);
let required = t["inputSchema"]["required"].as_array().unwrap();
let req_names: Vec<&str> = required.iter().map(|v| v.as_str().unwrap()).collect();
assert!(req_names.contains(&"story_id"));
@@ -821,7 +826,8 @@ stage = "coder"
let tmp = tempfile::tempdir().unwrap();
let cov_dir = tmp.path().join(".huskies/coverage");
fs::create_dir_all(&cov_dir).unwrap();
let json_content = r#"{"data":[{"totals":{"lines":{"count":100,"covered":78,"percent":78.0}}}]}"#;
let json_content =
r#"{"data":[{"totals":{"lines":{"count":100,"covered":78,"percent":78.0}}}]}"#;
fs::write(cov_dir.join("server.json"), json_content).unwrap();
let pct = read_coverage_percent_from_json(tmp.path());
@@ -153,7 +153,8 @@ pub(super) async fn tool_prompt_permission(
// Try to forward to the interactive session (WebSocket/Matrix).
// If no session is active (headless agent), auto-deny the permission.
if ctx.perm_tx
if ctx
.perm_tx
.send(crate::http::context::PermissionForward {
request_id: request_id.clone(),
tool_name: tool_name.clone(),
@@ -321,8 +322,8 @@ pub(super) fn tool_dump_crdt(args: &Value) -> Result<String, String> {
/// MCP tool: return the server version and build hash.
pub(super) fn tool_get_version() -> Result<String, String> {
let build_hash = std::fs::read_to_string(".huskies/build_hash")
.unwrap_or_else(|_| "unknown".to_string());
let build_hash =
std::fs::read_to_string(".huskies/build_hash").unwrap_or_else(|_| "unknown".to_string());
serde_json::to_string_pretty(&json!({
"version": env!("CARGO_PKG_VERSION"),
"build_hash": build_hash.trim(),
@@ -338,7 +339,10 @@ pub(super) fn tool_loc_file(args: &Value, ctx: &AppContext) -> Result<String, St
.ok_or_else(|| "Missing required argument: file_path".to_string())?;
let project_root = ctx.state.get_project_root()?;
Ok(crate::chat::commands::loc::loc_single_file(&project_root, file_path))
Ok(crate::chat::commands::loc::loc_single_file(
&project_root,
file_path,
))
}
#[cfg(test)]
@@ -851,8 +855,7 @@ mod tests {
#[test]
fn tool_dump_crdt_with_story_id_filter_returns_valid_json() {
let result =
tool_dump_crdt(&json!({"story_id": "9999_story_nonexistent"})).unwrap();
let result = tool_dump_crdt(&json!({"story_id": "9999_story_nonexistent"})).unwrap();
let parsed: Value = serde_json::from_str(&result).unwrap();
assert!(parsed["items"].as_array().unwrap().is_empty());
}
@@ -866,7 +869,11 @@ mod tests {
assert!(tool.is_some(), "dump_crdt missing from tools list");
let t = tool.unwrap();
assert!(
t["description"].as_str().unwrap().to_lowercase().contains("debug"),
t["description"]
.as_str()
.unwrap()
.to_lowercase()
.contains("debug"),
"description must mention this is a debug tool"
);
assert!(t["inputSchema"].is_object());
@@ -1,6 +1,6 @@
//! MCP git tools — status, diff, add, commit, and log operations on agent worktrees.
use crate::http::context::AppContext;
use serde_json::{json, Value};
use serde_json::{Value, json};
use std::path::PathBuf;
/// Validates that `worktree_path` exists and is inside the project's
@@ -12,9 +12,7 @@ fn validate_worktree_path(worktree_path: &str, ctx: &AppContext) -> Result<PathB
return Err("worktree_path must be an absolute path".to_string());
}
if !wd.exists() {
return Err(format!(
"worktree_path does not exist: {worktree_path}"
));
return Err(format!("worktree_path does not exist: {worktree_path}"));
}
let project_root = ctx.agents.get_project_root(&ctx.state)?;
@@ -230,11 +228,7 @@ pub(super) async fn tool_git_commit(args: &Value, ctx: &AppContext) -> Result<St
let dir = validate_worktree_path(worktree_path, ctx)?;
let git_args: Vec<String> = vec![
"commit".to_string(),
"--message".to_string(),
message,
];
let git_args: Vec<String> = vec!["commit".to_string(), "--message".to_string(), message];
let output = run_git_owned(git_args, dir).await?;
@@ -412,12 +406,9 @@ mod tests {
.output()
.unwrap();
let result = tool_git_status(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await
.unwrap();
let result = tool_git_status(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx)
.await
.unwrap();
let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
assert_eq!(parsed["clean"], true);
@@ -446,18 +437,17 @@ mod tests {
// Add untracked file
std::fs::write(story_wt.join("new_file.txt"), "content").unwrap();
let result = tool_git_status(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await
.unwrap();
let result = tool_git_status(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx)
.await
.unwrap();
let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
assert_eq!(parsed["clean"], false);
let untracked = parsed["untracked"].as_array().unwrap();
assert!(
untracked.iter().any(|v| v.as_str().unwrap().contains("new_file.txt")),
untracked
.iter()
.any(|v| v.as_str().unwrap().contains("new_file.txt")),
"expected new_file.txt in untracked: {parsed}"
);
}
@@ -493,12 +483,9 @@ mod tests {
// Modify file (unstaged)
std::fs::write(story_wt.join("file.txt"), "line1\nline2\n").unwrap();
let result = tool_git_diff(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await
.unwrap();
let result = tool_git_diff(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx)
.await
.unwrap();
let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
assert!(
@@ -560,11 +547,8 @@ mod tests {
#[tokio::test]
async fn git_add_missing_paths() {
let (_tmp, story_wt, ctx) = setup_worktree();
let result = tool_git_add(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await;
let result =
tool_git_add(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("paths"));
}
@@ -609,7 +593,10 @@ mod tests {
.output()
.unwrap();
let output = String::from_utf8_lossy(&status.stdout);
assert!(output.contains("A file.txt"), "file should be staged: {output}");
assert!(
output.contains("A file.txt"),
"file should be staged: {output}"
);
}
// ── git_commit ────────────────────────────────────────────────────
@@ -626,11 +613,8 @@ mod tests {
#[tokio::test]
async fn git_commit_missing_message() {
let (_tmp, story_wt, ctx) = setup_worktree();
let result = tool_git_commit(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await;
let result =
tool_git_commit(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("message"));
}
@@ -713,12 +697,9 @@ mod tests {
.output()
.unwrap();
let result = tool_git_log(
&json!({"worktree_path": story_wt.to_str().unwrap()}),
&ctx,
)
.await
.unwrap();
let result = tool_git_log(&json!({"worktree_path": story_wt.to_str().unwrap()}), &ctx)
.await
.unwrap();
let parsed: serde_json::Value = serde_json::from_str(&result).unwrap();
assert_eq!(parsed["exit_code"], 0);
@@ -4,7 +4,7 @@ use crate::http::context::AppContext;
use crate::io::story_metadata::write_merge_failure;
use crate::slog;
use crate::slog_warn;
use serde_json::{json, Value};
use serde_json::{Value, json};
pub(super) fn tool_merge_agent_work(args: &Value, ctx: &AppContext) -> Result<String, String> {
let story_id = args
@@ -38,14 +38,12 @@ fn tool_get_merge_status_inner(
job: &crate::agents::merge::MergeJob,
) -> Result<String, String> {
match &job.status {
crate::agents::merge::MergeJobStatus::Running => {
serde_json::to_string_pretty(&json!({
"story_id": story_id,
"status": "running",
"message": "Merge pipeline is still running."
}))
.map_err(|e| format!("Serialization error: {e}"))
}
crate::agents::merge::MergeJobStatus::Running => serde_json::to_string_pretty(&json!({
"story_id": story_id,
"status": "running",
"message": "Merge pipeline is still running."
}))
.map_err(|e| format!("Serialization error: {e}")),
crate::agents::merge::MergeJobStatus::Completed(report) => {
serde_json::to_string_pretty(&json!({
"story_id": story_id,
@@ -58,14 +56,12 @@ fn tool_get_merge_status_inner(
}))
.map_err(|e| format!("Serialization error: {e}"))
}
crate::agents::merge::MergeJobStatus::Failed(err) => {
serde_json::to_string_pretty(&json!({
"story_id": story_id,
"status": "failed",
"error": err,
}))
.map_err(|e| format!("Serialization error: {e}"))
}
crate::agents::merge::MergeJobStatus::Failed(err) => serde_json::to_string_pretty(&json!({
"story_id": story_id,
"status": "failed",
"error": err,
}))
.map_err(|e| format!("Serialization error: {e}")),
}
}
@@ -75,8 +71,9 @@ pub(super) fn tool_get_merge_status(args: &Value, ctx: &AppContext) -> Result<St
.and_then(|v| v.as_str())
.ok_or("Missing required argument: story_id")?;
let job = ctx.agents.get_merge_status(story_id)
.ok_or_else(|| format!("No merge job found for story '{story_id}'. Call merge_agent_work first."))?;
let job = ctx.agents.get_merge_status(story_id).ok_or_else(|| {
format!("No merge job found for story '{story_id}'. Call merge_agent_work first.")
})?;
match &job.status {
crate::agents::merge::MergeJobStatus::Running => {
@@ -127,7 +124,10 @@ pub(super) fn tool_get_merge_status(args: &Value, ctx: &AppContext) -> Result<St
}
}
pub(super) async fn tool_move_story_to_merge(args: &Value, ctx: &AppContext) -> Result<String, String> {
pub(super) async fn tool_move_story_to_merge(
args: &Value,
ctx: &AppContext,
) -> Result<String, String> {
let story_id = args
.get("story_id")
.and_then(|v| v.as_str())
@@ -176,10 +176,12 @@ pub(super) fn tool_report_merge_failure(args: &Value, ctx: &AppContext) -> Resul
// Broadcast the failure so the Matrix notification listener can post an
// error message to configured rooms without coupling this tool to the bot.
let _ = ctx.watcher_tx.send(crate::io::watcher::WatcherEvent::MergeFailure {
story_id: story_id.to_string(),
reason: reason.to_string(),
});
let _ = ctx
.watcher_tx
.send(crate::io::watcher::WatcherEvent::MergeFailure {
story_id: story_id.to_string(),
reason: reason.to_string(),
});
// Persist the failure reason to the story file's front matter so it
// survives server restarts and is visible in the web UI.
@@ -238,7 +240,7 @@ mod tests {
#[test]
fn merge_agent_work_in_tools_list() {
use super::super::{handle_tools_list};
use super::super::handle_tools_list;
let resp = handle_tools_list(Some(json!(1)));
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
let tool = tools.iter().find(|t| t["name"] == "merge_agent_work");
@@ -254,11 +256,14 @@ mod tests {
#[test]
fn move_story_to_merge_in_tools_list() {
use super::super::{handle_tools_list};
use super::super::handle_tools_list;
let resp = handle_tools_list(Some(json!(1)));
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
let tool = tools.iter().find(|t| t["name"] == "move_story_to_merge");
assert!(tool.is_some(), "move_story_to_merge missing from tools list");
assert!(
tool.is_some(),
"move_story_to_merge missing from tools list"
);
let t = tool.unwrap();
assert!(t["description"].is_string());
let required = t["inputSchema"]["required"].as_array().unwrap();
@@ -338,7 +343,7 @@ mod tests {
#[test]
fn report_merge_failure_in_tools_list() {
use super::super::{handle_tools_list};
use super::super::handle_tools_list;
let resp = handle_tools_list(Some(json!(1)));
let tools = resp.result.unwrap()["tools"].as_array().unwrap().clone();
let tool = tools.iter().find(|t| t["name"] == "report_merge_failure");
@@ -1,12 +1,12 @@
//! MCP server — Model Context Protocol endpoint dispatching tool calls to handlers.
use crate::slog_warn;
use crate::http::context::AppContext;
use crate::slog_warn;
use poem::handler;
use poem::http::StatusCode;
use poem::web::Data;
use poem::{Body, Request, Response};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
use serde_json::{Value, json};
use std::sync::Arc;
pub mod agent_tools;
@@ -1212,15 +1212,8 @@ fn handle_tools_list(id: Option<Value>) -> JsonRpcResponse {
// ── Tool dispatch ─────────────────────────────────────────────────
async fn handle_tools_call(
id: Option<Value>,
params: &Value,
ctx: &AppContext,
) -> JsonRpcResponse {
let tool_name = params
.get("name")
.and_then(|v| v.as_str())
.unwrap_or("");
async fn handle_tools_call(id: Option<Value>, params: &Value, ctx: &AppContext) -> JsonRpcResponse {
let tool_name = params.get("name").and_then(|v| v.as_str()).unwrap_or("");
let args = params.get("arguments").cloned().unwrap_or(json!({}));
let result = match tool_name {
@@ -1460,7 +1453,12 @@ mod tests {
));
let result = resp.result.unwrap();
assert_eq!(result["isError"], true);
assert!(result["content"][0]["text"].as_str().unwrap().contains("Unknown tool"));
assert!(
result["content"][0]["text"]
.as_str()
.unwrap()
.contains("Unknown tool")
);
}
#[test]
@@ -1572,7 +1570,10 @@ mod tests {
)
.await;
assert!(
body["error"]["message"].as_str().unwrap_or("").contains("version"),
body["error"]["message"]
.as_str()
.unwrap_or("")
.contains("version"),
"expected version error: {body}"
);
}
@@ -1599,9 +1600,7 @@ mod tests {
let resp = cli
.post("/mcp")
.header("content-type", "application/json")
.body(
r#"{"jsonrpc":"2.0","id":null,"method":"notifications/initialized","params":{}}"#,
)
.body(r#"{"jsonrpc":"2.0","id":null,"method":"notifications/initialized","params":{}}"#)
.send()
.await;
assert_eq!(resp.0.status(), poem::http::StatusCode::ACCEPTED);
@@ -1631,7 +1630,10 @@ mod tests {
)
.await;
assert!(
body["error"]["message"].as_str().unwrap_or("").contains("Unknown method"),
body["error"]["message"]
.as_str()
.unwrap_or("")
.contains("Unknown method"),
"expected unknown method error: {body}"
);
}
@@ -1719,14 +1721,21 @@ mod tests {
let body = resp.0.into_body().into_string().await.unwrap();
// Body is SSE-wrapped: "data: {…}\n\n" — strip the prefix and verify it's
// a valid JSON-RPC result (not an error about missing agent_name).
let json_part = body.trim_start_matches("data: ").trim_end_matches("\n\n").trim();
let json_part = body
.trim_start_matches("data: ")
.trim_end_matches("\n\n")
.trim();
let parsed: serde_json::Value = serde_json::from_str(json_part)
.unwrap_or_else(|_| panic!("expected JSON-RPC in SSE body, got: {body}"));
assert!(parsed.get("result").is_some(),
"expected JSON-RPC result (disk-based handler ran): {parsed}");
assert!(
parsed.get("result").is_some(),
"expected JSON-RPC result (disk-based handler ran): {parsed}"
);
// Must NOT be an error about missing agent_name (agent_name is now optional)
assert!(parsed.get("error").is_none(),
"unexpected error when agent_name omitted: {parsed}");
assert!(
parsed.get("error").is_none(),
"unexpected error when agent_name omitted: {parsed}"
);
}
#[tokio::test]
@@ -1749,8 +1758,14 @@ mod tests {
let body = resp.0.into_body().into_string().await.unwrap();
assert!(body.contains("data:"), "expected SSE data prefix: {body}");
// Must NOT return isError — should be a success result with "No log files found"
assert!(!body.contains("isError"), "expected no isError for missing agent: {body}");
assert!(body.contains("No log files found"), "expected not-found message: {body}");
assert!(
!body.contains("isError"),
"expected no isError for missing agent: {body}"
);
assert!(
body.contains("No log files found"),
"expected not-found message: {body}"
);
}
#[tokio::test]
@@ -1760,8 +1775,7 @@ mod tests {
// Agent has exited (not in pool) but wrote logs to disk.
let tmp = tempfile::tempdir().unwrap();
let root = tmp.path();
let mut writer =
AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-sse").unwrap();
let mut writer = AgentLogWriter::new(root, "42_story_foo", "coder-1", "sess-sse").unwrap();
writer
.write_event(&AgentEvent::Output {
story_id: "42_story_foo".to_string(),
@@ -1781,7 +1795,13 @@ mod tests {
.send()
.await;
let body = resp.0.into_body().into_string().await.unwrap();
assert!(body.contains("disk output"), "expected disk log content in SSE response: {body}");
assert!(!body.contains("isError"), "expected no error for exited agent with logs: {body}");
assert!(
body.contains("disk output"),
"expected disk log content in SSE response: {body}"
);
assert!(
!body.contains("isError"),
"expected no error for exited agent with logs: {body}"
);
}
}
@@ -1,5 +1,7 @@
//! MCP QA tools — request, approve, and reject QA reviews for stories.
use crate::agents::{move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa};
use crate::agents::{
move_story_to_done, move_story_to_merge, move_story_to_qa, reject_story_from_qa,
};
use crate::http::context::AppContext;
use crate::slog;
use crate::slog_warn;
@@ -63,11 +65,10 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
let root = project_root.clone();
let br = branch.clone();
let sid = story_id.to_string();
let merge_ok = tokio::task::spawn_blocking(move || {
merge_spike_branch_to_master(&root, &br, &sid)
})
.await
.map_err(|e| format!("Merge task panicked: {e}"))??;
let merge_ok =
tokio::task::spawn_blocking(move || merge_spike_branch_to_master(&root, &br, &sid))
.await
.map_err(|e| format!("Merge task panicked: {e}"))??;
move_story_to_done(&project_root, story_id)?;
@@ -77,12 +78,8 @@ pub(super) async fn tool_approve_qa(args: &Value, ctx: &AppContext) -> Result<St
let wt_path = crate::worktree::worktree_path(&project_root, story_id);
if wt_path.exists() {
let config = crate::config::ProjectConfig::load(&project_root).unwrap_or_default();
let _ = crate::worktree::remove_worktree_by_story_id(
&project_root,
story_id,
&config,
)
.await;
let _ = crate::worktree::remove_worktree_by_story_id(&project_root, story_id, &config)
.await;
}
pool.auto_assign_available_work(&project_root).await;
@@ -222,7 +219,13 @@ pub(super) async fn tool_reject_qa(args: &Value, ctx: &AppContext) -> Result<Str
);
if let Err(e) = ctx
.agents
.start_agent(&project_root, story_id, Some(agent_name), Some(&context), None)
.start_agent(
&project_root,
story_id,
Some(agent_name),
Some(&context),
None,
)
.await
{
slog_warn!("[qa] Failed to restart coder for '{story_id}' after rejection: {e}");
@@ -3,7 +3,7 @@ use crate::http::context::AppContext;
use bytes::Bytes;
use futures::StreamExt;
use poem::{Body, Response};
use serde_json::{json, Value};
use serde_json::{Value, json};
use std::path::PathBuf;
const DEFAULT_TIMEOUT_SECS: u64 = 120;
@@ -25,13 +25,7 @@ static BLOCKED_PATTERNS: &[&str] = &[
/// Binaries that are unconditionally blocked.
static BLOCKED_BINARIES: &[&str] = &[
"sudo",
"su",
"shutdown",
"reboot",
"halt",
"poweroff",
"mkfs",
"sudo", "su", "shutdown", "reboot", "halt", "poweroff", "mkfs",
];
/// Returns an error message if the command matches a blocked pattern or binary.
@@ -153,15 +147,13 @@ pub(super) async fn tool_run_command(args: &Value, ctx: &AppContext) -> Result<S
}
Ok(Err(e)) => Err(format!("Task join error: {e}")),
Ok(Ok(Err(e))) => Err(format!("Failed to execute command: {e}")),
Ok(Ok(Ok(output))) => {
serde_json::to_string_pretty(&json!({
"stdout": String::from_utf8_lossy(&output.stdout),
"stderr": String::from_utf8_lossy(&output.stderr),
"exit_code": output.status.code().unwrap_or(-1),
"timed_out": false,
}))
.map_err(|e| format!("Serialization error: {e}"))
}
Ok(Ok(Ok(output))) => serde_json::to_string_pretty(&json!({
"stdout": String::from_utf8_lossy(&output.stdout),
"stderr": String::from_utf8_lossy(&output.stderr),
"exit_code": output.status.code().unwrap_or(-1),
"timed_out": false,
}))
.map_err(|e| format!("Serialization error: {e}")),
}
}
@@ -172,7 +164,7 @@ pub(super) fn handle_run_command_sse(
params: &Value,
ctx: &AppContext,
) -> Response {
use super::{to_sse_response, JsonRpcResponse};
use super::{JsonRpcResponse, to_sse_response};
let args = params.get("arguments").cloned().unwrap_or(json!({}));
@@ -183,7 +175,7 @@ pub(super) fn handle_run_command_sse(
id,
-32602,
"Missing required argument: command".into(),
))
));
}
};
@@ -194,7 +186,7 @@ pub(super) fn handle_run_command_sse(
id,
-32602,
"Missing required argument: working_dir".into(),
))
));
}
};
@@ -326,9 +318,7 @@ pub(super) fn handle_run_command_sse(
.status(poem::http::StatusCode::OK)
.header("Content-Type", "text/event-stream")
.header("Cache-Control", "no-cache")
.body(Body::from_bytes_stream(stream.map(|r| {
r.map(Bytes::from)
})))
.body(Body::from_bytes_stream(stream.map(|r| r.map(Bytes::from))))
}
/// Truncate output to at most `max_lines` lines, keeping the tail.
@@ -364,7 +354,11 @@ fn parse_test_counts(output: &str) -> (u64, u64) {
fn extract_count(line: &str, label: &str) -> Option<u64> {
let pos = line.find(label)?;
let before = line[..pos].trim_end();
let num_str: String = before.chars().rev().take_while(|c| c.is_ascii_digit()).collect();
let num_str: String = before
.chars()
.rev()
.take_while(|c| c.is_ascii_digit())
.collect();
if num_str.is_empty() {
return None;
}
@@ -391,10 +385,7 @@ pub(super) async fn tool_run_tests(args: &Value, ctx: &AppContext) -> Result<Str
let script_path = working_dir.join("script").join("test");
if !script_path.exists() {
return Err(format!(
"Test script not found: {}",
script_path.display()
));
return Err(format!("Test script not found: {}", script_path.display()));
}
// Kill any existing test job for this worktree.
@@ -503,10 +494,7 @@ const TEST_POLL_BLOCK_SECS: u64 = 20;
/// Blocks for up to 15 seconds, checking every second. Returns immediately
/// when the test finishes, or after 15s with `{"status": "running"}`.
/// This server-side blocking prevents agents from wasting turns polling.
pub(super) async fn tool_get_test_result(
args: &Value,
ctx: &AppContext,
) -> Result<String, String> {
pub(super) async fn tool_get_test_result(args: &Value, ctx: &AppContext) -> Result<String, String> {
let project_root = ctx.agents.get_project_root(&ctx.state)?;
let working_dir = match args.get("worktree_path").and_then(|v| v.as_str()) {
@@ -703,9 +691,7 @@ pub(super) async fn tool_run_lint(args: &Value, ctx: &AppContext) -> Result<Stri
}
/// Format a `TestJobResult` as the JSON string returned to the agent.
fn format_test_result(
result: &crate::http::context::TestJobResult,
) -> Result<String, String> {
fn format_test_result(result: &crate::http::context::TestJobResult) -> Result<String, String> {
serde_json::to_string_pretty(&json!({
"passed": result.passed,
"exit_code": result.exit_code,
@@ -854,11 +840,8 @@ mod tests {
async fn tool_run_command_blocks_dangerous_command() {
let tmp = tempfile::tempdir().unwrap();
let ctx = test_ctx(tmp.path());
let result = tool_run_command(
&json!({"command": "rm -rf /", "working_dir": "/tmp"}),
&ctx,
)
.await;
let result =
tool_run_command(&json!({"command": "rm -rf /", "working_dir": "/tmp"}), &ctx).await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("blocked"));
}
@@ -1017,7 +1000,10 @@ mod tests {
let ctx = test_ctx(tmp.path());
// No script/test in tmp — should return Err
let result = tool_run_tests(&json!({}), &ctx).await;
assert!(result.is_err(), "expected error for missing script: {result:?}");
assert!(
result.is_err(),
"expected error for missing script: {result:?}"
);
assert!(
result.unwrap_err().contains("not found"),
"error should mention 'not found'"
@@ -1073,8 +1059,11 @@ mod tests {
std::fs::create_dir_all(&wt_dir).unwrap();
let ctx = test_ctx(tmp.path());
// tmp.path() itself is outside worktrees → should fail validation
let result =
tool_run_tests(&json!({"worktree_path": tmp.path().to_str().unwrap()}), &ctx).await;
let result = tool_run_tests(
&json!({"worktree_path": tmp.path().to_str().unwrap()}),
&ctx,
)
.await;
assert!(result.is_err());
assert!(
result.unwrap_err().contains("worktrees"),
@@ -1118,8 +1107,11 @@ mod tests {
let wt_dir = tmp.path().join(".huskies").join("worktrees");
std::fs::create_dir_all(&wt_dir).unwrap();
let ctx = test_ctx(tmp.path());
let result =
tool_run_build(&json!({"worktree_path": tmp.path().to_str().unwrap()}), &ctx).await;
let result = tool_run_build(
&json!({"worktree_path": tmp.path().to_str().unwrap()}),
&ctx,
)
.await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("worktrees"));
}
@@ -1184,9 +1176,18 @@ mod tests {
let lines: Vec<String> = (1..=200).map(|i| format!("line {i}")).collect();
let text = lines.join("\n");
let result = truncate_output(&text, 50);
-        assert!(result.contains("line 200"), "should keep last line: {result}");
-        assert!(result.contains("omitted"), "should note omitted lines: {result}");
-        assert!(!result.contains("line 1\n"), "should not keep first line: {result}");
+        assert!(
+            result.contains("line 200"),
+            "should keep last line: {result}"
+        );
+        assert!(
+            result.contains("omitted"),
+            "should note omitted lines: {result}"
+        );
+        assert!(
+            !result.contains("line 1\n"),
+            "should not keep first line: {result}"
+        );
}
// ── parse_test_counts ─────────────────────────────────────────────
@@ -20,7 +20,10 @@ fn parse_ac_items(contents: &str) -> Vec<(String, bool)> {
break;
}
if in_ac_section {
-            if let Some(rest) = trimmed.strip_prefix("- [x] ").or(trimmed.strip_prefix("- [X] ")) {
+            if let Some(rest) = trimmed
+                .strip_prefix("- [x] ")
+                .or(trimmed.strip_prefix("- [X] "))
+            {
items.push((rest.to_string(), true));
} else if let Some(rest) = trimmed.strip_prefix("- [ ] ") {
items.push((rest.to_string(), false));
@@ -33,10 +36,7 @@ fn parse_ac_items(contents: &str) -> Vec<(String, bool)> {
/// Find the most recent log file for any agent under `.huskies/logs/{story_id}/`.
fn find_most_recent_log(project_root: &Path, story_id: &str) -> Option<PathBuf> {
-    let dir = project_root
-        .join(".huskies")
-        .join("logs")
-        .join(story_id);
+    let dir = project_root.join(".huskies").join("logs").join(story_id);
if !dir.is_dir() {
return None;
@@ -68,8 +68,7 @@ fn find_most_recent_log(project_root: &Path, story_id: &str) -> Option<PathBuf>
/// Return the last N raw lines from a file.
fn last_n_lines(path: &Path, n: usize) -> Result<Vec<String>, String> {
-    let content =
-        fs::read_to_string(path).map_err(|e| format!("Failed to read log file: {e}"))?;
+    let content = fs::read_to_string(path).map_err(|e| format!("Failed to read log file: {e}"))?;
let lines: Vec<String> = content
.lines()
.rev()
@@ -172,9 +171,8 @@ pub(super) async fn tool_status(args: &Value, ctx: &AppContext) -> Result<String
));
}
-    let contents = crate::db::read_content(story_id).ok_or_else(|| {
-        format!("Story '{story_id}' has no content in the content store.")
-    })?;
+    let contents = crate::db::read_content(story_id)
+        .ok_or_else(|| format!("Story '{story_id}' has no content in the content store."))?;
// --- Front matter ---
let mut front_matter = serde_json::Map::new();
@@ -8,7 +8,9 @@ use crate::http::workflow::{
create_spike_file, create_story_file, list_bug_files, list_refactor_files, load_pipeline_state,
load_upcoming_stories, update_story_in_file, validate_story_dirs,
};
-use crate::io::story_metadata::{check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos};
+use crate::io::story_metadata::{
+    check_archived_deps, check_archived_deps_from_list, parse_front_matter, parse_unchecked_todos,
+};
use crate::slog_warn;
use crate::workflow::{TestCaseResult, TestStatus, evaluate_acceptance_with_coverage};
use serde_json::{Value, json};
@@ -496,7 +498,10 @@ pub(super) fn tool_unblock_story(args: &Value, ctx: &AppContext) -> Result<Strin
.filter(|s| !s.is_empty() && s.chars().all(|c| c.is_ascii_digit()))
.ok_or_else(|| format!("Invalid story_id format: '{story_id}'. Expected a numeric prefix (e.g. '42_story_foo')."))?;
-    Ok(crate::chat::commands::unblock::unblock_by_number(&root, story_number))
+    Ok(crate::chat::commands::unblock::unblock_by_number(
+        &root,
+        story_number,
+    ))
}
pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result<String, String> {
@@ -549,8 +554,7 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result<
// 3. Remove worktree (best-effort).
if let Ok(config) = crate::config::ProjectConfig::load(&project_root) {
-        match crate::worktree::remove_worktree_by_story_id(&project_root, story_id, &config).await
-        {
+        match crate::worktree::remove_worktree_by_story_id(&project_root, story_id, &config).await {
Ok(()) => slog_warn!("[delete_story] Removed worktree for '{story_id}'"),
Err(e) => slog_warn!("[delete_story] Worktree removal for '{story_id}': {e}"),
}
@@ -573,7 +577,10 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result<
// 5. Delete from database content store and shadow table.
let found_in_db = crate::db::read_content(story_id).is_some()
-        || crate::pipeline_state::read_typed(story_id).ok().flatten().is_some();
+        || crate::pipeline_state::read_typed(story_id)
+            .ok()
+            .flatten()
+            .is_some();
crate::db::delete_item(story_id);
slog_warn!("[delete_story] Deleted '{story_id}' from content store / shadow table");
@@ -599,7 +606,9 @@ pub(super) async fn tool_delete_story(args: &Value, ctx: &AppContext) -> Result<
deleted_from_fs = true;
}
Err(e) => {
-                slog_warn!("[delete_story] Failed to delete filesystem shadow '{story_id}' from work/{stage}/: {e}");
+                slog_warn!(
+                    "[delete_story] Failed to delete filesystem shadow '{story_id}' from work/{stage}/: {e}"
+                );
failed_steps.push(format!("delete_filesystem({stage}): {e}"));
}
}
@@ -820,7 +829,10 @@ mod tests {
.unwrap();
assert!(result.contains("Created story:"));
-        let story_id = result.trim_start_matches("Created story: ").trim().to_string();
+        let story_id = result
+            .trim_start_matches("Created story: ")
+            .trim()
+            .to_string();
let content = crate::db::read_content(&story_id).expect("story content should exist");
assert!(
content.contains("## Description"),
@@ -844,11 +856,7 @@ mod tests {
("4_merge", "9940_story_merge", "Merge Story"),
("5_done", "9950_story_done", "Done Story"),
] {
-            crate::db::write_item_with_content(
-                id,
-                stage,
-                &format!("---\nname: \"{name}\"\n---\n"),
-            );
+            crate::db::write_item_with_content(id, stage, &format!("---\nname: \"{name}\"\n---\n"));
}
let ctx = test_ctx(tmp.path());
@@ -869,7 +877,9 @@ mod tests {
// Backlog should contain our item
let backlog = parsed["backlog"].as_array().unwrap();
assert!(
-            backlog.iter().any(|b| b["story_id"] == "9910_story_upcoming"),
+            backlog
+                .iter()
+                .any(|b| b["story_id"] == "9910_story_upcoming"),
"expected 9910_story_upcoming in backlog: {backlog:?}"
);
}
@@ -896,7 +906,9 @@ mod tests {
let parsed: Value = serde_json::from_str(&result).unwrap();
let active = parsed["active"].as_array().unwrap();
-        let item = active.iter().find(|i| i["story_id"] == "9921_story_active")
+        let item = active
+            .iter()
+            .find(|i| i["story_id"] == "9921_story_active")
.expect("expected 9921_story_active in active items");
assert_eq!(item["stage"], "current");
assert!(!item["agent"].is_null(), "agent should be present");
@@ -1115,7 +1127,10 @@ mod tests {
)
.unwrap();
-        assert!(result.contains("_bug_login_crash"), "result should contain bug ID: {result}");
+        assert!(
+            result.contains("_bug_login_crash"),
+            "result should contain bug ID: {result}"
+        );
// Extract the actual bug ID from the result message (format: "Created bug: <id>").
let bug_id = result.trim_start_matches("Created bug: ").trim();
// Bug content should exist in the CRDT content store.
@@ -1157,11 +1172,15 @@ mod tests {
let result = tool_list_bugs(&ctx).unwrap();
let parsed: Vec<Value> = serde_json::from_str(&result).unwrap();
assert!(
-            parsed.iter().any(|b| b["bug_id"] == "9902_bug_crash" && b["name"] == "App Crash"),
+            parsed
+                .iter()
+                .any(|b| b["bug_id"] == "9902_bug_crash" && b["name"] == "App Crash"),
"expected 9902_bug_crash in bugs list: {parsed:?}"
);
assert!(
-            parsed.iter().any(|b| b["bug_id"] == "9903_bug_typo" && b["name"] == "Typo in Header"),
+            parsed
+                .iter()
+                .any(|b| b["bug_id"] == "9903_bug_typo" && b["name"] == "Typo in Header"),
"expected 9903_bug_typo in bugs list: {parsed:?}"
);
}
@@ -1252,12 +1271,14 @@ mod tests {
)
.unwrap();
-        assert!(result.contains("_spike_compare_encoders"), "result should contain spike ID: {result}");
+        assert!(
+            result.contains("_spike_compare_encoders"),
+            "result should contain spike ID: {result}"
+        );
// Extract the actual spike ID from the result message (format: "Created spike: <id>").
let spike_id = result.trim_start_matches("Created spike: ").trim();
// Spike content should exist in the CRDT content store.
-        let contents = crate::db::read_content(spike_id)
-            .expect("expected spike content in CRDT");
+        let contents = crate::db::read_content(spike_id).expect("expected spike content in CRDT");
assert!(contents.starts_with("---\nname: \"Compare Encoders\"\n---"));
assert!(contents.contains("Which encoder is fastest?"));
}
@@ -1268,13 +1289,15 @@ mod tests {
let ctx = test_ctx(tmp.path());
let result = tool_create_spike(&json!({"name": "My Spike"}), &ctx).unwrap();
-        assert!(result.contains("_spike_my_spike"), "result should contain spike ID: {result}");
+        assert!(
+            result.contains("_spike_my_spike"),
+            "result should contain spike ID: {result}"
+        );
// Extract the actual spike ID from the result message (format: "Created spike: <id>").
let spike_id = result.trim_start_matches("Created spike: ").trim();
// Spike content should exist in the CRDT content store.
-        let contents = crate::db::read_content(spike_id)
-            .expect("expected spike content in CRDT");
+        let contents = crate::db::read_content(spike_id).expect("expected spike content in CRDT");
assert!(contents.starts_with("---\nname: \"My Spike\"\n---"));
assert!(contents.contains("## Question\n\n- TBD\n"));
}
@@ -1326,7 +1349,9 @@ mod tests {
let ctx = test_ctx(tmp.path());
let result = tool_validate_stories(&ctx).unwrap();
let parsed: Vec<Value> = serde_json::from_str(&result).unwrap();
-        let item = parsed.iter().find(|v| v["story_id"] == "9907_test")
+        let item = parsed
+            .iter()
+            .find(|v| v["story_id"] == "9907_test")
.expect("expected 9907_test in validation results");
assert_eq!(item["valid"], true);
}
@@ -1336,16 +1361,14 @@ mod tests {
let tmp = tempfile::tempdir().unwrap();
crate::db::ensure_content_store();
-        crate::db::write_item_with_content(
-            "9908_test",
-            "2_current",
-            "## No front matter at all\n",
-        );
+        crate::db::write_item_with_content("9908_test", "2_current", "## No front matter at all\n");
let ctx = test_ctx(tmp.path());
let result = tool_validate_stories(&ctx).unwrap();
let parsed: Vec<Value> = serde_json::from_str(&result).unwrap();
-        let item = parsed.iter().find(|v| v["story_id"] == "9908_test")
+        let item = parsed
+            .iter()
+            .find(|v| v["story_id"] == "9908_test")
.expect("expected 9908_test in validation results");
assert_eq!(item["valid"], false);
}
@@ -1551,11 +1574,7 @@ mod tests {
let current_dir = tmp.path().join(".huskies/work/2_current");
std::fs::create_dir_all(&current_dir).unwrap();
let content = "---\nname: No Branch\n---\n";
-        std::fs::write(
-            current_dir.join("51_story_no_branch.md"),
-            content,
-        )
-        .unwrap();
+        std::fs::write(current_dir.join("51_story_no_branch.md"), content).unwrap();
crate::db::ensure_content_store();
crate::db::write_content("51_story_no_branch", content);
@@ -1594,8 +1613,14 @@ mod tests {
assert!(result.is_ok(), "Expected ok: {result:?}");
let content = crate::db::read_content("504_bool_test").unwrap();
-        assert!(content.contains("blocked: false"), "bool should be unquoted: {content}");
-        assert!(!content.contains("blocked: \"false\""), "bool must not be quoted: {content}");
+        assert!(
+            content.contains("blocked: false"),
+            "bool should be unquoted: {content}"
+        );
+        assert!(
+            !content.contains("blocked: \"false\""),
+            "bool must not be quoted: {content}"
+        );
}
#[test]
@@ -1615,8 +1640,14 @@ mod tests {
assert!(result.is_ok(), "Expected ok: {result:?}");
let content = crate::db::read_content("504_num_test").unwrap();
-        assert!(content.contains("retry_count: 3"), "number should be unquoted: {content}");
-        assert!(!content.contains("retry_count: \"3\""), "number must not be quoted: {content}");
+        assert!(
+            content.contains("retry_count: 3"),
+            "number should be unquoted: {content}"
+        );
+        assert!(
+            !content.contains("retry_count: \"3\""),
+            "number must not be quoted: {content}"
+        );
}
#[test]
@@ -1637,8 +1668,14 @@ mod tests {
let content = crate::db::read_content("504_arr_test").unwrap();
// YAML inline sequences use spaces after commas
-        assert!(content.contains("depends_on: [490, 491]"), "array should be unquoted YAML: {content}");
-        assert!(!content.contains("depends_on: \""), "array must not be quoted: {content}");
+        assert!(
+            content.contains("depends_on: [490, 491]"),
+            "array should be unquoted YAML: {content}"
+        );
+        assert!(
+            !content.contains("depends_on: \""),
+            "array must not be quoted: {content}"
+        );
// The YAML must be parseable as a vec
let meta = crate::io::story_metadata::parse_front_matter(&content)
@@ -1677,8 +1714,10 @@ mod tests {
);
let ctx = test_ctx(tmp.path());
-        let result =
-            tool_check_criterion(&json!({"story_id": "9904_test", "criterion_index": 0}), &ctx);
+        let result = tool_check_criterion(
+            &json!({"story_id": "9904_test", "criterion_index": 0}),
+            &ctx,
+        );
assert!(result.is_ok(), "Expected ok: {result:?}");
assert!(result.unwrap().contains("Criterion 0 checked"));
}
@@ -1719,11 +1758,8 @@ mod tests {
assert_eq!(ctx.timer_store.list().len(), 1);
// Delete the story.
-        let result = tool_delete_story(
-            &json!({"story_id": "478_story_rate_limit_repro"}),
-            &ctx,
-        )
-        .await;
+        let result =
+            tool_delete_story(&json!({"story_id": "478_story_rate_limit_repro"}), &ctx).await;
assert!(result.is_ok(), "delete_story failed: {result:?}");
// Timer must be gone — fast-forwarding past the scheduled time should
@@ -1741,9 +1777,7 @@ mod tests {
// Filesystem shadow must also be gone.
assert!(
-            !backlog
-                .join("478_story_rate_limit_repro.md")
-                .exists(),
+            !backlog.join("478_story_rate_limit_repro.md").exists(),
"filesystem shadow was not removed"
);
}
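The headline change from the commit message — adding `--all` to the `cargo fmt` invocation in `script/test` — does not appear in the hunks above. A minimal sketch of what that gate line amounts to; the surrounding script contents and variable name are assumptions, not the actual file:

```shell
#!/usr/bin/env bash
set -euo pipefail
# In a workspace repo, plain `cargo fmt` run from the workspace root can fail
# with "Failed to find targets"; `--all` formats every workspace member crate.
fmt_check="cargo fmt --all -- --check"   # was: cargo fmt -- --check
echo "gate: $fmt_check"
```

The `-- --check` form makes the gate fail on unformatted code instead of rewriting it, which is why the commit also had to run `cargo fmt --all` once to bring the existing 128 files into compliance.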

Some files were not shown because too many files have changed in this diff.