diff --git a/README.md b/README.md
index 6fe09162..7348cc83 100644
--- a/README.md
+++ b/README.md
@@ -79,6 +79,13 @@ cd frontend && npm install && npm run dev
 
 Configuration lives in `.huskies/project.toml`. See `.huskies/bot.toml.*.example` for transport setup.
 
+## Architecture
+
+Internal architecture documentation lives in [`docs/architecture/`](docs/architecture/):
+
+- [Service module conventions](docs/architecture/service-modules.md) — layout, layering rules, and patterns for `server/src/service/`
+- [Future extraction targets](docs/architecture/future-extractions.md) — recommended order for remaining handler extractions
+
 ## Releasing
 
 Requires a Gitea API token in `.env` (`GITEA_TOKEN=your_token`).
diff --git a/docs/architecture/future-extractions.md b/docs/architecture/future-extractions.md
new file mode 100644
index 00000000..97559d7d
--- /dev/null
+++ b/docs/architecture/future-extractions.md
@@ -0,0 +1,29 @@
+# Future Service Module Extractions
+
+Recommended order for extracting remaining HTTP handlers into `service/<domain>/`
+modules, following the conventions in [service-modules.md](service-modules.md).
+
+## Recommended Order
+
+1. **`settings`** — small surface, few dependencies, good warm-up
+2. **`oauth`** — reads/writes token files; pure validation logic separates cleanly
+3. **`wizard`** — stateless generation logic is already mostly pure; thin I/O layer
+4. **`project`** — project scaffolding; wraps `io::fs::scaffold`, clean separation
+5. **`io`** (search/shell) — wraps `io::search` and `io::shell`; pure query-building separable
+6. **`anthropic`** — token-proxy handler; pure request-shaping + thin HTTP I/O
+7. **`stories`** (workflow) — CRDT-backed story ops; typed errors for 400/404/409/500
+8. **`events`** — SSE handler; mostly framework wiring, but event filtering is pure
+
+## Special Case: `ws`
+
+The WebSocket handler (`http/ws.rs`) is a **dedicated harder extraction** because
+it mixes multiple concerns (chat dispatch, permission forwarding, SSE bridging)
+and depends on long-lived async streams. Extract it last, after the above list
+is complete and the service module pattern is well-established.
+
+## Notes
+
+- Each extraction should link back to `docs/architecture/service-modules.md`
+  in the story description to maintain consistency.
+- The `agents` extraction (story 604) is the reference implementation every
+  future extraction should follow.
diff --git a/docs/architecture/service-modules.md b/docs/architecture/service-modules.md
new file mode 100644
index 00000000..a0c1e609
--- /dev/null
+++ b/docs/architecture/service-modules.md
@@ -0,0 +1,191 @@
+# Service Module Conventions
+
+This document defines the layout, layering rules, and patterns for all service
+modules under `server/src/service/`. Every extraction from the HTTP handlers to
+a service module **must** follow these conventions.
+
+---
+
+## 1. Directory Layout
+
+```
+server/src/service/<domain>/
+  mod.rs       — public API, typed Error, orchestration, integration tests
+  io.rs        — every side-effectful call; the ONLY file that may touch the
+                 filesystem, spawn processes, or call external crates that do
+  <topic>.rs   — pure logic for a named concern within the domain; no I/O
+```
+
+### Rules
+
+- `<domain>` matches the HTTP handler filename (e.g. `agents`, `settings`,
+  `oauth`).
+- **No file named `logic.rs`** — use a descriptive domain name instead
+  (e.g. `selection.rs`, `token.rs`, `validation.rs`).
+- New topic files are added when a pure concern grows beyond ~50 lines or when
+  it has independent test coverage needs.
+
+---
+
+## 2. The Functional-Core / Imperative-Shell Rule
+
+```
+io.rs (imperative shell) ←→ mod.rs (orchestrator) ←→ <topic>.rs (functional core)
+```
+
+| Layer | Allowed | Forbidden |
+|-------|---------|-----------|
+| `<topic>.rs` | Pure Rust, data transformation, branching logic, pattern matching | Any I/O |
+| `io.rs` | `std::fs`, `std::process`, `tokio::fs`, network calls, `SystemTime::now` | Business logic beyond a thin wrapper |
+| `mod.rs` | Calls into `io.rs` and `<topic>.rs`; owns the `Error` type | Direct I/O without going through `io.rs` |
+
+**Grep-enforceable check:** the following must NOT appear in any
+`service/<domain>/` file other than `io.rs`:
+
+- `std::fs`
+- `std::process`
+- `std::thread::sleep`
+- `tokio::fs`
+- `reqwest`
+- `SystemTime::now`
+
+---
+
+## 3. Error Type Pattern
+
+Each service domain declares its own typed error enum in `mod.rs`:
+
+```rust
+/// Errors returned by `service::agents` operations.
+#[derive(Debug)]
+pub enum Error {
+    ProjectRootNotConfigured,
+    AgentNotFound(String),
+    WorkItemNotFound(String),
+    WorktreeError(String),
+    ConfigError(String),
+    IoError(String),
+}
+
+impl std::fmt::Display for Error { ... }
+```
+
+HTTP handlers map service errors to **specific** HTTP status codes:
+
+| Error variant | HTTP status |
+|--------------|-------------|
+| `ProjectRootNotConfigured` | 400 Bad Request |
+| `AgentNotFound` | 404 Not Found |
+| `WorkItemNotFound` | 404 Not Found |
+| `WorktreeError` | 400 Bad Request |
+| `ConfigError` | 400 Bad Request |
+| `IoError` | 500 Internal Server Error |
+
+**No generic `bad_request` for everything** — distinguish 400 vs 404 vs 500.
+
+---
+
+## 4. Test Pattern
+
+### Pure topic files (`<topic>.rs`)
+
+```rust
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // Unit tests MUST:
+    // - Use no tempdir, tokio runtime, or filesystem
+    // - Cover every branch of every public function
+    #[test]
+    fn filter_removes_archived_agents() { ... }
+}
+```
+
+### `io.rs`

+```rust
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use tempfile::TempDir;
+
+    // IO tests MAY use tempdirs and the real filesystem.
+    // Keep them few and focused on the thin I/O wrapper contract.
+    #[test]
+    fn is_archived_returns_true_when_in_done() { ... }
+}
+```
+
+### `mod.rs`
+
+```rust
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // Integration tests compose io + pure layers end-to-end.
+    // May use tempdirs. Keep the count small — they are integration-level.
+    #[tokio::test]
+    async fn list_agents_excludes_archived() { ... }
+}
+```
+
+---
+
+## 5. Dependency Injection Pattern
+
+Service functions take **only the dependencies they actually use**:
+
+```rust
+// Good — takes only what it needs
+pub async fn start_agent(
+    pool: &AgentPool,
+    project_root: &Path,
+    story_id: &str,
+    agent_name: Option<&str>,
+) -> Result<AgentInfo, Error> { ... }
+
+// Bad — takes the whole AppContext
+pub async fn start_agent(ctx: &AppContext, ...) -> Result<AgentInfo, Error> { ... }
+```
+
+Standard injected dependencies for `service::agents`:
+
+| Type | Purpose |
+|------|---------|
+| `&AgentPool` | Agent lifecycle operations |
+| `&Path` (`project_root`) | Filesystem operations scoped to the project |
+| `&WorkflowState` | In-memory test result cache |
+
+**The dependency set chosen for `agents` is the reference pattern for all future
+service module extractions.**
+
+---
+
+## 6. HTTP Handler Contract
+
+After extraction, HTTP handlers are thin adapters:
+
+```rust
+async fn start_agent(&self, payload: Json<...>) -> OpenApiResult<...> {
+    let project_root = self.ctx.agents.get_project_root(&self.ctx.state)
+        .map_err(|e| bad_request(e))?;               // extract from AppContext
+    let info = service::agents::start_agent(         // call service
+        &self.ctx.agents, &project_root, &payload.story_id, payload.agent_name.as_deref(),
+    ).await.map_err(map_service_error)?;             // map typed error → HTTP
+    Ok(Json(AgentInfoResponse { ... }))              // shape DTO
+}
+```
+
+Handlers must contain **no**:
+
+- `std::fs` / file reads
+- `std::process` invocations
+- Inline load-mutate-save sequences
+- Inline validation that belongs in the service layer
+
+---
+
+## 7. Follow-up Extractions
+
+See [future-extractions.md](future-extractions.md) for the recommended order
+and rationale for remaining extraction targets.
diff --git a/server/src/http/agents.rs b/server/src/http/agents.rs
index 4f090947..2eae605a 100644
--- a/server/src/http/agents.rs
+++ b/server/src/http/agents.rs
@@ -1,11 +1,14 @@
-//! HTTP agent endpoints — REST API for listing, starting, stopping, and inspecting agents.
-use crate::config::ProjectConfig;
+//! HTTP agent endpoints — thin adapters over `service::agents`.
+//!
+//! Each handler: extracts payload → calls `service::agents::X` → shapes
+//! response DTO → returns HTTP result. No filesystem access, no inline
+//! validation, no process invocations.
 use crate::http::context::{AppContext, OpenApiResult, bad_request, not_found};
+use crate::service::agents::{self as svc, AgentConfigEntry, WorkItemContent};
 use crate::workflow::{StoryTestResults, TestCaseResult, TestStatus};
-use crate::worktree;
+use poem::http::StatusCode;
 use poem_openapi::{Object, OpenApi, Tags, param::Path, payload::Json};
 use serde::Serialize;
-use std::path;
 use std::sync::Arc;
 
 #[derive(Tags)]
@@ -45,6 +48,20 @@ struct AgentConfigInfoResponse {
     max_budget_usd: Option<f64>,
 }
 
+impl From<AgentConfigEntry> for AgentConfigInfoResponse {
+    fn from(e: AgentConfigEntry) -> Self {
+        Self {
+            name: e.name,
+            role: e.role,
+            stage: e.stage,
+            model: e.model,
+            allowed_tools: e.allowed_tools,
+            max_turns: e.max_turns,
+            max_budget_usd: e.max_budget_usd,
+        }
+    }
+}
+
 #[derive(Object)]
 struct CreateWorktreePayload {
     story_id: String,
@@ -73,6 +90,17 @@ struct WorkItemContentResponse {
     agent: Option<String>,
 }
 
+impl From<WorkItemContent> for WorkItemContentResponse {
+    fn from(w: WorkItemContent) -> Self {
+        Self {
+            content: w.content,
+            stage: w.stage,
+            name: w.name,
+            agent: w.agent,
+        }
+    }
+}
+
 /// A single test case result for the OpenAPI response.
 #[derive(Object, Serialize)]
 struct TestCaseResultResponse {
@@ -153,15 +181,23 @@ struct AllTokenUsageResponse {
     records: Vec,
 }
 
-/// Returns true if the story file exists in `work/5_done/` or `work/6_archived/`.
-///
-/// Used to exclude agents for already-archived stories from the `list_agents`
-/// response so the agents panel is not cluttered with old completed items on
-/// frontend startup.
-pub fn story_is_archived(project_root: &path::Path, story_id: &str) -> bool {
-    let work = project_root.join(".huskies").join("work");
-    let filename = format!("{story_id}.md");
-    work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
+/// Map a `service::agents::Error` to a Poem HTTP error with the correct status.
+fn map_svc_error(err: svc::Error) -> poem::Error {
+    match err {
+        svc::Error::AgentNotFound(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
+        }
+        svc::Error::WorkItemNotFound(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::NOT_FOUND)
+        }
+        svc::Error::Worktree(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST)
+        }
+        svc::Error::Config(_) => poem::Error::from_string(err.to_string(), StatusCode::BAD_REQUEST),
+        svc::Error::Io(_) => {
+            poem::Error::from_string(err.to_string(), StatusCode::INTERNAL_SERVER_ERROR)
+        }
+    }
 }
 
 pub struct AgentsApi {
@@ -183,18 +219,16 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let info = self
-            .ctx
-            .agents
-            .start_agent(
-                &project_root,
-                &payload.0.story_id,
-                payload.0.agent_name.as_deref(),
-                None,
-                None,
-            )
-            .await
-            .map_err(bad_request)?;
+        let info = svc::start_agent(
+            &self.ctx.agents,
+            &project_root,
+            &payload.0.story_id,
+            payload.0.agent_name.as_deref(),
+            None,
+            None,
+        )
+        .await
+        .map_err(map_svc_error)?;
 
         Ok(Json(AgentInfoResponse {
             story_id: info.story_id,
@@ -214,11 +248,14 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        self.ctx
-            .agents
-            .stop_agent(&project_root, &payload.0.story_id, &payload.0.agent_name)
-            .await
-            .map_err(bad_request)?;
+        svc::stop_agent(
+            &self.ctx.agents,
+            &project_root,
+            &payload.0.story_id,
+            &payload.0.agent_name,
+        )
+        .await
+        .map_err(map_svc_error)?;
 
         Ok(Json(true))
     }
@@ -231,17 +268,12 @@ impl AgentsApi {
     #[oai(path = "/agents", method = "get")]
     async fn list_agents(&self) -> OpenApiResult<Json<Vec<AgentInfoResponse>>> {
         let project_root = self.ctx.agents.get_project_root(&self.ctx.state).ok();
-        let agents = self.ctx.agents.list_agents().map_err(bad_request)?;
+        let agents =
+            svc::list_agents(&self.ctx.agents, project_root.as_deref()).map_err(map_svc_error)?;
 
         Ok(Json(
             agents
                 .into_iter()
-                .filter(|info| {
-                    project_root
-                        .as_deref()
-                        .map(|root| !story_is_archived(root, &info.story_id))
-                        .unwrap_or(true)
-                })
                 .map(|info| AgentInfoResponse {
                     story_id: info.story_id,
                     agent_name: info.agent_name,
@@ -262,21 +294,11 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
-
+        let entries = svc::get_agent_config(&project_root).map_err(map_svc_error)?;
         Ok(Json(
-            config
-                .agent
-                .iter()
-                .map(|a| AgentConfigInfoResponse {
-                    name: a.name.clone(),
-                    role: a.role.clone(),
-                    stage: a.stage.clone(),
-                    model: a.model.clone(),
-                    allowed_tools: a.allowed_tools.clone(),
-                    max_turns: a.max_turns,
-                    max_budget_usd: a.max_budget_usd,
-                })
+            entries
+                .into_iter()
+                .map(AgentConfigInfoResponse::from)
                 .collect(),
         ))
     }
@@ -290,21 +312,11 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
-
+        let entries = svc::reload_config(&project_root).map_err(map_svc_error)?;
         Ok(Json(
-            config
-                .agent
-                .iter()
-                .map(|a| AgentConfigInfoResponse {
-                    name: a.name.clone(),
-                    role: a.role.clone(),
-                    stage: a.stage.clone(),
-                    model: a.model.clone(),
-                    allowed_tools: a.allowed_tools.clone(),
-                    max_turns: a.max_turns,
-                    max_budget_usd: a.max_budget_usd,
-                })
+            entries
+                .into_iter()
+                .map(AgentConfigInfoResponse::from)
                 .collect(),
         ))
     }
@@ -321,12 +333,9 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let info = self
-            .ctx
-            .agents
-            .create_worktree(&project_root, &payload.0.story_id)
+        let info = svc::create_worktree(&self.ctx.agents, &project_root, &payload.0.story_id)
             .await
-            .map_err(bad_request)?;
+            .map_err(map_svc_error)?;
 
         Ok(Json(WorktreeInfoResponse {
             story_id: payload.0.story_id,
@@ -345,7 +354,7 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let entries = worktree::list_worktrees(&project_root).map_err(bad_request)?;
+        let entries = svc::list_worktrees(&project_root).map_err(map_svc_error)?;
 
         Ok(Json(
             entries
@@ -373,64 +382,12 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let stages = [
-            ("1_backlog", "backlog"),
-            ("2_current", "current"),
-            ("3_qa", "qa"),
-            ("4_merge", "merge"),
-            ("5_done", "done"),
-            ("6_archived", "archived"),
-        ];
+        let item = svc::get_work_item_content(&project_root, &story_id.0).map_err(|e| match e {
+            svc::Error::WorkItemNotFound(_) => not_found(e.to_string()),
+            other => map_svc_error(other),
+        })?;
 
-        let work_dir = project_root.join(".huskies").join("work");
-        let filename = format!("{}.md", story_id.0);
-
-        for (stage_dir, stage_name) in &stages {
-            let file_path = work_dir.join(stage_dir).join(&filename);
-            if file_path.exists() {
-                let content = std::fs::read_to_string(&file_path)
-                    .map_err(|e| bad_request(format!("Failed to read work item: {e}")))?;
-                let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
-                let name = metadata.as_ref().and_then(|m| m.name.clone());
-                let agent = metadata.and_then(|m| m.agent);
-                return Ok(Json(WorkItemContentResponse {
-                    content,
-                    stage: stage_name.to_string(),
-                    name,
-                    agent,
-                }));
-            }
-        }
-
-        // Filesystem miss — fall back to CRDT-only path (story exists in the CRDT
-        // but has no corresponding .md file on disk).
-        if let Some(content) = crate::db::read_content(&story_id.0) {
-            let item = crate::pipeline_state::read_typed(&story_id.0)
-                .map_err(|e| bad_request(format!("Pipeline read error: {e}")))?;
-            let stage = item
-                .as_ref()
-                .map(|i| match &i.stage {
-                    crate::pipeline_state::Stage::Backlog => "backlog",
-                    crate::pipeline_state::Stage::Coding => "current",
-                    crate::pipeline_state::Stage::Qa => "qa",
-                    crate::pipeline_state::Stage::Merge { .. } => "merge",
-                    crate::pipeline_state::Stage::Done { .. } => "done",
-                    crate::pipeline_state::Stage::Archived { .. } => "archived",
-                })
-                .unwrap_or("unknown")
-                .to_string();
-            let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
-            let name = metadata.as_ref().and_then(|m| m.name.clone());
-            let agent = metadata.and_then(|m| m.agent);
-            return Ok(Json(WorkItemContentResponse {
-                content,
-                stage,
-                name,
-                agent,
-            }));
-        }
-
-        Err(not_found(format!("Work item not found: {}", story_id.0)))
+        Ok(Json(WorkItemContentResponse::from(item)))
     }
 
     /// Get test results for a work item by its story_id.
@@ -442,30 +399,37 @@ impl AgentsApi {
         &self,
         story_id: Path<String>,
     ) -> OpenApiResult<Json<Option<TestResultsResponse>>> {
-        // Try in-memory workflow state first.
-        let workflow = self
-            .ctx
-            .workflow
-            .lock()
-            .map_err(|e| bad_request(format!("Lock error: {e}")))?;
-
-        if let Some(results) = workflow.results.get(&story_id.0) {
-            return Ok(Json(Some(TestResultsResponse::from_story_results(results))));
+        // Fast path: return from in-memory state without requiring project_root.
+        let in_memory = {
+            let workflow = self
+                .ctx
+                .workflow
+                .lock()
+                .map_err(|e| bad_request(format!("Lock error: {e}")))?;
+            workflow.results.get(&story_id.0).cloned()
+        };
+        if let Some(results) = in_memory {
+            return Ok(Json(Some(TestResultsResponse::from_story_results(
+                &results,
+            ))));
         }
-        drop(workflow);
 
-        // Fall back to file-persisted results.
+        // Slow path: fall back to results persisted in the story file.
         let project_root = self
             .ctx
             .agents
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let file_results =
-            crate::http::workflow::read_test_results_from_story_file(&project_root, &story_id.0);
+        let workflow = self
+            .ctx
+            .workflow
+            .lock()
+            .map_err(|e| bad_request(format!("Lock error: {e}")))?;
+        let results = svc::get_test_results(&project_root, &story_id.0, &workflow);
 
         Ok(Json(
-            file_results.map(|r| TestResultsResponse::from_story_results(&r)),
+            results.map(|r| TestResultsResponse::from_story_results(&r)),
         ))
     }
 
@@ -486,26 +450,8 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let log_path = crate::agent_log::find_latest_log(&project_root, &story_id.0, &agent_name.0);
-
-        let Some(path) = log_path else {
-            return Ok(Json(AgentOutputResponse {
-                output: String::new(),
-            }));
-        };
-
-        let entries = crate::agent_log::read_log(&path).map_err(bad_request)?;
-
-        let output: String = entries
-            .iter()
-            .filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
-            .filter_map(|e| {
-                e.event
-                    .get("text")
-                    .and_then(|t| t.as_str())
-                    .map(str::to_owned)
-            })
-            .collect();
+        let output = svc::get_agent_output(&project_root, &story_id.0, &agent_name.0)
+            .map_err(map_svc_error)?;
 
         Ok(Json(AgentOutputResponse { output }))
     }
@@ -519,10 +465,9 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let config = ProjectConfig::load(&project_root).map_err(bad_request)?;
-        worktree::remove_worktree_by_story_id(&project_root, &story_id.0, &config)
+        svc::remove_worktree(&project_root, &story_id.0)
             .await
-            .map_err(bad_request)?;
+            .map_err(map_svc_error)?;
 
         Ok(Json(true))
     }
@@ -542,39 +487,25 @@ impl AgentsApi {
             .get_project_root(&self.ctx.state)
             .map_err(bad_request)?;
 
-        let all_records = crate::agents::token_usage::read_all(&project_root)
-            .map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
+        let summary =
+            svc::get_work_item_token_cost(&project_root, &story_id.0).map_err(map_svc_error)?;
 
-        let mut agent_map: std::collections::HashMap<String, AgentCostEntry> =
-            std::collections::HashMap::new();
-
-        let mut total_cost_usd = 0.0_f64;
-
-        for record in all_records.into_iter().filter(|r| r.story_id == story_id.0) {
-            total_cost_usd += record.usage.total_cost_usd;
-            let entry = agent_map
-                .entry(record.agent_name.clone())
-                .or_insert_with(|| AgentCostEntry {
-                    agent_name: record.agent_name.clone(),
-                    model: record.model.clone(),
-                    input_tokens: 0,
-                    output_tokens: 0,
-                    cache_creation_input_tokens: 0,
-                    cache_read_input_tokens: 0,
-                    total_cost_usd: 0.0,
-                });
-            entry.input_tokens += record.usage.input_tokens;
-            entry.output_tokens += record.usage.output_tokens;
-            entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
-            entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
-            entry.total_cost_usd += record.usage.total_cost_usd;
-        }
-
-        let mut agents: Vec<AgentCostEntry> = agent_map.into_values().collect();
-        agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));
+        let agents = summary
+            .agents
+            .into_iter()
+            .map(|a| AgentCostEntry {
+                agent_name: a.agent_name,
+                model: a.model,
+                input_tokens: a.input_tokens,
+                output_tokens: a.output_tokens,
+                cache_creation_input_tokens: a.cache_creation_input_tokens,
+                cache_read_input_tokens: a.cache_read_input_tokens,
+                total_cost_usd: a.total_cost_usd,
+            })
+            .collect();
 
         Ok(Json(TokenCostResponse {
-            total_cost_usd,
+            total_cost_usd: summary.total_cost_usd,
             agents,
         }))
     }
@@ -590,8 +521,7 @@ impl AgentsApi {
            .get_project_root(&self.ctx.state)
            .map_err(bad_request)?;
 
-        let records = crate::agents::token_usage::read_all(&project_root)
-            .map_err(|e| bad_request(format!("Failed to read token usage: {e}")))?;
+        let records = svc::get_all_token_usage(&project_root).map_err(map_svc_error)?;
 
         let response_records: Vec = records
             .into_iter()
@@ -618,6 +548,7 @@ impl AgentsApi {
 mod tests {
     use super::*;
     use crate::agents::AgentStatus;
+    use std::path;
     use tempfile::TempDir;
 
     fn make_work_dirs(tmp: &TempDir) -> path::PathBuf {
@@ -632,7 +563,7 @@ mod tests {
     fn story_is_archived_false_when_file_absent() {
         let tmp = TempDir::new().unwrap();
         let root = make_work_dirs(&tmp);
-        assert!(!story_is_archived(&root, "79_story_foo"));
+        assert!(!svc::is_archived(&root, "79_story_foo"));
     }
 
     #[test]
@@ -644,7 +575,7 @@ mod tests {
             "---\nname: test\n---\n",
         )
         .unwrap();
-        assert!(story_is_archived(&root, "79_story_foo"));
+        assert!(svc::is_archived(&root, "79_story_foo"));
     }
 
     #[test]
@@ -656,7 +587,7 @@ mod tests {
             "---\nname: test\n---\n",
         )
         .unwrap();
-        assert!(story_is_archived(&root, "79_story_foo"));
+        assert!(svc::is_archived(&root, "79_story_foo"));
     }
 
     #[tokio::test]
diff --git a/server/src/http/mcp/agent_tools.rs b/server/src/http/mcp/agent_tools.rs
index 22ca0806..9315a9ae 100644
--- a/server/src/http/mcp/agent_tools.rs
+++ b/server/src/http/mcp/agent_tools.rs
@@ -86,7 +86,7 @@ pub(super) fn tool_list_agents(ctx: &AppContext) -> Result {
         .filter(|a| {
             project_root
                 .as_deref()
-                .map(|root| !crate::http::agents::story_is_archived(root, &a.story_id))
+                .map(|root| !crate::service::agents::is_archived(root, &a.story_id))
                 .unwrap_or(true)
         })
         .map(|a| json!({
diff --git a/server/src/main.rs b/server/src/main.rs
index fc94154f..d54290f6 100644
--- a/server/src/main.rs
+++ b/server/src/main.rs
@@ -20,6 +20,7 @@ mod llm;
 pub mod log_buffer;
 pub(crate) mod pipeline_state;
 pub mod rebuild;
+mod service;
 mod state;
 mod store;
 mod workflow;
diff --git a/server/src/service/agents/io.rs b/server/src/service/agents/io.rs
new file mode 100644
index 00000000..4588ec5d
--- /dev/null
+++ b/server/src/service/agents/io.rs
@@ -0,0 +1,190 @@
+//! Agent I/O wrappers — the ONLY place in `service/agents/` that may perform
+//! filesystem reads, process invocations, or other side effects.
+//!
+//! Every function here is a thin adapter over an existing lower-level call.
+//! No business logic lives here; all branching belongs in the pure topic files
+//! or in `mod.rs`.
+use crate::agent_log::{self, LogEntry};
+use crate::agents::token_usage::{self, TokenUsageRecord};
+use crate::config::ProjectConfig;
+use crate::worktree::{self, WorktreeListEntry};
+use std::path::Path;
+
+use super::Error;
+
+/// Return `true` if the story's `.md` file exists in `5_done/` or `6_archived/`.
+pub fn is_archived(project_root: &Path, story_id: &str) -> bool {
+    let work = project_root.join(".huskies").join("work");
+    let filename = format!("{story_id}.md");
+    work.join("5_done").join(&filename).exists() || work.join("6_archived").join(&filename).exists()
+}
+
+/// Read and return all log entries for the most recent session of an agent.
+///
+/// Returns `Ok(vec![])` when no log file exists yet.
+pub fn read_agent_log(
+    project_root: &Path,
+    story_id: &str,
+    agent_name: &str,
+) -> Result<Vec<LogEntry>, Error> {
+    let log_path = agent_log::find_latest_log(project_root, story_id, agent_name);
+    let Some(path) = log_path else {
+        return Ok(Vec::new());
+    };
+    agent_log::read_log(&path).map_err(Error::Io)
+}
+
+/// Read all token usage records from the persistent JSONL file.
+///
+/// Returns an empty vec when the file does not yet exist.
+pub fn read_token_records(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
+    token_usage::read_all(project_root).map_err(Error::Io)
+}
+
+/// Load the project configuration from `project.toml`.
+///
+/// Falls back to default config when the file is absent.
+pub fn load_config(project_root: &Path) -> Result<ProjectConfig, Error> {
+    ProjectConfig::load(project_root).map_err(Error::Config)
+}
+
+/// List all worktrees under `.huskies/worktrees/`.
+pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
+    worktree::list_worktrees(project_root).map_err(Error::Io)
+}
+
+/// Remove the git worktree for a story by ID.
+///
+/// Loads the project config to honour teardown commands. Returns an error if
+/// the worktree directory does not exist.
+pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
+    let config = load_config(project_root)?;
+    worktree::remove_worktree_by_story_id(project_root, story_id, &config)
+        .await
+        .map_err(Error::Worktree)
+}
+
+/// Read test results persisted in a story's markdown file.
+///
+/// Returns `None` when the story has no test results section.
+pub fn read_test_results_from_file(
+    project_root: &Path,
+    story_id: &str,
+) -> Option<crate::workflow::StoryTestResults> {
+    crate::http::workflow::read_test_results_from_story_file(project_root, story_id)
+}
+
+/// Read a work item file from a pipeline stage directory.
+///
+/// Returns `Ok(Some(content))` when found, `Ok(None)` when absent.
+pub fn read_work_item_from_stage(
+    work_dir: &std::path::Path,
+    stage_dir: &str,
+    filename: &str,
+) -> Result<Option<String>, Error> {
+    let file_path = work_dir.join(stage_dir).join(filename);
+    if file_path.exists() {
+        let content = std::fs::read_to_string(&file_path)
+            .map_err(|e| Error::Io(format!("Failed to read work item: {e}")))?;
+        Ok(Some(content))
+    } else {
+        Ok(None)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use tempfile::TempDir;
+
+    fn make_work_dirs(tmp: &TempDir) {
+        for stage in &["5_done", "6_archived"] {
+            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
+        }
+    }
+
+    // ── is_archived ───────────────────────────────────────────────────────────
+
+    #[test]
+    fn is_archived_false_when_file_absent() {
+        let tmp = TempDir::new().unwrap();
+        make_work_dirs(&tmp);
+        assert!(!is_archived(tmp.path(), "42_story_foo"));
+    }
+
+    #[test]
+    fn is_archived_true_when_in_5_done() {
+        let tmp = TempDir::new().unwrap();
+        make_work_dirs(&tmp);
+        std::fs::write(
+            tmp.path().join(".huskies/work/5_done/42_story_foo.md"),
+            "---\nname: test\n---\n",
+        )
+        .unwrap();
+        assert!(is_archived(tmp.path(), "42_story_foo"));
+    }
+
+    #[test]
+    fn is_archived_true_when_in_6_archived() {
+        let tmp = TempDir::new().unwrap();
+        make_work_dirs(&tmp);
+        std::fs::write(
+            tmp.path().join(".huskies/work/6_archived/42_story_foo.md"),
+            "---\nname: test\n---\n",
+        )
+        .unwrap();
+        assert!(is_archived(tmp.path(), "42_story_foo"));
+    }
+
+    // ── read_agent_log ────────────────────────────────────────────────────────
+
+    #[test]
+    fn read_agent_log_returns_empty_when_no_log() {
+        let tmp = TempDir::new().unwrap();
+        let entries = read_agent_log(tmp.path(), "42_story_foo", "coder-1").unwrap();
+        assert!(entries.is_empty());
+    }
+
+    // ── read_token_records ────────────────────────────────────────────────────
+
+    #[test]
+    fn read_token_records_returns_empty_when_no_file() {
+        let tmp = TempDir::new().unwrap();
+        let records = read_token_records(tmp.path()).unwrap();
+        assert!(records.is_empty());
+    }
+
+    // ── load_config ───────────────────────────────────────────────────────────
+
+    #[test]
+    fn load_config_returns_default_when_no_file() {
+        let tmp = TempDir::new().unwrap();
+        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
+        let config = load_config(tmp.path()).unwrap();
+        // Default config has one "default" agent
+        assert_eq!(config.agent.len(), 1);
+        assert_eq!(config.agent[0].name, "default");
+    }
+
+    // ── list_worktrees ────────────────────────────────────────────────────────
+
+    #[test]
+    fn list_worktrees_empty_when_no_dir() {
+        let tmp = TempDir::new().unwrap();
+        let entries = list_worktrees(tmp.path()).unwrap();
+        assert!(entries.is_empty());
+    }
+
+    #[test]
+    fn list_worktrees_returns_subdirs() {
+        let tmp = TempDir::new().unwrap();
+        let wt_dir = tmp.path().join(".huskies").join("worktrees");
+        std::fs::create_dir_all(wt_dir.join("42_story_foo")).unwrap();
+        std::fs::create_dir_all(wt_dir.join("43_story_bar")).unwrap();
+        let mut entries = list_worktrees(tmp.path()).unwrap();
+        entries.sort_by(|a, b| a.story_id.cmp(&b.story_id));
+        assert_eq!(entries.len(), 2);
+        assert_eq!(entries[0].story_id, "42_story_foo");
+        assert_eq!(entries[1].story_id, "43_story_bar");
+    }
+}
diff --git a/server/src/service/agents/mod.rs b/server/src/service/agents/mod.rs
new file mode 100644
index 00000000..79f4c833
--- /dev/null
+++ b/server/src/service/agents/mod.rs
@@ -0,0 +1,476 @@
+//! Agent service — public API for the agent domain.
+//!
+//! This module orchestrates calls to `io.rs` (side effects) and the pure
+//! topic modules (`selection`, `token`) to implement the full agent service
+//! surface. HTTP handlers call these functions instead of reaching directly
+//! into `AgentPool` or the filesystem.
+//!
+//! Conventions: `docs/architecture/service-modules.md`
+mod io;
+pub mod selection;
+pub mod token;
+
+use crate::agents::AgentInfo;
+use crate::agents::AgentPool;
+use crate::agents::token_usage::TokenUsageRecord;
+use crate::config::ProjectConfig;
+use crate::workflow::StoryTestResults;
+use crate::worktree::{WorktreeInfo, WorktreeListEntry};
+use std::path::Path;
+
+pub use io::is_archived;
+pub use token::TokenCostSummary;
+
+// ── Error type ────────────────────────────────────────────────────────────────
+
+/// Typed errors returned by `service::agents` functions.
+///
+/// HTTP handlers map these to specific status codes — see the conventions doc
+/// for the full mapping table.
+#[derive(Debug)]
+pub enum Error {
+    /// No agent with the given name/story exists in the pool.
+    AgentNotFound(String),
+    /// No work item found for the requested story ID.
+    WorkItemNotFound(String),
+    /// A worktree operation failed.
+    Worktree(String),
+    /// Project configuration could not be loaded.
+    Config(String),
+    /// A filesystem or I/O operation failed.
+    Io(String),
+}
+
+impl std::fmt::Display for Error {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            Self::AgentNotFound(msg) => write!(f, "Agent not found: {msg}"),
+            Self::WorkItemNotFound(msg) => write!(f, "Work item not found: {msg}"),
+            Self::Worktree(msg) => write!(f, "Worktree error: {msg}"),
+            Self::Config(msg) => write!(f, "Config error: {msg}"),
+            Self::Io(msg) => write!(f, "I/O error: {msg}"),
+        }
+    }
+}
+
+// ── Shared service types ──────────────────────────────────────────────────────
+
+/// Content and metadata for a work-item (story) file.
+#[derive(Debug, Clone)]
+pub struct WorkItemContent {
+    pub content: String,
+    pub stage: String,
+    pub name: Option<String>,
+    pub agent: Option<String>,
+}
+
+/// A single entry in the project's configured agent roster.
+#[derive(Debug, Clone)]
+pub struct AgentConfigEntry {
+    pub name: String,
+    pub role: String,
+    pub stage: Option<String>,
+    pub model: Option<String>,
+    pub allowed_tools: Option<Vec<String>>,
+    pub max_turns: Option<u32>,
+    pub max_budget_usd: Option<f64>,
+}
+
+// ── Public API ────────────────────────────────────────────────────────────────
+
+/// Start an agent for a story.
+///
+/// Takes only what it needs: the pool (for spawning) and the project root
+/// (for config and worktree creation). Does not touch `AppContext`.
+pub async fn start_agent(
+    pool: &AgentPool,
+    project_root: &Path,
+    story_id: &str,
+    agent_name: Option<&str>,
+    resume_context: Option<&str>,
+    session_id_to_resume: Option<String>,
+) -> Result<AgentInfo, Error> {
+    pool.start_agent(
+        project_root,
+        story_id,
+        agent_name,
+        resume_context,
+        session_id_to_resume,
+    )
+    .await
+    .map_err(Error::AgentNotFound)
+}
+
+/// Stop a running agent.
+pub async fn stop_agent(
+    pool: &AgentPool,
+    project_root: &Path,
+    story_id: &str,
+    agent_name: &str,
+) -> Result<(), Error> {
+    pool.stop_agent(project_root, story_id, agent_name)
+        .await
+        .map_err(Error::AgentNotFound)
+}
+
+/// List all agents, optionally filtering out those belonging to archived stories.
+///
+/// When `project_root` is `None` the archive filter is skipped and all agents
+/// are returned (a safe default when the server is not yet fully configured).
+pub fn list_agents(pool: &AgentPool, project_root: Option<&Path>) -> Result<Vec<AgentInfo>, Error> {
+    let agents = pool.list_agents().map_err(Error::Io)?;
+    match project_root {
+        Some(root) => Ok(selection::filter_non_archived(agents, |id| {
+            io::is_archived(root, id)
+        })),
+        None => Ok(agents),
+    }
+}
+
+/// Create a git worktree for a story.
+pub async fn create_worktree(
+    pool: &AgentPool,
+    project_root: &Path,
+    story_id: &str,
+) -> Result<WorktreeInfo, Error> {
+    pool.create_worktree(project_root, story_id)
+        .await
+        .map_err(Error::Worktree)
+}
+
+/// List all worktrees under `.huskies/worktrees/`.
+pub fn list_worktrees(project_root: &Path) -> Result<Vec<WorktreeListEntry>, Error> {
+    io::list_worktrees(project_root)
+}
+
+/// Remove the git worktree for a story.
+pub async fn remove_worktree(project_root: &Path, story_id: &str) -> Result<(), Error> {
+    io::remove_worktree(project_root, story_id).await
+}
+
+/// Get the configured agent roster from `project.toml`.
+pub fn get_agent_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
+    let config = io::load_config(project_root)?;
+    Ok(config_to_entries(&config))
+}
+
+/// Reload and return the project's agent configuration.
+///
+/// Semantically identical to `get_agent_config`; provided as a distinct
+/// function so callers can express intent (UI "Reload" button).
+pub fn reload_config(project_root: &Path) -> Result<Vec<AgentConfigEntry>, Error> {
+    get_agent_config(project_root)
+}
+
+/// Get the concatenated output text for an agent's most recent session.
+///
+/// Returns an empty string when no log file exists yet.
+pub fn get_agent_output(
+    project_root: &Path,
+    story_id: &str,
+    agent_name: &str,
+) -> Result<String, Error> {
+    let entries = io::read_agent_log(project_root, story_id, agent_name)?;
+    Ok(selection::collect_output_text(&entries))
+}
+
+/// Get the markdown content and metadata for a work item.
+///
+/// Searches all pipeline stage directories, falling back to the CRDT content
+/// store when no file is present on disk. Returns `Error::WorkItemNotFound`
+/// when neither source has the item.
+pub fn get_work_item_content(
+    project_root: &Path,
+    story_id: &str,
+) -> Result<WorkItemContent, Error> {
+    let stages = [
+        ("1_backlog", "backlog"),
+        ("2_current", "current"),
+        ("3_qa", "qa"),
+        ("4_merge", "merge"),
+        ("5_done", "done"),
+        ("6_archived", "archived"),
+    ];
+
+    let work_dir = project_root.join(".huskies").join("work");
+    let filename = format!("{story_id}.md");
+
+    for (stage_dir, stage_name) in &stages {
+        if let Some(content) = io::read_work_item_from_stage(&work_dir, stage_dir, &filename)?
+        {
+            let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
+            return Ok(WorkItemContent {
+                content,
+                stage: stage_name.to_string(),
+                name: metadata.as_ref().and_then(|m| m.name.clone()),
+                agent: metadata.and_then(|m| m.agent),
+            });
+        }
+    }
+
+    // CRDT-only fallback
+    if let Some(content) = crate::db::read_content(story_id) {
+        let item = crate::pipeline_state::read_typed(story_id)
+            .map_err(|e| Error::Io(format!("Pipeline read error: {e}")))?;
+        let stage = item
+            .as_ref()
+            .map(|i| match &i.stage {
+                crate::pipeline_state::Stage::Backlog => "backlog",
+                crate::pipeline_state::Stage::Coding => "current",
+                crate::pipeline_state::Stage::Qa => "qa",
+                crate::pipeline_state::Stage::Merge { .. } => "merge",
+                crate::pipeline_state::Stage::Done { .. } => "done",
+                crate::pipeline_state::Stage::Archived { .. } => "archived",
+            })
+            .unwrap_or("unknown")
+            .to_string();
+        let metadata = crate::io::story_metadata::parse_front_matter(&content).ok();
+        return Ok(WorkItemContent {
+            content,
+            stage,
+            name: metadata.as_ref().and_then(|m| m.name.clone()),
+            agent: metadata.and_then(|m| m.agent),
+        });
+    }
+
+    Err(Error::WorkItemNotFound(format!(
+        "Work item not found: {story_id}"
+    )))
+}
+
+/// Get test results for a work item.
+///
+/// Checks in-memory workflow state first (fast path), then falls back to
+/// results persisted in the story file.
+pub fn get_test_results(
+    project_root: &Path,
+    story_id: &str,
+    workflow: &crate::workflow::WorkflowState,
+) -> Option<StoryTestResults> {
+    if let Some(results) = workflow.results.get(story_id) {
+        return Some(results.clone());
+    }
+    io::read_test_results_from_file(project_root, story_id)
+}
+
+/// Get the aggregated token cost for a specific story.
+pub fn get_work_item_token_cost(
+    project_root: &Path,
+    story_id: &str,
+) -> Result<TokenCostSummary, Error> {
+    let records = io::read_token_records(project_root)?;
+    Ok(token::aggregate_for_story(&records, story_id))
+}
+
+/// Get all token usage records across all stories.
+pub fn get_all_token_usage(project_root: &Path) -> Result<Vec<TokenUsageRecord>, Error> {
+    io::read_token_records(project_root)
+}
+
+// ── Helpers ───────────────────────────────────────────────────────────────────
+
+fn config_to_entries(config: &ProjectConfig) -> Vec<AgentConfigEntry> {
+    config
+        .agent
+        .iter()
+        .map(|a| AgentConfigEntry {
+            name: a.name.clone(),
+            role: a.role.clone(),
+            stage: a.stage.clone(),
+            model: a.model.clone(),
+            allowed_tools: a.allowed_tools.clone(),
+            max_turns: a.max_turns,
+            max_budget_usd: a.max_budget_usd,
+        })
+        .collect()
+}
+
+// ── Integration tests ─────────────────────────────────────────────────────────
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::agents::AgentStatus;
+    use std::sync::Arc;
+    use tempfile::TempDir;
+
+    fn make_pool(tmp: &TempDir) -> Arc<AgentPool> {
+        let (tx, _) = tokio::sync::broadcast::channel(64);
+        let pool = AgentPool::new(3001, tx);
+        let state = crate::state::SessionState::default();
+        *state.project_root.lock().unwrap() = Some(tmp.path().to_path_buf());
+        Arc::new(pool)
+    }
+
+    fn make_work_dirs(tmp: &TempDir) {
+        for stage in &["5_done", "6_archived"] {
+            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
+        }
+    }
+
+    fn make_stage_dirs(tmp: &TempDir) {
+        for stage in &[
+            "1_backlog",
+            "2_current",
+            "3_qa",
+            "4_merge",
+            "5_done",
+            "6_archived",
+        ] {
+            std::fs::create_dir_all(tmp.path().join(".huskies").join("work").join(stage)).unwrap();
+        }
+    }
+
+    fn make_project_toml(tmp: &TempDir, content: &str) {
+        let sk_dir = tmp.path().join(".huskies");
+        std::fs::create_dir_all(&sk_dir).unwrap();
+        std::fs::write(sk_dir.join("project.toml"), content).unwrap();
+    }
+
+    // ── list_agents ───────────────────────────────────────────────────────────
+
+    #[tokio::test]
+    async fn list_agents_excludes_archived_stories() {
+        let tmp = TempDir::new().unwrap();
+        make_work_dirs(&tmp);
+        std::fs::write(
+            tmp.path()
+                .join(".huskies/work/6_archived/79_story_archived.md"),
+            "---\nname: archived\n---\n",
+        )
+        .unwrap();
+
+        let pool = make_pool(&tmp);
+        pool.inject_test_agent("79_story_archived", "coder-1", AgentStatus::Completed);
+        pool.inject_test_agent("80_story_active", "coder-1", AgentStatus::Running);
+
+        let agents = list_agents(&pool, Some(tmp.path())).unwrap();
+        assert!(!agents.iter().any(|a| a.story_id == "79_story_archived"));
+        assert!(agents.iter().any(|a| a.story_id == "80_story_active"));
+    }
+
+    #[tokio::test]
+    async fn list_agents_includes_all_when_no_project_root() {
+        let tmp = TempDir::new().unwrap();
+        let pool = make_pool(&tmp);
+        pool.inject_test_agent("42_story_whatever", "coder-1", AgentStatus::Completed);
+
+        let agents = list_agents(&pool, None).unwrap();
+        assert!(agents.iter().any(|a| a.story_id == "42_story_whatever"));
+    }
+
+    // ── get_agent_config ──────────────────────────────────────────────────────
+
+    #[test]
+    fn get_agent_config_returns_default_when_no_toml() {
+        let tmp = TempDir::new().unwrap();
+        std::fs::create_dir_all(tmp.path().join(".huskies")).unwrap();
+        let entries = get_agent_config(tmp.path()).unwrap();
+        assert_eq!(entries.len(), 1);
+        assert_eq!(entries[0].name, "default");
+    }
+
+    #[test]
+    fn get_agent_config_returns_configured_agents() {
+        let tmp = TempDir::new().unwrap();
+        make_project_toml(
+            &tmp,
+            r#"
+[[agent]]
+name = "coder-1"
+role = "Full-stack engineer"
+model = "sonnet"
+max_turns = 30
+max_budget_usd = 5.0
+"#,
+        );
+        let entries = get_agent_config(tmp.path()).unwrap();
+        assert_eq!(entries.len(), 1);
+        assert_eq!(entries[0].name, "coder-1");
+        assert_eq!(entries[0].model, Some("sonnet".to_string()));
+        assert_eq!(entries[0].max_turns, Some(30));
+    }
+
+    // ── get_agent_output ──────────────────────────────────────────────────────
+
+    #[test]
+    fn get_agent_output_returns_empty_when_no_log() {
+        let tmp = TempDir::new().unwrap();
+        let output = get_agent_output(tmp.path(), "42_story_foo", "coder-1").unwrap();
+        assert_eq!(output, "");
+    }
+
+    // ── get_work_item_content ─────────────────────────────────────────────────
+
+    #[test]
+    fn get_work_item_content_reads_from_backlog() {
+        let tmp = TempDir::new().unwrap();
+        make_stage_dirs(&tmp);
+        std::fs::write(
+            tmp.path().join(".huskies/work/1_backlog/42_story_foo.md"),
+            "---\nname: \"Foo Story\"\n---\n\nSome content.",
+        )
+        .unwrap();
+        let item = get_work_item_content(tmp.path(), "42_story_foo").unwrap();
+        assert!(item.content.contains("Some content."));
+        assert_eq!(item.stage, "backlog");
+        assert_eq!(item.name, Some("Foo Story".to_string()));
+    }
+
+    #[test]
+    fn get_work_item_content_returns_not_found_for_absent_story() {
+        let tmp = TempDir::new().unwrap();
+        make_stage_dirs(&tmp);
+        let result = get_work_item_content(tmp.path(), "99_story_nonexistent");
+        assert!(matches!(result, Err(Error::WorkItemNotFound(_))));
+    }
+
+    // ── get_work_item_token_cost ──────────────────────────────────────────────
+
+    #[test]
+    fn get_work_item_token_cost_returns_zero_when_no_records() {
+        let tmp = TempDir::new().unwrap();
+        let summary = get_work_item_token_cost(tmp.path(), "42_story_foo").unwrap();
+        assert_eq!(summary.total_cost_usd, 0.0);
+        assert!(summary.agents.is_empty());
+    }
+
+    // ── get_all_token_usage ───────────────────────────────────────────────────
+
+    #[test]
+    fn get_all_token_usage_returns_empty_when_no_file() {
+        let tmp = TempDir::new().unwrap();
+        let records = get_all_token_usage(tmp.path()).unwrap();
+        assert!(records.is_empty());
+    }
+
+    // ── get_test_results ──────────────────────────────────────────────────────
+
+    #[test]
+    fn get_test_results_returns_none_when_no_results() {
+        let tmp = TempDir::new().unwrap();
+        let workflow = crate::workflow::WorkflowState::default();
+        let result = get_test_results(tmp.path(), "42_story_foo", &workflow);
+        assert!(result.is_none());
+    }
+
+    #[test]
+    fn get_test_results_returns_in_memory_results_first() {
+        let tmp = TempDir::new().unwrap();
+        let mut workflow = crate::workflow::WorkflowState::default();
+        workflow
+            .record_test_results_validated(
+                "42_story_foo".to_string(),
+                vec![crate::workflow::TestCaseResult {
+                    name: "test1".to_string(),
+                    status: crate::workflow::TestStatus::Pass,
+                    details: None,
+                }],
+                vec![],
+            )
+            .unwrap();
+        let result =
+            get_test_results(tmp.path(), "42_story_foo", &workflow).expect("should have results");
+        assert_eq!(result.unit.len(), 1);
+        assert_eq!(result.unit[0].name, "test1");
+    }
+}
diff --git a/server/src/service/agents/selection.rs b/server/src/service/agents/selection.rs
new file mode 100644
index 00000000..fb7bafa4
--- /dev/null
+++ b/server/src/service/agents/selection.rs
@@ -0,0 +1,171 @@
+//! Pure agent selection and filtering logic — no I/O, no side effects.
+//!
+//! All functions in this module are pure: they take data, transform it, and
+//! return a result without touching the filesystem, network, or any mutable
+//! global state. This makes them fast to test without tempdirs or async runtimes.
+use crate::agent_log::LogEntry;
+use crate::agents::AgentInfo;
+
+/// Filter a list of agents, removing any whose story is archived.
+///
+/// `is_archived` is a predicate injected by the caller — typically a closure
+/// over the project root that calls `io::is_archived`. This keeps the function
+/// pure: it never touches the filesystem itself.
+pub fn filter_non_archived<F>(agents: Vec<AgentInfo>, is_archived: F) -> Vec<AgentInfo>
+where
+    F: Fn(&str) -> bool,
+{
+    agents
+        .into_iter()
+        .filter(|info| !is_archived(&info.story_id))
+        .collect()
+}
+
+/// Concatenate the text of all `output` events from an agent log.
+///
+/// Non-output events (status, done, error, agent_json, thinking) are silently
+/// skipped. Returns an empty string when `entries` is empty or contains no
+/// output events.
+pub fn collect_output_text(entries: &[LogEntry]) -> String {
+    entries
+        .iter()
+        .filter(|e| e.event.get("type").and_then(|t| t.as_str()) == Some("output"))
+        .filter_map(|e| {
+            e.event
+                .get("text")
+                .and_then(|t| t.as_str())
+                .map(str::to_owned)
+        })
+        .collect()
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::agents::AgentStatus;
+
+    fn make_agent(story_id: &str) -> AgentInfo {
+        AgentInfo {
+            story_id: story_id.to_string(),
+            agent_name: "coder-1".to_string(),
+            status: AgentStatus::Running,
+            session_id: None,
+            worktree_path: None,
+            base_branch: None,
+            completion: None,
+            log_session_id: None,
+            throttled: false,
+        }
+    }
+
+    fn make_log_entry(event_type: &str, text: Option<&str>) -> LogEntry {
+        let mut obj = serde_json::Map::new();
+        obj.insert(
+            "type".to_string(),
+            serde_json::Value::String(event_type.to_string()),
+        );
+        if let Some(t) = text {
+            obj.insert("text".to_string(), serde_json::Value::String(t.to_string()));
+        }
+        LogEntry {
+            timestamp: "2024-01-01T00:00:00Z".to_string(),
+            event: serde_json::Value::Object(obj),
+        }
+    }
+
+    // ── filter_non_archived ───────────────────────────────────────────────────
+
+    #[test]
+    fn filter_keeps_non_archived_agents() {
+        let agents = vec![make_agent("10_active"), make_agent("11_active")];
+        let result = filter_non_archived(agents, |_| false);
+        assert_eq!(result.len(), 2);
+    }
+
+    #[test]
+    fn filter_removes_archived_agents() {
+        let agents = vec![make_agent("10_archived"), make_agent("11_active")];
+        let result = filter_non_archived(agents, |id| id == "10_archived");
+        assert_eq!(result.len(), 1);
+        assert_eq!(result[0].story_id, "11_active");
+    }
+
+    #[test]
+    fn filter_removes_all_when_all_archived() {
+        let agents = vec![make_agent("10_a"), make_agent("11_b")];
+        let result = filter_non_archived(agents, |_| true);
+        assert!(result.is_empty());
+    }
+
+    #[test]
+    fn filter_returns_empty_for_empty_input() {
+        let result = filter_non_archived(vec![], |_| false);
+        assert!(result.is_empty());
+    }
+
+    #[test]
+    fn filter_preserves_order() {
+        let agents = vec![
+            make_agent("1_a"),
+            make_agent("2_b"),
+            make_agent("3_c"),
+            make_agent("4_d"),
+        ];
+        let result = filter_non_archived(agents, |id| id == "2_b");
+        assert_eq!(result.len(), 3);
+        assert_eq!(result[0].story_id, "1_a");
+        assert_eq!(result[1].story_id, "3_c");
+        assert_eq!(result[2].story_id, "4_d");
+    }
+
+    // ── collect_output_text ───────────────────────────────────────────────────
+
+    #[test]
+    fn collect_output_text_empty_entries() {
+        let result = collect_output_text(&[]);
+        assert_eq!(result, "");
+    }
+
+    #[test]
+    fn collect_output_text_skips_non_output_events() {
+        let entries = vec![
+            make_log_entry("status", Some("running")),
+            make_log_entry("done", None),
+        ];
+        let result = collect_output_text(&entries);
+        assert_eq!(result, "");
+    }
+
+    #[test]
+    fn collect_output_text_concatenates_output_events() {
+        let entries = vec![
+            make_log_entry("output", Some("Hello ")),
+            make_log_entry("output", Some("world\n")),
+        ];
+        let result = collect_output_text(&entries);
+        assert_eq!(result, "Hello world\n");
+    }
+
+    #[test]
+    fn collect_output_text_skips_output_without_text_field() {
+        let entry = LogEntry {
+            timestamp: "2024-01-01T00:00:00Z".to_string(),
+            event: serde_json::json!({"type": "output"}),
+        };
+        let result = collect_output_text(&[entry]);
+        assert_eq!(result, "");
+    }
+
+    #[test]
+    fn collect_output_text_mixed_event_types() {
+        let entries = vec![
+            make_log_entry("status", Some("running")),
+            make_log_entry("output", Some("line1\n")),
+            make_log_entry("agent_json", None),
+            make_log_entry("output", Some("line2\n")),
+            make_log_entry("done", None),
+        ];
+        let result = collect_output_text(&entries);
+        assert_eq!(result, "line1\nline2\n");
+    }
+}
diff --git a/server/src/service/agents/token.rs b/server/src/service/agents/token.rs
new file mode 100644
index 00000000..8e3b2328
--- /dev/null
+++ b/server/src/service/agents/token.rs
@@ -0,0 +1,160 @@
+//! Pure token usage aggregation — no I/O, no side effects.
+//!
+//! Functions here take slices of `TokenUsageRecord` (already loaded by `io.rs`)
+//! and compute summaries. Tests cover every branch without touching the filesystem.
+use crate::agents::token_usage::TokenUsageRecord;
+use std::collections::HashMap;
+
+/// Per-agent cost breakdown entry.
+#[derive(Debug, Clone, PartialEq)]
+pub struct AgentTokenCost {
+    pub agent_name: String,
+    pub model: Option<String>,
+    pub input_tokens: u64,
+    pub output_tokens: u64,
+    pub cache_creation_input_tokens: u64,
+    pub cache_read_input_tokens: u64,
+    pub total_cost_usd: f64,
+}
+
+/// Aggregated token cost for a story.
+#[derive(Debug, Clone, PartialEq)]
+pub struct TokenCostSummary {
+    pub total_cost_usd: f64,
+    pub agents: Vec<AgentTokenCost>,
+}
+
+/// Aggregate token usage records for a single story.
+///
+/// Records for other stories are ignored. The returned `agents` list is sorted
+/// alphabetically by `agent_name` for deterministic output. Returns a zero-cost
+/// summary when no records match the given `story_id`.
+pub fn aggregate_for_story(records: &[TokenUsageRecord], story_id: &str) -> TokenCostSummary {
+    let mut agent_map: HashMap<String, AgentTokenCost> = HashMap::new();
+    let mut total_cost_usd = 0.0_f64;
+
+    for record in records.iter().filter(|r| r.story_id == story_id) {
+        total_cost_usd += record.usage.total_cost_usd;
+        let entry = agent_map
+            .entry(record.agent_name.clone())
+            .or_insert_with(|| AgentTokenCost {
+                agent_name: record.agent_name.clone(),
+                model: record.model.clone(),
+                input_tokens: 0,
+                output_tokens: 0,
+                cache_creation_input_tokens: 0,
+                cache_read_input_tokens: 0,
+                total_cost_usd: 0.0,
+            });
+        entry.input_tokens += record.usage.input_tokens;
+        entry.output_tokens += record.usage.output_tokens;
+        entry.cache_creation_input_tokens += record.usage.cache_creation_input_tokens;
+        entry.cache_read_input_tokens += record.usage.cache_read_input_tokens;
+        entry.total_cost_usd += record.usage.total_cost_usd;
+    }
+
+    let mut agents: Vec<AgentTokenCost> = agent_map.into_values().collect();
+    agents.sort_by(|a, b| a.agent_name.cmp(&b.agent_name));
+
+    TokenCostSummary {
+        total_cost_usd,
+        agents,
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::agents::TokenUsage;
+
+    fn make_record(story_id: &str, agent: &str, cost: f64) -> TokenUsageRecord {
+        TokenUsageRecord {
+            story_id: story_id.to_string(),
+            agent_name: agent.to_string(),
+            timestamp: "2024-01-01T00:00:00Z".to_string(),
+            model: None,
+            usage: TokenUsage {
+                input_tokens: 100,
+                output_tokens: 50,
+                cache_creation_input_tokens: 10,
+                cache_read_input_tokens: 20,
+                total_cost_usd: cost,
+            },
+        }
+    }
+
+    #[test]
+    fn aggregate_returns_zero_when_no_records() {
+        let summary = aggregate_for_story(&[], "42_story_foo");
+        assert_eq!(summary.total_cost_usd, 0.0);
+        assert!(summary.agents.is_empty());
+    }
+
+    #[test]
+    fn aggregate_filters_to_story_id() {
+        let records = vec![
+            make_record("42_story_foo", "coder-1", 1.0),
+            make_record("99_story_other", "coder-1", 5.0),
+        ];
+        let summary = aggregate_for_story(&records,
+            "42_story_foo");
+        assert!((summary.total_cost_usd - 1.0).abs() < f64::EPSILON);
+        assert_eq!(summary.agents.len(), 1);
+    }
+
+    #[test]
+    fn aggregate_sums_tokens_per_agent() {
+        let records = vec![
+            make_record("42_story_foo", "coder-1", 1.0),
+            make_record("42_story_foo", "coder-1", 2.0),
+        ];
+        let summary = aggregate_for_story(&records, "42_story_foo");
+        assert!((summary.total_cost_usd - 3.0).abs() < f64::EPSILON);
+        assert_eq!(summary.agents.len(), 1);
+        assert_eq!(summary.agents[0].input_tokens, 200);
+        assert_eq!(summary.agents[0].output_tokens, 100);
+        assert!((summary.agents[0].total_cost_usd - 3.0).abs() < f64::EPSILON);
+    }
+
+    #[test]
+    fn aggregate_splits_by_agent() {
+        let records = vec![
+            make_record("42_story_foo", "coder-1", 1.0),
+            make_record("42_story_foo", "qa", 0.5),
+        ];
+        let summary = aggregate_for_story(&records, "42_story_foo");
+        assert!((summary.total_cost_usd - 1.5).abs() < f64::EPSILON);
+        assert_eq!(summary.agents.len(), 2);
+        // sorted alphabetically
+        assert_eq!(summary.agents[0].agent_name, "coder-1");
+        assert_eq!(summary.agents[1].agent_name, "qa");
+    }
+
+    #[test]
+    fn aggregate_sorts_agents_alphabetically() {
+        let records = vec![
+            make_record("42_story_foo", "z-agent", 1.0),
+            make_record("42_story_foo", "a-agent", 1.0),
+            make_record("42_story_foo", "m-agent", 1.0),
+        ];
+        let summary = aggregate_for_story(&records, "42_story_foo");
+        assert_eq!(summary.agents[0].agent_name, "a-agent");
+        assert_eq!(summary.agents[1].agent_name, "m-agent");
+        assert_eq!(summary.agents[2].agent_name, "z-agent");
+    }
+
+    #[test]
+    fn aggregate_returns_zero_when_no_matching_story() {
+        let records = vec![make_record("99_other", "coder-1", 5.0)];
+        let summary = aggregate_for_story(&records, "42_story_foo");
+        assert_eq!(summary.total_cost_usd, 0.0);
+        assert!(summary.agents.is_empty());
+    }
+
+    #[test]
+    fn aggregate_preserves_model_from_first_record() {
+        let mut r = make_record("42_story_foo", "coder-1", 1.0);
+        r.model =
+            Some("claude-sonnet".to_string());
+        let summary = aggregate_for_story(&[r], "42_story_foo");
+        assert_eq!(summary.agents[0].model, Some("claude-sonnet".to_string()));
+    }
+}
diff --git a/server/src/service/mod.rs b/server/src/service/mod.rs
new file mode 100644
index 00000000..003b2b3f
--- /dev/null
+++ b/server/src/service/mod.rs
@@ -0,0 +1,8 @@
+//! Service layer — domain logic extracted from HTTP handlers.
+//!
+//! Each sub-module follows the conventions documented in
+//! `docs/architecture/service-modules.md`:
+//! - `mod.rs` orchestrates and owns the typed `Error` type
+//! - `io.rs` is the only file that performs side effects
+//! - Topic-named pure files contain branching logic with no I/O
+pub mod agents;
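The doc comment on `Error` in `service/agents/mod.rs` says that HTTP handlers map each variant to a specific status code, with the full table in the conventions doc. A minimal, standalone sketch of what such a handler-layer mapping could look like; the enum here is a simplified copy for illustration, and the exact status choices (404/409/500) are assumptions based on the "typed errors for 400/404/409/500" note in `future-extractions.md`, not the project's actual table:

```rust
// Simplified, standalone copy of the service-layer error for illustration only.
#[derive(Debug)]
enum ServiceError {
    AgentNotFound(String),
    WorkItemNotFound(String),
    Worktree(String),
    Config(String),
    Io(String),
}

// Hypothetical handler-layer mapping from domain errors to HTTP status codes.
// The real project keeps this mapping at the HTTP layer so the service layer
// stays framework-agnostic.
fn status_code(err: &ServiceError) -> u16 {
    match err {
        // "Not found" variants map to 404.
        ServiceError::AgentNotFound(_) | ServiceError::WorkItemNotFound(_) => 404,
        // Worktree failures are treated as conflicts here (assumption).
        ServiceError::Worktree(_) => 409,
        // Config and I/O failures are internal server errors.
        ServiceError::Config(_) | ServiceError::Io(_) => 500,
    }
}

fn main() {
    assert_eq!(status_code(&ServiceError::AgentNotFound("coder-1".into())), 404);
    assert_eq!(status_code(&ServiceError::WorkItemNotFound("42".into())), 404);
    assert_eq!(status_code(&ServiceError::Worktree("exists".into())), 409);
    assert_eq!(status_code(&ServiceError::Config("bad toml".into())), 500);
    assert_eq!(status_code(&ServiceError::Io("disk full".into())), 500);
}
```

Keeping the mapping in one small function per module is what lets the pure logic and I/O files stay free of HTTP concerns.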