From 785f4900ab02e7246c47fc3f64e54a022766358a Mon Sep 17 00:00:00 2001
From: Dave
Date: Tue, 24 Feb 2026 12:02:01 +0000
Subject: [PATCH] story-kit: create 132_story_fix_toctou_race_in_agent_check_and_insert

---
 ...132_story_fix_toctou_race_in_agent_check_and_insert.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/.story_kit/work/1_upcoming/132_story_fix_toctou_race_in_agent_check_and_insert.md b/.story_kit/work/1_upcoming/132_story_fix_toctou_race_in_agent_check_and_insert.md
index 54fa1a3..95d2a54 100644
--- a/.story_kit/work/1_upcoming/132_story_fix_toctou_race_in_agent_check_and_insert.md
+++ b/.story_kit/work/1_upcoming/132_story_fix_toctou_race_in_agent_check_and_insert.md
@@ -10,14 +10,14 @@ As a user running multiple agents, I want the agent pool to correctly enforce si
 
 ## Acceptance Criteria
 
-- [ ] The lock in start_agent (agents.rs ~lines 262-324) is held continuously from the availability check through the HashMap insert — no lock release between check and insert
-- [ ] The lock in auto_assign_available_work (agents.rs ~lines 1196-1228) is held from find_free_agent_for_stage through the start_agent call, preventing a concurrent auto_assign from selecting the same agent
+- [ ] The lock in start_agent (server/src/agents.rs ~lines 262-324) is held continuously from the availability check through the HashMap insert — no lock release between check and insert
+- [ ] The lock in auto_assign_available_work (server/src/agents.rs ~lines 1196-1228) is held from find_free_agent_for_stage through the start_agent call, preventing a concurrent auto_assign from selecting the same agent
 - [ ] A test demonstrates that concurrent start_agent calls for the same agent name on different stories result in exactly one running agent and one rejection
 - [ ] A test demonstrates that concurrent auto_assign_available_work calls do not produce duplicate assignments
 
 ## Analysis
 
-### Race 1: start_agent check-then-insert (agents.rs)
+### Race 1: start_agent check-then-insert (server/src/agents.rs)
 
 The single-instance check at ~lines 262-296 acquires the mutex, checks for duplicate agents, then **releases the lock**. The HashMap insert happens later at ~line 324 after **re-acquiring the lock**. Between release and reacquire, a concurrent call can pass the same check:
 
@@ -32,7 +32,7 @@ The composite key at ~line 27 is `format!("{story_id}:{agent_name}")`, so `86:co
 
 **Fix**: Hold the lock from the check (~line 264) through the insert (~line 324). This means the worktree setup and process spawn (~lines 297-322) must either happen inside the lock (blocking other callers) or the entry must be inserted as `Pending` before releasing the lock, with the process spawn happening after.
 
-### Race 2: auto_assign_available_work (agents.rs)
+### Race 2: auto_assign_available_work (server/src/agents.rs)
 
 At ~lines 1196-1215, the function locks the mutex, calls `find_free_agent_for_stage` to pick an available agent name, then **releases the lock**. It then calls `start_agent` at ~line 1228, which re-acquires the lock. Two concurrent `auto_assign` calls can both select the same free agent for different stories (or the same story) in this window.
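
The "insert as `Pending` before releasing the lock" fix described in the patch can be sketched as a standalone Rust example. Everything below is hypothetical scaffolding, not the real server/src/agents.rs API: the `AgentState` enum, the `start_agent` signature, the `{story_id}:{agent_name}` key map, and the sleep standing in for worktree setup and process spawn are all simplified stand-ins. It only illustrates the technique: doing the duplicate check and the HashMap insert under one lock acquisition closes the TOCTOU window, so the second racing caller is deterministically rejected.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

#[derive(Debug)]
enum AgentState {
    Pending, // reserved under the lock, before the slow spawn runs
    Running,
}

// Hypothetical stand-in for start_agent: check and insert happen
// under a SINGLE lock acquisition, so no concurrent caller can pass
// the same duplicate check between them.
fn start_agent(
    agents: &Mutex<HashMap<String, AgentState>>,
    story_id: u32,
    agent_name: &str,
) -> Result<(), String> {
    let key = format!("{story_id}:{agent_name}");
    {
        let mut map = agents.lock().unwrap();
        // Duplicate check: is this agent name already active on any story?
        if map.keys().any(|k| k.ends_with(&format!(":{agent_name}"))) {
            return Err(format!("agent {agent_name} already running"));
        }
        // Reserve the slot BEFORE releasing the lock.
        map.insert(key.clone(), AgentState::Pending);
    } // lock released here; the slot is already claimed

    // Simulated worktree setup / process spawn: slow work done
    // outside the lock so other callers are not blocked.
    thread::sleep(Duration::from_millis(10));

    agents.lock().unwrap().insert(key, AgentState::Running);
    Ok(())
}

fn main() {
    let agents = Arc::new(Mutex::new(HashMap::new()));
    // Two stories race to start the same agent name.
    let handles: Vec<_> = [86u32, 87]
        .into_iter()
        .map(|story| {
            let agents = Arc::clone(&agents);
            thread::spawn(move || start_agent(&agents, story, "codey"))
        })
        .collect();
    let results: Vec<_> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    let ok = results.iter().filter(|r| r.is_ok()).count();
    println!("started={} rejected={}", ok, results.len() - ok);
    // prints: started=1 rejected=1
}
```

Because the check and insert are atomic under the mutex, the two racing threads always produce exactly one success and one rejection, which is the behavior the first acceptance-criterion test would assert.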