---
name: "Fix TOCTOU race in agent check-and-insert"
---
# Story 132: Fix TOCTOU race in agent check-and-insert
## User Story
As a user running multiple agents, I want the agent pool to correctly enforce single-instance-per-agent, so that two agents never run on the same story concurrently and a given agent name is never running on two stories at once.
## Acceptance Criteria
- [ ] The lock in start_agent (server/src/agents.rs ~lines 262-324) is held continuously from the availability check through the HashMap insert — no lock release between check and insert
- [ ] The lock in auto_assign_available_work (server/src/agents.rs ~lines 1196-1228) is held from find_free_agent_for_stage through the start_agent call, preventing a concurrent auto_assign from selecting the same agent
- [ ] A test demonstrates that concurrent start_agent calls for the same agent name on different stories result in exactly one running agent and one rejection
- [ ] A test demonstrates that concurrent auto_assign_available_work calls do not produce duplicate assignments
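The first of those tests could be sketched roughly as follows, against a minimal stand-in for the pool. All names here (`Pool`, the state string values) are illustrative, not the real agents.rs API; the real `start_agent` also does worktree setup and process spawning, which this sketch omits. With one continuous lock from check to insert, exactly one racing call can win:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Minimal stand-in for the agent pool (hypothetical, for illustration only).
struct Pool {
    agents: Mutex<HashMap<String, String>>, // "{story_id}:{agent_name}" -> state
}

impl Pool {
    fn start_agent(&self, story_id: u32, agent_name: &str) -> Result<(), String> {
        // One continuous lock from check to insert: the property under test.
        let mut agents = self.agents.lock().unwrap();
        if agents.keys().any(|k| k.splitn(2, ':').nth(1) == Some(agent_name)) {
            return Err(format!("{agent_name} already running"));
        }
        agents.insert(format!("{story_id}:{agent_name}"), "Running".to_string());
        Ok(())
    }
}

fn main() {
    let pool = Arc::new(Pool { agents: Mutex::new(HashMap::new()) });
    // Same agent name, different stories, racing from two threads.
    let handles = [(130u32, "coder-1"), (86u32, "coder-1")].map(|(story, name)| {
        let p = Arc::clone(&pool);
        thread::spawn(move || p.start_agent(story, name))
    });
    let results = handles.map(|h| h.join().unwrap());
    let wins = results.iter().filter(|r| r.is_ok()).count();
    assert_eq!(wins, 1, "exactly one start_agent call must win");
    assert_eq!(pool.agents.lock().unwrap().len(), 1);
    println!("one running agent, one rejection");
}
```

With the buggy unlock-between-check-and-insert pattern, the same test would intermittently see two winners, which is what makes it a useful regression test.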
## Analysis
### Race 1: start_agent check-then-insert (server/src/agents.rs)
The single-instance check at ~lines 262-296 acquires the mutex, checks for duplicate agents, then **releases the lock**. The HashMap insert happens later at ~line 324 after **re-acquiring the lock**. Between release and reacquire, a concurrent call can pass the same check:
```
Thread A: lock → check coder-1 available? YES → unlock
Thread B: lock → check coder-1 available? YES → unlock → lock → insert "86:coder-1"
Thread A: lock → insert "130:coder-1"
Result: both coder-1 entries exist, two processes spawned
```
The composite key at ~line 27 is `format!("{story_id}:{agent_name}")`, so `86:coder-1` and `130:coder-1` are different keys. The name-only check at ~lines 277-295 iterates the HashMap looking for a Running/Pending agent with the same name — but both threads read the HashMap before either has inserted, so both pass.
**Fix**: Hold the lock from the check (~line 264) through the insert (~line 324). This means the worktree setup and process spawn (~lines 297-322) must either happen inside the lock (blocking other callers) or the entry must be inserted as `Pending` before releasing the lock, with the process spawn happening after.
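The insert-as-`Pending`-before-release variant might look roughly like this. `Pool`, `reserve`, and `commit` are hypothetical names, not the real agents.rs functions; the point is that the `Pending` entry, written under the same lock acquisition as the check, acts as the reservation, so the slow worktree/spawn work can safely happen unlocked:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Clone, Copy, PartialEq, Debug)]
enum AgentState { Pending, Running }

struct Pool {
    // Keyed by "{story_id}:{agent_name}", mirroring the composite key.
    agents: Mutex<HashMap<String, AgentState>>,
}

impl Pool {
    // Check and insert under ONE lock acquisition: the Pending entry is the
    // reservation, so a concurrent caller cannot pass the same check.
    fn reserve(&self, story_id: u32, agent_name: &str) -> Result<String, String> {
        let mut agents = self.agents.lock().unwrap();
        let name_taken = agents
            .keys()
            .any(|k| k.splitn(2, ':').nth(1) == Some(agent_name));
        if name_taken {
            return Err(format!("{agent_name} is already reserved or running"));
        }
        let key = format!("{story_id}:{agent_name}");
        agents.insert(key.clone(), AgentState::Pending); // reserve before unlocking
        Ok(key)
    }

    // Called after the slow work (worktree setup, process spawn) finishes
    // outside the lock: promote to Running, or roll the reservation back.
    fn commit(&self, key: &str, spawn_succeeded: bool) {
        let mut agents = self.agents.lock().unwrap();
        if spawn_succeeded {
            agents.insert(key.to_string(), AgentState::Running);
        } else {
            agents.remove(key);
        }
    }
}

fn main() {
    let pool = Pool { agents: Mutex::new(HashMap::new()) };
    let key = pool.reserve(130, "coder-1").expect("first reservation wins");
    assert!(pool.reserve(86, "coder-1").is_err()); // name held while Pending
    pool.commit(&key, false); // spawn failed: reservation rolled back
    assert!(pool.reserve(86, "coder-1").is_ok()); // name free again
    println!("pending-entry reservation works");
}
```

The rollback in `commit` matters: without it, a failed spawn would leave a permanent `Pending` entry that blocks the agent name forever.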
### Race 2: auto_assign_available_work (server/src/agents.rs)
At ~lines 1196-1215, the function locks the mutex, calls `find_free_agent_for_stage` to pick an available agent name, then **releases the lock**. It then calls `start_agent` at ~line 1228, which re-acquires the lock. Two concurrent `auto_assign` calls can both select the same free agent for different stories (or the same story) in this window.
**Fix**: Either hold the lock across the full loop iteration, or restructure so that `start_agent` receives a reservation/guard rather than just an agent name string.
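The reservation/guard option could be sketched as an RAII type. All names below are illustrative (there is no `Reservations` type in agents.rs today); the key properties are that picking a free agent and marking it reserved happen under a single lock acquisition, and that dropping the guard releases the name automatically:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

// Agent names reserved by in-flight auto-assign iterations (hypothetical).
struct Reservations {
    reserved: Mutex<HashSet<String>>,
}

// RAII guard: dropping it releases the name, so an auto_assign iteration
// that bails out (story gone, spawn failed) cannot leak a reservation.
struct Reservation {
    name: String,
    pool: Arc<Reservations>,
}

// Pick a free agent AND mark it reserved under one lock acquisition,
// closing the window between find_free_agent_for_stage and start_agent.
fn reserve_free(pool: &Arc<Reservations>, candidates: &[&str]) -> Option<Reservation> {
    let mut reserved = pool.reserved.lock().unwrap();
    let name = candidates.iter().find(|c| !reserved.contains(**c))?.to_string();
    reserved.insert(name.clone()); // still under the same lock
    Some(Reservation { name, pool: Arc::clone(pool) })
}

impl Drop for Reservation {
    fn drop(&mut self) {
        self.pool.reserved.lock().unwrap().remove(&self.name);
    }
}

fn main() {
    let pool = Arc::new(Reservations { reserved: Mutex::new(HashSet::new()) });
    let agents = ["coder-1", "coder-2"];
    let first = reserve_free(&pool, &agents).expect("coder-1 is free");
    let second = reserve_free(&pool, &agents).expect("coder-2 is free");
    assert_eq!(first.name, "coder-1");
    assert_eq!(second.name, "coder-2");
    assert!(reserve_free(&pool, &agents).is_none()); // nothing left to assign
    drop(first); // guard releases coder-1
    assert_eq!(reserve_free(&pool, &agents).unwrap().name, "coder-1");
    println!("reservation guard prevents double assignment");
}
```

Under this design `start_agent` would accept the `Reservation` (consuming it on success) rather than a bare name string, making it impossible to start an agent without first holding its reservation.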
### Observed symptoms
- Both `coder-1` and `coder-2` showing as "running" on the same story
- `coder-1` appearing on story 86 immediately after completing on bug 130, due to pipeline advancement calling `auto_assign_available_work` concurrently with other state transitions
## Out of Scope
- TBD