feat(521): evict_item primitive + purge_story MCP tool

Adds the foundational capability to clear a story from the running
server's in-memory CRDT state without restarting the process. This is
story 521, motivated by the 2026-04-09 incident where stories 478 and
503 kept resurrecting from the in-memory CRDT after every SQLite
delete, worktree removal, and timers.json clear. The only previous
remedy was a full Docker restart.

Changes:
- server/src/crdt_state.rs: new `pub fn evict_item(story_id: &str)`.
  Looks up the item's CRDT OpId via the visible-index map, calls the
  bft-json-crdt list `delete()` primitive to construct a tombstone op,
  runs it through the existing `apply_and_persist` machinery (which
  signs, applies to the in-memory CRDT, and queues for persistence to
  crdt_ops), rebuilds the story_id → visible_index map, and drops the
  in-memory CONTENT_STORE entry. The tombstone survives a restart
  because it's persisted as a real CRDT op.
- server/src/http/mcp/story_tools.rs: new `tool_purge_story` MCP
  handler that takes a story_id and calls evict_item. Deliberately
  minimal — does NOT touch agents, worktrees, the pipeline_items shadow
  table, timers.json, or filesystem shadows. Compose with stop_agent,
  remove_worktree, etc. for a full purge. Story 514 (delete_story
  full cleanup) is the future "do it all" tool.
- server/src/http/mcp/mod.rs: registers the `purge_story` tool in the
  tools list and dispatch table.
Usage:

  mcp__huskies__purge_story story_id="<full_story_id>"
Returns a string confirming the eviction. The story will no longer
appear in get_pipeline_status, list_agents, or any other API that
reads from the in-memory CRDT view, and on the next server restart
the persisted tombstone op will keep it from being reconstructed.
This is a prerequisite for story 514 (delete_story full cleanup) and
useful for any "kill it with fire" operator need.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -500,6 +500,69 @@ pub fn read_item(story_id: &str) -> Option<PipelineItemView> {
     extract_item_view(&state.crdt.doc.items[idx])
 }
 
+/// Mark a story as deleted in the in-memory CRDT and persist a tombstone op.
+///
+/// This is the eviction primitive for story 521 — it lets external callers
+/// (e.g. the `purge_story` MCP tool, or operator scripts during incident
+/// response) clear an item from the running server's in-memory state
+/// without needing a full process restart.
+///
+/// Specifically:
+/// 1. Looks up the item's CRDT `OpId` via the visible-index map.
+/// 2. Constructs a delete op via the bft-json-crdt list `delete()` primitive.
+/// 3. Signs it with the local node's keypair and applies it to the in-memory
+///    CRDT (marking the item `is_deleted = true` so subsequent
+///    `read_all_items` / `read_item` calls skip it).
+/// 4. Persists the signed delete op to `crdt_ops` via the existing
+///    `apply_and_persist` channel — so the eviction survives a restart.
+/// 5. Rebuilds the `story_id → visible_index` map (visible indices shift
+///    when an item is marked deleted).
+/// 6. Drops the in-memory content-store entry for the story so the cached
+///    body doesn't outlive the CRDT entry.
+///
+/// Returns `Ok(())` if the item was found and a tombstone op was queued,
+/// or an `Err` if the CRDT layer isn't initialised or the story_id is
+/// unknown to the in-memory state.
+pub fn evict_item(story_id: &str) -> Result<(), String> {
+    let state_mutex = CRDT_STATE
+        .get()
+        .ok_or_else(|| "CRDT layer not initialised".to_string())?;
+    let mut state = state_mutex
+        .lock()
+        .map_err(|e| format!("CRDT lock poisoned: {e}"))?;
+
+    let idx = state
+        .index
+        .get(story_id)
+        .copied()
+        .ok_or_else(|| format!("Story '{story_id}' not found in in-memory CRDT"))?;
+
+    // Resolve the item's OpId before the closure (the closure will mutably
+    // borrow `state`, so we can't access `state.crdt.doc.items` from inside).
+    let item_id = state
+        .crdt
+        .doc
+        .items
+        .id_at(idx)
+        .ok_or_else(|| format!("Item index {idx} for '{story_id}' did not resolve to an OpId"))?;
+
+    // Write the delete op via the existing apply_and_persist machinery.
+    // This signs the op, applies it to the in-memory CRDT (marking the item
+    // is_deleted), and sends it to the persistence task.
+    apply_and_persist(&mut state, |s| s.crdt.doc.items.delete(item_id));
+
+    // Rebuild the story_id → visible_index map; the deleted item is no
+    // longer counted by the iter that rebuild_index uses.
+    state.index = rebuild_index(&state.crdt);
+
+    // Drop the content-store entry so the cached body doesn't outlive the
+    // CRDT entry. (Bug 521 follow-up: when CONTENT_STORE becomes a true
+    // lazy cache, this explicit eviction can go away.)
+    crate::db::delete_content(story_id);
+
+    Ok(())
+}
+
 /// Extract a `PipelineItemView` from a `PipelineItemCrdt`.
 fn extract_item_view(item: &PipelineItemCrdt) -> Option<PipelineItemView> {
     let story_id = match item.story_id.view() {