A mid-merge server restart used to silently kill the merge: the
in-flight tokio task died with the process, reap_stale_merge_jobs ran
on the new boot, saw the Running entry from the previous boot, and
simply deleted it. Mergemaster polling `get_merge_status` then saw
"Merge job disappeared", treated it as a strike, and after three
restarts escalated the story to MergeFailureFinal — even though no
real merge failure ever happened (this is what trapped story 998
during the bug 1001 iteration cycle).
Reap now also fires a `WatcherEvent::WorkItem reassign` for the
cleared story so the auto-assign watcher loop re-runs
start_merge_agent_work on the fresh boot. The story is still in
4_merge/; the merge resumes automatically. The change is contained to
the reap path — start_merge_agent_work's own behaviour is unchanged.
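The shape of the reap-path change, as a minimal sketch. `WatcherEvent` and `reap_stale_merge_jobs` are names from this commit; the job record, the event payload, and the plain `std::sync::mpsc` channel are simplified stand-ins for the real (tokio-based) types:

```rust
use std::sync::mpsc::Sender;

// Simplified stand-in for the real watcher event type.
#[derive(Debug, PartialEq)]
enum WatcherEvent {
    // Carries the story id so the auto-assign loop can re-run
    // start_merge_agent_work for it on the fresh boot.
    WorkItem { story_id: u64 },
}

// Simplified stand-in for a Running merge-job entry.
struct MergeJob {
    story_id: u64,
    boot_id: u64, // boot the job was started on
}

/// Drop Running entries left over from previous boots, but instead of
/// silently deleting them, emit a reassign event for each cleared story
/// so the merge resumes on the fresh boot.
fn reap_stale_merge_jobs(
    jobs: &mut Vec<MergeJob>,
    current_boot: u64,
    watcher_tx: &Sender<WatcherEvent>,
) {
    jobs.retain(|job| {
        if job.boot_id == current_boot {
            return true; // still live on this boot, keep it
        }
        // Stale entry: clear it AND fan out a re-dispatch event.
        let _ = watcher_tx.send(WatcherEvent::WorkItem { story_id: job.story_id });
        false
    });
}
```

The only behavioural addition is the `send` before the entry is dropped; the retain/delete logic itself is unchanged.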
Added regression test
reap_stale_merge_jobs_emits_reassign_watcher_event that asserts the
new event fires. Existing
reap_stale_merge_jobs_removes_old_running_entry_without_merge still
passes (the "without_merge" guarantee is about agent spawning, not
about absence of watcher events).
Also exposes AgentPool::watcher_tx() as pub(crate) so the merge
runner can fan out re-dispatch events.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
MergeFailureFinal was a dead end in move_story: the only transitions
out were Freeze (→ Frozen) and a self-loop on MergemasterAttempted, so
once mergemaster exhausted its 3-retry budget the only way to get a
story coding again was to delete and recreate it.
The respawn budget is a mergemaster bookkeeping detail, not a hard
ceiling. A human operator inspecting a Final story can reasonably
decide the gate failure is fixable, so this adds the same
FixupRequested → Coding edge that already exists for plain
MergeFailure.
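A minimal sketch of the new edge. The state and transition names follow the commit text, but the enums and the `next_state` helper are simplified stand-ins for the real move_story transition table:

```rust
// Simplified stand-ins for the pipeline's story states and transitions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum StoryState {
    Coding,
    MergeFailure,
    MergeFailureFinal,
    Frozen,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum Transition {
    Freeze,
    FixupRequested,
    MergemasterAttempted,
}

/// Returns the next state if the transition is legal, None otherwise.
fn next_state(from: StoryState, t: Transition) -> Option<StoryState> {
    use StoryState::*;
    use Transition::*;
    match (from, t) {
        // Edge that already existed for plain MergeFailure ...
        (MergeFailure, FixupRequested) => Some(Coding),
        // ... now mirrored for the Final variant, so an operator can send
        // an exhausted story back to coding without delete + recreate.
        (MergeFailureFinal, FixupRequested) => Some(Coding),
        // The two pre-existing ways out of Final.
        (MergeFailureFinal, Freeze) => Some(Frozen),
        (MergeFailureFinal, MergemasterAttempted) => Some(MergeFailureFinal),
        _ => None,
    }
}
```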
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The recovery tool was a one-shot migration aid for the half-written
items that existed before the Stage 1 allocator fix. The three live
orphans (989/1000/1001) have been migrated; the Stage 1 fix prevents
new half-writes; the tool's job is done.
Removes the MCP wrapper, schema, dispatch case, and tools-list
assertion. The db::recover module itself stays in-process (under
`#[allow(dead_code)]`) so it can be re-exposed quickly if the bug
ever resurfaces — its regression tests still run as part of the
default suite.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The first dry-run against the live pipeline surfaced 735 orphans (35
tombstoned half-writes, 700 stale content rows with no CRDT entry —
mostly artefacts of the pre-numeric-id era). Bulk-recovering would
resurrect a lot of stories the user deliberately purged in the past.
Add an optional `story_ids` filter that restricts both discovery (in
dry-run) and recovery to a named subset, so the operator can target
the specific recent half-writes without touching anything else. The
new test asserts the filter is honoured.
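The filter's shape, as a sketch. The `Orphan` record and `discover_orphans` signature here are hypothetical simplifications; the point is only the `Option`-wrapped subset semantics — `None` keeps the old scan-everything behaviour, `Some(ids)` restricts both discovery and recovery:

```rust
use std::collections::HashSet;

// Hypothetical simplified orphan record; real discovery returns
// richer metadata per item.
#[derive(Debug, Clone, PartialEq)]
struct Orphan {
    story_id: u64,
}

/// Discovery with an optional story_ids filter.
fn discover_orphans(all: &[Orphan], story_ids: Option<&HashSet<u64>>) -> Vec<Orphan> {
    all.iter()
        // None => no restriction; Some(ids) => only the named subset.
        .filter(|o| story_ids.map_or(true, |ids| ids.contains(&o.story_id)))
        .cloned()
        .collect()
}
```

Making the parameter optional (rather than defaulting to an empty set) keeps existing unfiltered callers source-compatible.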
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds db::recover, a discovery + recovery layer for pipeline items that
got half-written before the Stage 1 fix landed (content in content
store + SQLite shadow, no live CRDT entry). For each orphan, the
content body is re-anchored to a fresh non-tombstoned id and the old
id's content row is cleared.
Exposed as the recover_half_written_items MCP tool. dry_run defaults
to true so the caller can review what would change before mutating.
YAML front-matter parsing is hand-rolled and scoped to the three
fields the create_*_file path emits (name, type, depends_on). It
tolerates missing or malformed lines by falling back to safe
defaults; the orphan is recovered with the best metadata we can pull
from the body and the rest is left to the operator to fix up.
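A sketch of what "hand-rolled and scoped to three fields" means in practice. The field names come from the commit text; the struct and parsing details below are simplified stand-ins (the real parser may differ in how it handles lists and whitespace):

```rust
/// The three fields the create_*_file path emits. Default values are
/// the safe fallbacks used when a line is missing or malformed.
#[derive(Debug, Default, PartialEq)]
struct FrontMatter {
    name: String,
    kind: String, // the `type:` field; `type` is a keyword in Rust
    depends_on: Vec<String>,
}

/// Hand-rolled parse, scoped to `key: value` lines between `---`
/// fences. Unknown or malformed lines are ignored, never errors.
fn parse_front_matter(body: &str) -> FrontMatter {
    let mut fm = FrontMatter::default();
    let mut lines = body.lines();
    if lines.next().map(str::trim) != Some("---") {
        return fm; // no front-matter at all: fall back to defaults
    }
    for line in lines {
        let line = line.trim();
        if line == "---" {
            break; // closing fence
        }
        match line.split_once(':') {
            Some(("name", v)) => fm.name = v.trim().to_string(),
            Some(("type", v)) => fm.kind = v.trim().to_string(),
            Some(("depends_on", v)) => {
                // Tolerate a simple inline list: [a, b, c]
                fm.depends_on = v
                    .trim()
                    .trim_start_matches('[')
                    .trim_end_matches(']')
                    .split(',')
                    .map(|s| s.trim().to_string())
                    .filter(|s| !s.is_empty())
                    .collect();
            }
            _ => {} // malformed or unknown line: keep the default
        }
    }
    fm
}
```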
The discovery step is read-only and idempotent. Recovery is also
idempotent in the sense that once an orphan is lifted, the next
discovery pass won't see it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause: db::next_item_number scanned the visible CRDT index and the
content store but not the tombstone set, so it would hand out a numeric
ID whose CRDT entry had been tombstoned. crdt_state::write_item then
silently no-op'd the insert (tombstone-match guard) while the content
store and SQLite shadow happily accepted the row, producing a split-
brain half-write that was invisible to every CRDT-driven read path and
couldn't be cleaned up by delete_story / purge_story.
This change closes the loop:
- crdt_state::read::{is_tombstoned, tombstoned_ids} expose the
tombstone set so callers outside crdt_state can consult it.
- db::next_item_number now scans tombstoned_ids() too. The allocator
skips past tombstoned numeric IDs instead of treating their slots as
free.
- write_item logs a WARN when it rejects a write for a tombstoned ID
(was silent). The warn is a tripwire — if the allocator ever lets one
slip through again we'll see it in the log.
- create_item_in_backlog adds two defence-in-depth checks:
(a) before any write, reject if the allocator returned a
tombstoned ID;
(b) after the writes, call read_item to confirm the CRDT entry
materialised. If not, roll back the content-store + shadow-DB
rows via db::delete_item and return Err.
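The allocator half of the fix, as a sketch. The real db::next_item_number reads these sets from the CRDT index, the content store, and the new crdt_state::read::tombstoned_ids accessor; the monotonic max-plus-one allocation below is a simplified stand-in for however the real allocator picks the next slot:

```rust
use std::collections::HashSet;

/// The next numeric ID must be past every visible CRDT entry, every
/// content-store row, AND every tombstone. The old code scanned only
/// the first two sets, so a tombstoned ID's slot looked free.
fn next_item_number(
    visible_ids: &HashSet<u64>,
    content_ids: &HashSet<u64>,
    tombstoned_ids: &HashSet<u64>,
) -> u64 {
    let max_used = visible_ids
        .iter()
        .chain(content_ids)
        .chain(tombstoned_ids) // the set the old allocator forgot
        .copied()
        .max()
        .unwrap_or(0);
    max_used + 1
}
```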
Regression tests cover the allocator skip, the is_tombstoned accessor,
and the create_item_in_backlog rollback path.
Out of scope for this commit:
- Recovery of the already-half-written items currently in the running
pipeline (989, 1000, 1001) — Stage 2/3 of the plan, handled
separately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add distinct icons to the StagePanel/GatewayPanel/render.rs status
output for blocked-with-running-recovery (robot), blocked-with-queued-
recovery (hourglass), and blocked-cold (red circle). All 2822 tests pass.
keepalive_connection_survives_with_pong_responses set ping_ms=100,
timeout_ms=250, so the server's pong-deadline fired ~560ms after the
first ping — only ~60ms past the end of the test's 500ms await window.
Under CI scheduler jitter that 60ms slack was insufficient and the
server timer fired inside the test window, closing the connection
mid-await and producing a flake.
Bump timeout_ms to 2000 so the pong-deadline cannot fire within the
test window under any realistic jitter. ping_ms stays at 100 so the
test still exercises multiple ping/pong rounds in the same wall-clock
budget.
Test still passes locally; the flake had been tripping story 964's
merge gate.
Two stories landed at the merge gate today (961, 962) with everything
else green, killed by a single missing `///` doc comment. The merge
gate runs source-map-check; script/check (and therefore the
mcp__huskies__run_check tool that coders use during iteration) did
not. So coders only saw the failure when the merge gate fired —
minutes after they thought they were done.
Chain source-map-check after cargo check so every iteration loop
catches a missing `///` instantly. AGENT.md already calls this out as
a required pre-commit step; making it part of the fast feedback loop
removes the "I forgot" failure mode.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two stories today (961, 962) passed every other gate and got bounced
at the merge step on a single missing `///` on a `pub mod` line.
Sonnet keeps treating the doc comment as optional when the rule says
"add doc comments to new modules and pub functions/structs/enums."
Promote the rule to its own loud section with no-exceptions wording
and a concrete reminder to run source-map-check before committing.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reorders the gate so fmt --check, duplicate-module scan, clippy, and
doc-coverage run before the frontend build and the multi-minute test
suites. set -euo pipefail short-circuits on the first failure, so a
fmt or clippy drift now fails in seconds instead of after a 30s
frontend build.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "living document" rule was soft and got ignored — coders wrote
PLAN.md once at session start and then drifted away from it. Tie the
update to a trigger they already do (the wip/final commit), and call
out stale "Current state" as a process failure.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>