Commit Graph

338 Commits

Author SHA1 Message Date
dave 734597902f huskies: merge 915 2026-05-12 15:38:25 +00:00
dave a34c9796b5 huskies: merge 913 2026-05-12 15:30:23 +00:00
dave 379ff16d3e huskies: merge 905 2026-05-12 15:02:58 +00:00
Timmy ddc4228b10 feat(904): MCP progress notifications + SSE for long-running tool calls
Follow-up to bug 903. The attach fix made run_tests retries safe, but
agents still observed the underlying MCP transport timeout as a
tool-call error and had to handle it via retry. Implement the proper
fix: MCP `notifications/progress` events keep the client's transport
timer alive so the call never errors from the agent's perspective.

What changed:

server/src/http/mcp/progress.rs (new)
  - `ProgressEmitter` (progressToken + mpsc sender) installed in a
    `tokio::task_local!` scope by the SSE response path.
  - `emit_progress(progress, total, message)` builds a JSON-RPC
    `notifications/progress` message and sends it via the channel.
    No-op when no emitter is in scope (plain JSON path / tests / API
    runtimes), so tool handlers can call it unconditionally.
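
A minimal sketch of that pattern, assuming serde_json and a tokio mpsc
channel back to the SSE writer (field and channel shapes here are
illustrative, not the exact code):

```rust
use serde_json::json;
use tokio::sync::mpsc;

/// progressToken + channel back to the SSE response writer.
#[derive(Clone)]
pub struct ProgressEmitter {
    pub progress_token: serde_json::Value, // string or integer per MCP
    pub tx: mpsc::UnboundedSender<serde_json::Value>,
}

tokio::task_local! {
    static EMITTER: ProgressEmitter;
}

/// Build a JSON-RPC `notifications/progress` message and send it.
/// No-op when no emitter is in scope, so handlers call it freely.
pub fn emit_progress(progress: u64, total: Option<u64>, message: Option<&str>) {
    // try_with errs when the task-local is unset (plain JSON path,
    // tests, API runtimes), which gives the unconditional-call no-op.
    let _ = EMITTER.try_with(|e| {
        let mut params = json!({
            "progressToken": e.progress_token,
            "progress": progress,
        });
        if let Some(t) = total { params["total"] = json!(t); }
        if let Some(m) = message { params["message"] = json!(m); }
        let _ = e.tx.send(json!({
            "jsonrpc": "2.0",
            "method": "notifications/progress",
            "params": params,
        }));
    });
}
```

The SSE path installs the emitter with
`EMITTER.scope(emitter, dispatch_future)` around the spawned dispatch task.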

server/src/http/mcp/mod.rs
  - mcp_post_handler now detects `Accept: text/event-stream` AND a
    `params._meta.progressToken` on tools/call. When both are present,
    routes through `sse_tools_call` instead of the plain JSON path.
  - sse_tools_call: spawns the dispatch task with the emitter installed,
    builds an SSE stream that interleaves incoming progress events with
    the final JSON-RPC response, with a 15s keep-alive interval as a
    backstop for tools that don't emit their own progress.
  - Plain JSON behaviour is unchanged for non-SSE clients and for
    everything other than tools/call.
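
The detection itself is small; a sketch of its shape (helper name
illustrative, header string already extracted by the handler):

```rust
use serde_json::Value;

/// Returns the progressToken iff this request should take the SSE path.
fn sse_progress_token(accept: Option<&str>, method: &str, params: &Value) -> Option<Value> {
    let wants_sse = accept.map_or(false, |a| a.contains("text/event-stream"));
    if method != "tools/call" || !wants_sse {
        return None; // plain JSON path, unchanged
    }
    params.get("_meta")?.get("progressToken").cloned()
}
```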

server/src/http/mcp/shell_tools/script.rs
  - tool_run_tests poll loop emits `notifications/progress` every 25s
    of elapsed time (well below the typical ~60s MCP transport
    timeout). Attached callers (the bug 903 fix path) also emit so
    their MCP socket stays alive while waiting for the in-flight job.
  - Output filtering: on a passing run the response now returns a
    one-line summary ("All N tests passed.") instead of the full
    `cargo test` stdout, which was pure noise that burned agent
    tokens. Failure output is unchanged (truncated tail with the
    `failures:` section and final test_result line). CRDT entry
    stores the same filtered value so attached callers see it too.
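
The cadence, roughly (`job_finished` is a stand-in; `emit_progress` is
the progress.rs helper sketched above):

```rust
use std::time::{Duration, Instant};

async fn poll_until_done(job_finished: impl Fn() -> bool) {
    let started = Instant::now();
    let mut last_emit = Instant::now();
    while !job_finished() {
        tokio::time::sleep(Duration::from_secs(1)).await;
        // 25s sits well inside the ~60s client transport timeout, so
        // the client's timer is always reset before it can fire.
        if last_emit.elapsed() >= Duration::from_secs(25) {
            emit_progress(started.elapsed().as_secs(), None, Some("tests still running"));
            last_emit = Instant::now();
        }
    }
}
```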

Tests (3 new):
  - emit_progress_no_op_without_emitter — calling outside scope is safe
  - emit_progress_sends_notification_when_emitter_installed — full path
  - emit_progress_omits_optional_fields — total/message optional

Not changed: coder system_prompts still tell agents to retry on
transport-timeout errors. That advice is now belt-and-braces — if
claude-code's HTTP MCP client honours progress notifications, no agent
will ever observe the error; if not, retry is still safe post-903. We
can drop the retry advice once we've observed the SSE path working in
the field.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 15:05:04 +01:00
Timmy d64f1e94ff fix(903): run_tests attaches to in-flight job instead of kill+respawn
Bug 903: every `run_tests` MCP call killed the prior `cargo test` child
for the same worktree and spawned a fresh one. Combined with the
~60s MCP client-side timeout and the 896 agent prompt that told agents
to "call run_tests again — it attaches to the in-flight test job",
this produced a respawn loop: agent calls, MCP times out at 60s, agent
retries, run_tests kills the running build and starts a new one. The
test suite never reaches the finish line.

Server log evidence: "Started test job for <worktree> (pid N)" with a
new PID every ~60-90s for the same worktree.

Fix: when `run_tests` is called and a job is already in flight for that
worktree, ATTACH to it instead of killing+respawning. The original job's
poll loop already writes the final status to the CRDT `test_jobs`
collection; attached callers just poll that CRDT entry (the same
pattern `get_test_result` uses) and return the result when the
in-flight job transitions out of "running". The 896 prompt's claim is
now actually true.

Worktrees remain isolated from each other and may run `cargo test`
concurrently — there is no cross-worktree serialisation. The single
invariant is "at most one test job per worktree at a time".

New test: `tool_run_tests_concurrent_calls_attach_to_single_job`
spawns two concurrent calls on the same worktree against a 2s
`sleep`-based script and asserts total elapsed stays close to 2s
(attach) rather than 4s (respawn).
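
Its shape, roughly (helper and result names illustrative):

```rust
#[tokio::test]
async fn concurrent_calls_attach() {
    let start = std::time::Instant::now();
    // Both calls target the same worktree, whose test script is `sleep 2`.
    let (a, b) = tokio::join!(tool_run_tests("wt"), tool_run_tests("wt"));
    assert!(a.passed && b.passed);
    // Attach => both ride one ~2s job; respawn would be ~4s or worse.
    assert!(start.elapsed() < std::time::Duration::from_secs(3));
}
```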

Note: the cross-worktree linker-OOM symptom Timmy reported in the
field was downstream of the respawn loop. Killed-but-not-fully-reaped
cargo invocations piled memory pressure well beyond the nominal N
worktrees' worth. With the attach fix, each worktree runs exactly one
in-flight build at a time and old builds finish cleanly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-12 14:22:35 +01:00
dave 9be438e6d3 huskies: merge 865 2026-05-08 14:29:06 +00:00
dave 61cf7684de huskies: merge 864 2026-04-30 22:27:51 +00:00
dave 1251b869a6 style: cargo fmt on today's new code (883/884/886/opus-pin)
The mergemaster gates run rustfmt and rejected 864's merge because
several files I added/touched in master today had not been fmt'd.
Six files affected, mostly trivial line-wrapping nits. Fixes the
formatting gate for the next 864 merge attempt.
2026-04-30 22:15:37 +00:00
dave 7a0c186d94 fix(886): parse cargo diagnostics in run_check/run_build/run_lint
Before: tool_run_check (and run_build/run_lint via run_script_tool)
returned the entire cargo log verbatim in `output`. For runs with many
errors the response routinely exceeded the MCP token cap, was dumped
to a tool-results file, and the agent had to scrape it with python3
just to see the error list — burning many turns on file archaeology
for what should be a one-look operation. Real example: 864's coder
hit `result (143,708 characters) exceeds maximum allowed tokens` and
spent ~8 turns extracting 3 errors.

Now:
- New `service::shell::parse_diagnostics` parses `error[CODE]:` /
  `warning[CODE]:` headers + their `--> file:line` markers into
  structured `Diagnostic { kind, code, message, file, line }`.
- `tool_run_check` (and the run_build/run_lint shared body) returns
  `{ passed, exit_code, errors: [...], warnings: [...], summary }`.
  Raw `output` is dropped from the default response.
- New `verbose: bool` argument (default false) restores the raw
  output for callers who actually need it.
- Updated the existing tool_run_check test to assert the new
  contract (150 errors → 150 structured entries, response < 50KB).
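
A regex-based sketch of the parse (the real parser's internals may
differ; the `Diagnostic` shape is the one named above):

```rust
use regex::Regex;

#[derive(Debug)]
pub struct Diagnostic {
    pub kind: String,         // "error" | "warning"
    pub code: Option<String>, // e.g. "E0308"; some warnings carry none
    pub message: String,
    pub file: Option<String>,
    pub line: Option<u32>,
}

pub fn parse_diagnostics(log: &str) -> Vec<Diagnostic> {
    let header = Regex::new(r"^(error|warning)(?:\[([A-Z0-9]+)\])?: (.+)$").unwrap();
    let location = Regex::new(r"^\s*--> ([^:]+):(\d+)").unwrap();
    let mut out = Vec::new();
    for l in log.lines() {
        if let Some(c) = header.captures(l) {
            out.push(Diagnostic {
                kind: c[1].to_string(),
                code: c.get(2).map(|m| m.as_str().to_string()),
                message: c[3].to_string(),
                file: None,
                line: None,
            });
        } else if let (Some(c), Some(d)) = (location.captures(l), out.last_mut()) {
            // Attach the `--> file:line` marker to its header.
            d.file = Some(c[1].to_string());
            d.line = c[2].parse().ok();
        }
    }
    out
}
```

In practice summary lines like `error: aborting due to 3 previous
errors` also match the header regex and need filtering out.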

Skipped run_tests in this pass — its parser would need to recognise
test-runner output (different format from cargo); will land separately.

Closes 886.
2026-04-30 15:06:02 +00:00
dave 7ac3fc2e3e feat(884): persistent perm_rx lock-holder for Matrix bot
Before: handle_message.rs acquired services.perm_rx only while processing
one chat message and dropped it on chat_fut completion. The moment the
bot wasn't actively responding, prompt_permission auto-denied any spawned
coder bash call as "no interactive session" — making unattended coder
work impossible.

Now: a permission_listener task is spawned at bot startup and holds
perm_rx for the bot's lifetime. Permission requests are forwarded to
the first configured Matrix room, with replies resolved by the existing
on_room_message handler via pending_perm_replies. The per-message
acquire is gone from handle_message.rs (chat_fut just awaits cleanly).

- New module: chat/transport/matrix/bot/permission_listener.rs.
- Wired into run_bot before BotContext construction; bot_sent_event_ids
  is hoisted out so the listener and the rest of the bot share it.
- handle_message.rs no longer touches perm_rx.
- diagnostics/permission.rs comment updated to reflect the new reality.
- Regression test asserts the listener forwards a PermissionForward to
  the target room and records the pending reply key — exactly the path
  that was broken when no chat_fut was in flight.
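
In sketch form (matrix-sdk specifics elided; `send_permission_forward`
is a stand-in for the forwarding code):

```rust
tokio::spawn(async move {
    // Holding perm_rx for the bot's lifetime is the fix: while this
    // task lives, prompt_permission always has a listener and never
    // auto-denies with "no interactive session".
    while let Some(req) = perm_rx.recv().await {
        // Forward to the first configured room; record the outgoing
        // event id so on_room_message can match the human's reply.
        if let Ok(event_id) = send_permission_forward(&first_room, &req).await {
            pending_perm_replies.lock().await.insert(event_id, req);
        }
    }
});
```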

Discord/Slack/WhatsApp transports still acquire perm_rx per message
(commands.rs:368 / commands/llm.rs:83 / commands/llm.rs:82). They are
not the active transport in this deployment so their per-message acquire
remains dormant; the same listener pattern should be applied to them as
follow-up work in 884 phase 2.
2026-04-30 13:53:46 +00:00
Timmy 3a9ff5e740 fix(mcp): restore HTTP /mcp endpoint after 855 regression
855 deleted the HTTP /mcp route and pointed agents at ws://...crdt-sync,
but Claude Code's .mcp.json doesn't speak ws:// and the rendezvous WS
never had MCP method handlers wired up — so every spawned Claude Code
agent (gateway-routed and local) booted with zero huskies tools and
died on --permission-prompt-tool=mcp__huskies__prompt_permission.

Restore mcp_post_handler / mcp_get_handler / handle_initialize, re-add
the /mcp route, and revert all three .mcp.json writers to emit
http://localhost:{port}/mcp with explicit "type": "http". Reuses the
already-extracted gateway::jsonrpc types and the surviving
dispatch_tool_call / list_tools surfaces — net add ~140 lines.
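
For reference, the stanza those writers emit has roughly this shape
(the huskies server name is inferred from the mcp__huskies__ tool
prefix; {port} is filled in per instance):

```json
{
  "mcpServers": {
    "huskies": {
      "type": "http",
      "url": "http://localhost:{port}/mcp"
    }
  }
}
```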

Federation work is unaffected: /crdt-sync continues to do CRDT sync,
which is what it was actually doing. MCP-over-WebSocket for cross-LAN
agents was never wired up by 855 and can be done as a proper follow-up
with a regression test that boots a real claude and verifies tool
registration.

Verified end-to-end: /mcp initialize, tools/list (74 tools incl.
prompt_permission), and tools/call all respond correctly from inside
the rebuilt container.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 14:03:16 +01:00
dave a796bd933f huskies: merge 879 2026-04-30 00:26:35 +00:00
dave 8fc581ad6b huskies: merge 878 2026-04-29 23:53:15 +00:00
dave 1d86202abb huskies: merge 868 2026-04-29 23:34:24 +00:00
dave 9a3f60d5d3 huskies: merge 866 2026-04-29 22:47:53 +00:00
dave a49f668b5a huskies: merge 867 2026-04-29 22:17:08 +00:00
dave e56bd2d834 huskies: merge 877 2026-04-29 22:10:47 +00:00
dave 7e2f122d36 huskies: merge 880 2026-04-29 21:46:12 +00:00
dave 4d24b5b661 huskies: merge 855 2026-04-29 21:41:03 +00:00
dave c0801c3894 huskies: merge 875 2026-04-29 18:44:50 +00:00
dave a956a98197 huskies: merge 847 2026-04-29 18:40:08 +00:00
dave 320be659c0 huskies: merge 816 2026-04-29 17:57:34 +00:00
dave fc86774618 huskies: merge 857 2026-04-29 17:45:51 +00:00
dave 8a42839b37 huskies: merge 820 2026-04-29 17:20:32 +00:00
dave 9bd3c10a09 huskies: merge 872 2026-04-29 15:59:37 +00:00
dave 7f8467b068 huskies: merge 871 2026-04-29 15:45:54 +00:00
dave db65271587 huskies: merge 842 2026-04-29 15:10:11 +00:00
dave f3e4d5d072 huskies: merge 869 2026-04-29 14:58:11 +00:00
dave 59b626d3ba huskies: merge 824 2026-04-29 13:42:58 +00:00
dave b4854cf693 huskies: merge 862 2026-04-29 13:28:37 +00:00
dave 11d111360d huskies: merge 858 2026-04-29 10:47:18 +00:00
dave 4ed1fb5110 huskies: merge 854 2026-04-29 09:29:32 +00:00
dave a65cd86c8f huskies: merge 798 2026-04-28 16:25:33 +00:00
dave 1e40215c3e huskies: merge 797 2026-04-28 16:06:50 +00:00
dave 32a3465fc4 fix: tell the truth about run_tests being blocking
`tool_run_tests` in `server/src/http/mcp/shell_tools/script.rs` is fully
blocking server-side: it spawns the test child, polls every 1s until
exit (or `TEST_TIMEOUT_SECS` = 1200s), and returns the full
{passed, exit_code, output} directly. There is NO async/started-status
return path.
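
The blocking shape, as a sketch (helper names illustrative; the
constant is the real one):

```rust
use std::process::{Child, ExitStatus};
use std::time::{Duration, Instant};

const TEST_TIMEOUT_SECS: u64 = 1200;

/// Poll the spawned `cargo test` child to completion, 1s at a time.
/// The MCP call stays open the whole while; there is no early return.
async fn wait_for_child(mut child: Child) -> std::io::Result<Option<ExitStatus>> {
    let started = Instant::now();
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok(Some(status)); // exited: build {passed, exit_code, output}
        }
        if started.elapsed().as_secs() >= TEST_TIMEOUT_SECS {
            child.kill()?;
            return Ok(None); // timed out
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```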

But two places told agents the wrong story:
1. `tools_list/system_tools.rs` description claimed "Returns immediately
   with status: started. Poll get_test_result..." — agents read tool
   descriptions for protocol semantics, so they followed this and burned
   turns polling get_test_result.
2. `agents.toml` had been correctly saying it blocks, but my last commit
   (776aad38) "fixed" it the wrong way based on a misread of the code.

Now both say: run_tests blocks server-side, returns the full result, do
not poll get_test_result. get_test_result remains for external observers
(UI checking on a job another caller started).

Reverts the prompt change in 776aad38 with the correct text.
2026-04-28 15:59:06 +00:00
dave f63464852b huskies: merge 770 2026-04-28 15:38:34 +00:00
dave 1946709681 huskies: merge 788 2026-04-28 15:28:31 +00:00
dave aed29b952c huskies: merge 769 2026-04-28 13:42:47 +00:00
dave b7db6d6aae huskies: merge 775 2026-04-28 12:25:59 +00:00
dave e9ed58502a huskies: merge 771 2026-04-28 12:08:44 +00:00
dave 05d057a40a huskies: merge 782 2026-04-28 11:02:02 +00:00
dave 01169332b3 huskies: merge 774 2026-04-28 10:51:59 +00:00
dave 0c2789b2c1 huskies: merge 768 2026-04-28 10:12:27 +00:00
dave fb5a21cfbb huskies: merge 778 2026-04-28 10:01:10 +00:00
dave 38e828979c huskies: merge 766 2026-04-28 08:59:13 +00:00
dave d1a2393b32 huskies: merge 760 2026-04-28 00:22:29 +00:00
dave 63ce7b9ec3 huskies: merge 759 2026-04-28 00:07:04 +00:00
dave bf1393fa60 huskies: merge 741 2026-04-27 23:44:32 +00:00
dave dffa05d703 huskies: merge 689 2026-04-27 23:30:55 +00:00
dave 1388658ae8 huskies: merge 730_story_use_numeric_only_story_ids_across_mcp_worktrees_git_branches_and_log_paths 2026-04-27 20:22:47 +00:00