Huskies
A story-driven development server that manages work items, spawns coding agents, and runs them through a pipeline from backlog to done. Ships as a single Rust binary with an embedded React frontend. It can also be controlled from WhatsApp, Matrix, and Slack chats.
Getting started with Claude Code
- Download the huskies binary (or build from source — see below).
- From your project directory, scaffold and start the server:
huskies init --port 3000
This creates a .huskies/ directory with the pipeline structure, project.toml, and .mcp.json. The .mcp.json file lets Claude Code discover huskies' MCP tools automatically.
Huskies also ships an embedded React frontend. Once the server is running, open http://localhost:3000 to see the pipeline board, agent status, and chat interface.
- Open a Claude Code session in the same project directory, or visit http://localhost:3000/.
- Tell Claude: "help me set up this project with huskies." Claude will walk you through the setup wizard — generating project context, tech stack docs, and test/release scripts. Review each step and confirm or ask to retry.
Once setup is complete, Claude can create stories, start agents, check status, and manage the full pipeline via MCP tools — no commands to memorize.
Chat transports
Huskies can be controlled via bot commands in Matrix, WhatsApp, and Slack. Configure a transport in .huskies/bot.toml — see the example files:
- .huskies/bot.toml.matrix.example
- .huskies/bot.toml.whatsapp-meta.example
- .huskies/bot.toml.whatsapp-twilio.example
- .huskies/bot.toml.slack.example
Prerequisites for building
- Rust (2024 edition)
- Node.js and npm
- Docker (for Linux cross-compilation and container deployment)
- cross (cargo install cross): optional, for Linux static builds. Only needed when building for a different architecture, e.g. building a Linux binary from a Mac.
You only need these installed if you want to build Huskies yourself. Alternatively, download the prebuilt huskies binary for your system from https://code.crashlabs.io/crashlabs/huskies/releases
Building for production
cargo build --release
The release binary embeds the frontend via rust-embed. Output: target/release/huskies.
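The embedding itself follows the standard rust-embed pattern: roughly the sketch below, where the folder path is an assumption about this repo's layout rather than a quote from its source.
use rust_embed::RustEmbed;

// Sketch of the rust-embed pattern; "frontend/dist/" is a guess at
// this repo's build output path, not taken from its source.
#[derive(RustEmbed)]
#[folder = "frontend/dist/"]
struct FrontendAssets;

// Serve an embedded file, falling back to index.html so client-side
// routes in the React app still resolve.
fn embedded_file(path: &str) -> Option<Vec<u8>> {
    let file = FrontendAssets::get(path)
        .or_else(|| FrontendAssets::get("index.html"))?;
    Some(file.data.into_owned())
}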
For a static Linux binary (musl, zero dynamic deps):
cross build --release --target x86_64-unknown-linux-musl
Docker:
script/docker_rebuild
# or
script/docker_restart
Running in development
# Run tests
script/test
# Run the server
cargo run -- --port 3000
# In another terminal, run the frontend dev server
cd frontend && npm install && npm run dev
Configuration lives in .huskies/project.toml. See .huskies/bot.toml.*.example for transport setup.
Releasing
Requires a Gitea API token in .env (GITEA_TOKEN=your_token).
script/release 0.7.1
This bumps version in Cargo.toml and package.json, builds macOS arm64 and Linux amd64 binaries, tags the repo, and publishes a Gitea release with changelog and binaries attached.
Multi-node CRDT sync (rendezvous)
Huskies nodes can replicate pipeline state in real time over WebSocket. Add a rendezvous field to .huskies/project.toml to configure a peer:
rendezvous = "ws://other-host:3001/crdt-sync"
On startup, this node opens an outbound WebSocket connection to the configured URL and exchanges CRDT ops bidirectionally. The connection is fully symmetric: both sides send a bulk state dump on connect, then stream individual ops as they are applied locally.
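As a mental model, the exchange could be described with message shapes like the following. These names and types are purely illustrative; the real wire protocol is defined by the server, not by this README.
use serde::{Deserialize, Serialize};

// Illustrative wire shapes only, not huskies' actual types.
#[derive(Serialize, Deserialize)]
#[serde(tag = "type")]
enum SyncMessage {
    // Sent once by each side immediately after connect.
    BulkState { ops: Vec<CrdtOp> },
    // Streamed afterwards, one message per locally-applied op.
    Op { op: CrdtOp },
}

#[derive(Serialize, Deserialize)]
struct CrdtOp {
    op_id: String, // hex OpId
    payload: serde_json::Value,
}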
Reconnect behaviour
If the peer is unreachable on startup (or the connection drops mid-session), the client retries with exponential backoff starting at 1 s and capping at 30 s. Failures are logged at WARN; after 10 consecutive failures the level escalates to ERROR to surface persistent connectivity problems.
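A minimal sketch of that retry policy (names are illustrative; the real loop lives in the server):
use std::time::Duration;

// Illustrative sketch of the documented policy: exponential backoff
// from 1 s doubling to a 30 s cap, WARN escalating to ERROR after
// 10 consecutive failures.
async fn sync_client_loop(url: &str) {
    let mut delay = Duration::from_secs(1);
    let mut failures: u32 = 0;
    loop {
        match connect_and_sync(url).await {
            Ok(()) => {
                // Connection dropped after a successful session:
                // reset and retry from 1 s again.
                delay = Duration::from_secs(1);
                failures = 0;
            }
            Err(e) => {
                failures += 1;
                if failures >= 10 {
                    tracing::error!("crdt-sync to {}: {} ({} consecutive failures)", url, e, failures);
                } else {
                    tracing::warn!("crdt-sync to {}: {}", url, e);
                }
            }
        }
        tokio::time::sleep(delay).await;
        delay = (delay * 2).min(Duration::from_secs(30));
    }
}

// Placeholder for the actual WebSocket connect + op exchange.
async fn connect_and_sync(_url: &str) -> Result<(), std::io::Error> {
    Err(std::io::Error::new(std::io::ErrorKind::ConnectionRefused, "peer unreachable"))
}

#[tokio::main]
async fn main() {
    sync_client_loop("ws://other-host:3001/crdt-sync").await;
}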
Deployment topologies
Peer-to-peer (two nodes pointing at each other):
Node A ←→ Node B
Configure each node with the other's /crdt-sync URL. Both nodes exchange ops directly: ops propagate in both directions and both nodes converge to the same state. Works well for two machines collaborating on the same project.
Hub-and-spoke (many clients → one central rendezvous):
Client 1 ──┐
Client 2 ──┤── Hub node
Client 3 ──┘
Point multiple client nodes at a single "hub" node. The hub receives ops from
all clients and re-broadcasts them. Clients do not connect to each other —
convergence is mediated through the hub. The hub itself runs a normal huskies
instance with rendezvous unset (it only accepts inbound connections).
Caveat: hub-to-client relay depends on the hub's /crdt-sync inbound WebSocket handler re-broadcasting every received op to all other connected peers. That broadcast happens automatically via the shared SYNC_TX channel (each locally-applied remote op is re-emitted), so hub-and-spoke works today but has not been load-tested. Follow-up work may be needed for large fan-out (many spoke clients) to avoid broadcast-channel lag.
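Conceptually the relay is a broadcast-channel fan-out. The sketch below assumes tokio's broadcast channel as a stand-in for the shared SYNC_TX; the Lagged branch is exactly the fan-out risk mentioned above.
use tokio::sync::broadcast::{self, error::RecvError};

#[tokio::main]
async fn main() {
    // Stand-in for the shared SYNC_TX channel; capacity is illustrative.
    let (sync_tx, _rx) = broadcast::channel::<String>(1024);

    // Each connected spoke gets its own subscription to the shared sender.
    let mut spoke_rx = sync_tx.subscribe();
    tokio::spawn(async move {
        loop {
            match spoke_rx.recv().await {
                // Forward the op over this spoke's WebSocket. A real relay
                // would skip echoing an op back to the spoke it came from.
                Ok(_op) => { /* ws.send(_op).await */ }
                // A spoke that falls more than `capacity` ops behind misses
                // ops: the large fan-out concern noted above.
                Err(RecvError::Lagged(skipped)) => {
                    eprintln!("spoke lagged, skipped {skipped} ops");
                }
                Err(RecvError::Closed) => break,
            }
        }
    });

    // When the hub applies an op received from any peer, re-emitting it
    // on the shared sender delivers it to every other subscriber.
    let _ = sync_tx.send("op from client 1".to_string());

    // Give the spawned task a moment to drain before the demo exits.
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
}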
Startup reconcile pass
On startup, after CRDT replay and database initialisation, huskies runs a reconcile pass that compares pipeline state across three sources:
- In-memory CRDT — the primary source of truth, reconstructed from crdt_ops on startup.
- pipeline_items table — a shadow/materialised view written alongside CRDT updates, used for fast DB queries.
- Filesystem shadows (.huskies/work/N_stage/*.md) — a legacy rendering still written by some paths and read by agent worktrees.
Any disagreement between these sources is drift. The reconcile pass logs a structured line for each drifted item:
[reconcile] DRIFT story=X crdt_stage=Y db_stage=Z fs_stage=W
(MISSING is used where a source has no record for that story.)
Drift types
| Type | Meaning |
|---|---|
| CRDT-only | Story present in CRDT but absent from pipeline_items |
| DB-only | Story present in pipeline_items but absent from CRDT |
| FS-only | Story on the filesystem but absent from both CRDT and DB |
| stage-mismatch | Story present in both CRDT and DB but with different stage values |
Note: a filesystem shadow that lags behind the CRDT/DB stage (both of which agree) is not treated as drift — the FS is a best-effort rendering and is allowed to lag.
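In code terms, the classification reduces to a per-story, three-way stage comparison. This is a sketch of the rules above, not the server's actual implementation:
// Each argument is the stage a given source reports for one story;
// None means MISSING in that source.
fn classify_drift(
    crdt: Option<&str>,
    db: Option<&str>,
    fs: Option<&str>,
) -> Option<&'static str> {
    match (crdt, db) {
        (Some(_), None) => Some("CRDT-only"),
        (None, Some(_)) => Some("DB-only"),
        (Some(c), Some(d)) if c != d => Some("stage-mismatch"),
        // CRDT and DB agree: a lagging or absent FS shadow is not
        // drift, because the filesystem is a best-effort rendering.
        (Some(_), Some(_)) => None,
        (None, None) if fs.is_some() => Some("FS-only"),
        (None, None) => None,
    }
}

fn main() {
    // Stage names here are made up for illustration.
    assert_eq!(
        classify_drift(Some("3_review"), Some("2_build"), None),
        Some("stage-mismatch")
    );
    // FS lag behind an agreeing CRDT/DB pair is tolerated.
    assert_eq!(classify_drift(Some("2_build"), Some("2_build"), None), None);
}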
If any drift is detected, the Matrix/Slack/WhatsApp bot startup announcement includes a count and a suggestion to check the server logs.
Opt-out
Set reconcile_on_startup = false in .huskies/project.toml to disable the
pass during the migration window if it produces noise.
Debugging
Inspecting the in-memory CRDT state
When diagnosing state issues, use the dump_crdt MCP tool or the /debug/crdt HTTP endpoint to inspect the raw in-memory CRDT state directly. These surfaces show the ground truth that the running server holds — not a summarised pipeline view and not the persisted SQLite ops.
MCP tool (from Claude Code or any MCP client):
mcp__huskies__dump_crdt
# dump everything
{}
# restrict to a single item
{"story_id": "42_story_my_feature"}
HTTP endpoint (browser or curl):
# dump everything
curl http://localhost:3001/debug/crdt
# restrict to a single item
curl "http://localhost:3001/debug/crdt?story_id=42_story_my_feature"
Both return a JSON document with:
- metadata — in_memory_state_loaded, total_items, total_ops_in_list, max_seq_in_list, persisted_ops_count, pending_persist_ops_count
- items — one entry per CRDT list item (including tombstoned/deleted entries), each with story_id, stage, name, agent, retry_count, blocked, depends_on, content_index (a hex OpId for cross-referencing with crdt_ops), and is_deleted
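For orientation, the response could be modelled like this. Field names follow the list above; the concrete types (e.g. whether agent is nullable) are assumptions, not the server's own definitions.
use serde::Deserialize;

// Illustrative shapes for the /debug/crdt JSON document.
#[derive(Deserialize)]
struct CrdtDump {
    metadata: Metadata,
    items: Vec<Item>,
}

#[derive(Deserialize)]
struct Metadata {
    in_memory_state_loaded: bool,
    total_items: u64,
    total_ops_in_list: u64,
    max_seq_in_list: u64,
    persisted_ops_count: u64,
    pending_persist_ops_count: u64,
}

#[derive(Deserialize)]
struct Item {
    story_id: String,
    stage: String,
    name: String,
    agent: Option<String>,
    retry_count: u32,
    blocked: bool,
    depends_on: Vec<String>,
    content_index: String, // hex OpId; cross-reference with crdt_ops
    is_deleted: bool,
}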
This is a debug tool. For normal pipeline introspection use get_pipeline_status or GET /api/pipeline instead.
Source Map
See .huskies/specs/tech/STACK.md for the full source map.
GPL-3.0. See LICENSE.