Rename all references from storkit to huskies across the codebase:
- .storkit/ directory → .huskies/
- Binary name, Cargo package name, Docker image references
- Server code, frontend code, config files, scripts
- Fix script/test to build frontend before cargo clippy/test
so merge worktrees have frontend/dist available for RustEmbed
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The timer tick loop now calls move_story_to_current() before
start_agent(), so stories scheduled from the backlog are moved into the
pipeline automatically when the timer fires. The timer bot command also
accepts backlog stories (previously required current).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
strip_mention_separator now skips all non-ASCII-alphanumeric chars
(emoji, colons, spaces) and returns a slice starting at the first
command character. Fixes mention pills with emoji display names
(e.g. "timmy ⚡️ status") not matching bot commands.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
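A minimal sketch of the described stripping, assuming the bot's own mention has already been removed and only the display-name residue (emoji, colon, spaces) precedes the command:

```rust
/// Sketch: skip leading characters that are not ASCII alphanumerics (emoji,
/// variation selectors, colons, spaces left over from a mention pill) and
/// return the slice starting at the first command character.
fn strip_mention_separator(text: &str) -> &str {
    match text.find(|c: char| c.is_ascii_alphanumeric()) {
        Some(start) => &text[start..],
        None => "",
    }
}

#[cfg(test)]
mod tests {
    use super::strip_mention_separator;

    #[test]
    fn skips_emoji_display_name_residue() {
        assert_eq!(strip_mention_separator(" ⚡️ status"), "status");
        assert_eq!(strip_mention_separator(": help"), "help");
    }
}
```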
The /help test expected the help overlay to appear, but /help now goes
through botCommand like other slash commands. Updated the test to match.
Also added reader thread join and child.wait() calls to
claude_code.rs to prevent PTY master fd leaks from web UI chat sessions.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The reader thread spawned in run_agent_pty_blocking was never joined,
leaving a cloned PTY master fd open after the agent exited. When the
pipeline restarted the agent on the same worktree, the stale fd from
the previous session interfered with the new PTY allocation, causing
Claude Code's bundled ripgrep to crash with:
fatal runtime error: assertion failed: output.write(&bytes).is_ok()
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
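A minimal sketch of the cleanup pattern, using `std::process` pipes as a stand-in for the PTY pair (the names and blocking shape are illustrative, not the actual `run_agent_pty_blocking` code):

```rust
use std::io::Read;
use std::process::{Command, Stdio};
use std::thread;

// Sketch: the reader thread owns the read handle, so joining it (and waiting
// on the child) guarantees the descriptor is closed before the pipeline
// restarts an agent on the same worktree.
fn run_agent_blocking(program: &str) -> std::io::Result<Vec<u8>> {
    let mut child = Command::new(program).stdout(Stdio::piped()).spawn()?;
    let mut stdout = child.stdout.take().expect("stdout was piped");

    let reader = thread::spawn(move || {
        let mut buf = Vec::new();
        let _ = stdout.read_to_end(&mut buf); // stream agent output until EOF
        buf
    });

    let _status = child.wait()?; // reap the child so no zombie lingers
    let output = reader.join().unwrap_or_default(); // drops the read handle deterministically
    Ok(output)
}
```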
Builds aarch64-unknown-linux-musl via cross alongside the existing
x86_64 Linux and macOS arm64 targets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This returns the full tool catalog (create stories, spawn agents, record tests, manage worktrees, etc.). Familiarize yourself with the available tools before proceeding. These tools allow you to directly manipulate the workflow and spawn subsidiary agents without manual file manipulation.
-3. **Read Context:** Check `.storkit/specs/00_CONTEXT.md` for high-level project goals.
-4. **Read Stack:** Check `.storkit/specs/tech/STACK.md` for technical constraints and patterns.
-5. **Check Work Items:** Look at `.storkit/work/1_backlog/` and `.storkit/work/2_current/` to see what work is pending.
+3. **Read Context:** Check `.huskies/specs/00_CONTEXT.md` for high-level project goals.
+4. **Read Stack:** Check `.huskies/specs/tech/STACK.md` for technical constraints and patterns.
+5. **Check Work Items:** Look at `.huskies/work/1_backlog/` and `.huskies/work/2_current/` to see what work is pending.
---
@@ -238,7 +238,7 @@ If a user hands you this document and says "Apply this process to my project":
Story Kit includes a chat bot that can be connected to one messaging platform at a time. The bot handles commands, LLM conversations, and pipeline notifications.
-**Only one transport can be active at a time.** To configure the bot, copy the appropriate example file to `.storkit/bot.toml`:
+**Only one transport can be active at a time.** To configure the bot, copy the appropriate example file to `.huskies/bot.toml`:
| Transport | Example file | Webhook endpoint |
|-----------|-------------|-----------------|
@@ -248,7 +248,7 @@ Story Kit includes a chat bot that can be connected to one messaging platform at
@@ -118,8 +118,8 @@ To support both Remote and Local models, the system implements a `ModelProvider`
Multiple instances can run simultaneously in different worktrees. To avoid port conflicts:
-- **Backend:** Set `STORKIT_PORT` to a unique port (default is 3001). Example: `STORKIT_PORT=3002 cargo run`
-- **Frontend:** Run `npm run dev` from `frontend/`. It auto-selects the next unused port. It reads `STORKIT_PORT` to know which backend to talk to, so export it before running: `export STORKIT_PORT=3002 && cd frontend && npm run dev`
+- **Backend:** Set `HUSKIES_PORT` to a unique port (default is 3001). Example: `HUSKIES_PORT=3002 cargo run`
+- **Frontend:** Run `npm run dev` from `frontend/`. It auto-selects the next unused port. It reads `HUSKIES_PORT` to know which backend to talk to, so export it before running: `export HUSKIES_PORT=3002 && cd frontend && npm run dev`
When running in a worktree, use a port that won't conflict with the main instance (3001). Ports 3002+ are good choices.
-As a storkit user with multiple Claude Max subscriptions, I want the system to automatically rotate to a different account when one gets rate limited, so that agents and chat don't stall out waiting for limits to reset.
+As a huskies user with multiple Claude Max subscriptions, I want the system to automatically rotate to a different account when one gets rate limited, so that agents and chat don't stall out waiting for limits to reset.
## Acceptance Criteria
- [ ] OAuth login flow stores credentials per-account (keyed by email), not overwriting previous accounts
- [ ] GET /oauth/status returns all stored accounts and their status (active, rate-limited, expired)
-- [ ] When the active account hits a rate limit, storkit automatically swaps to the next available account's refresh token, refreshes, and retries
+- [ ] When the active account hits a rate limit, huskies automatically swaps to the next available account's refresh token, refreshes, and retries
- [ ] The bot sends a notification in Matrix/WhatsApp when it swaps accounts
- [ ] If all accounts are rate limited, the bot surfaces a clear message with the time until the earliest reset
- [ ] A new /oauth/authorize login adds to the account pool rather than replacing the current credentials
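The swap-on-rate-limit criteria above imply a small pool-selection step. A hedged sketch, assuming a per-account record keyed by email with a stored rate-limit reset time (these types and names are illustrative, not the existing OAuth code):

```rust
use std::time::{Duration, SystemTime};

// Illustrative account record; the real credential store is an assumption here.
struct Account {
    email: String,
    refresh_token: String,
    rate_limited_until: Option<SystemTime>,
}

// Pick the next account whose rate-limit window has already expired.
fn next_available(pool: &[Account]) -> Option<&Account> {
    let now = SystemTime::now();
    pool.iter()
        .find(|a| a.rate_limited_until.map_or(true, |t| t <= now))
}

// When every account is limited, report how long until the earliest reset.
fn earliest_reset(pool: &[Account]) -> Option<Duration> {
    let now = SystemTime::now();
    pool.iter()
        .filter_map(|a| a.rate_limited_until)
        .filter_map(|t| t.duration_since(now).ok())
        .min()
}
```

When the active account reports a rate limit, the caller would refresh with the `refresh_token` returned by `next_available` and retry, or surface `earliest_reset` in chat when the whole pool is exhausted.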
name: "Rename project from \"huskies\" to \"huskies\""
---
# Story 455: Rename project from "storkit" to "huskies"
## User Story
As a project maintainer, I want to rename the project from "storkit" to "huskies" so that the product has its new identity throughout the codebase, tooling, and documentation. The new domain is huskies.dev; update all references to huskies.dev accordingly (website, contact email hello@huskies.dev, etc.).
## Acceptance Criteria
- [ ] Rust crate name in server/Cargo.toml changed from 'storkit' to 'huskies'
- [ ] Binary name changed to 'huskies' (Dockerfile CMD, release script binary names)
- [ ] Docker service name, container_name, image name, and volume names updated in docker-compose.yml
- [ ] Docker user/group renamed from 'storkit' to 'huskies' in Dockerfile (groupadd, useradd, home dir /home/huskies/.claude)
- [ ] MCP server registration renamed from 'storkit' to 'huskies' in scaffold-generated .mcp.json and in server/src/http/mcp/mod.rs serverInfo name
- [ ] All 35+ MCP tool permission patterns updated from mcp__storkit__* to mcp__huskies__* across code and permission configs
- [ ] The .storkit/ project directory marker renamed to .huskies/ throughout all Rust source (paths.rs, config.rs, scaffold.rs, watcher.rs, prompts.rs, and all agent/pipeline code)
- [ ] Release script updated: Gitea repo path dave/storkit → dave/huskies, changelog regex updated to match ^(huskies|storkit|story-kit): for backwards-compatible history parsing, binary artifact names updated
- [ ] Git commit prefix convention updated from 'storkit:' to 'huskies:' in the README and agent prompts
- [ ] Website updated: page title, headings, and contact email (hello@huskies.dev) if domain changes
- [ ] README.md updated: all CLI examples use 'huskies' binary name, all .storkit/ references become .huskies/
- [ ] A migration path exists for existing installs: either huskies auto-detects and migrates .storkit/ → .huskies/, or a migration script (script/migrate) is provided (see the sketch after this list)
- [ ] All Claude Code .mcp.json files in existing worktrees are regenerated via scaffold or migration
- [ ] Gitea repository renamed from dave/storkit to dave/huskies (external action required, noted in story)
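A minimal sketch of the auto-migration option named above (the marker directory names come from this story; the function and log line are assumptions):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch: if only the legacy marker directory exists, rename it in place
// before the server reads any config, so existing installs keep working.
fn migrate_project_dir(root: &Path) -> io::Result<()> {
    let legacy = root.join(".storkit");
    let current = root.join(".huskies");
    if legacy.is_dir() && !current.exists() {
        fs::rename(&legacy, &current)?;
        eprintln!("[migrate] renamed {} -> {}", legacy.display(), current.display());
    }
    Ok(())
}
```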
name: "Stage transition notifications can arrive out of order and show wrong story name"
agent: coder-opus
---
# Bug 462: Stage transition notifications can arrive out of order and show wrong story name
## Description
When a story moves through stages quickly (e.g. QA → Merge → Done), the stage transition notifications can arrive out of order in Matrix chat. The Done notification appears before the Merge notification.
Three issues:
1. **Out-of-order delivery**: When two notifications are sent close together, the Matrix homeserver can deliver them in the wrong order. The notification handler processes events sequentially and awaits each send, but the homeserver does not guarantee ordering for near-simultaneous messages.
2. **Missing story name on stale notifications**: The second notification shows the raw item_id instead of the story name because `read_story_name` looks in the stage directory from the event (e.g. `4_merge/`), but the file has already moved to the next stage (e.g. `5_done/`).
3. **`inferred_from_stage` guesses the source stage** instead of tracking the actual from-stage. This means skipped stages would show incorrect transitions.
## How to Reproduce
1. Have a story in QA that passes quickly
2. Story moves QA → Merge → Done in rapid succession
3. Observe notifications in Matrix
## Actual Result
Notifications arrive in wrong order (Done before Merge). The later notification shows the raw item_id instead of the story name.
## Expected Result
Notifications arrive in chronological order. All notifications show the story name. Ideally, rapid transitions are coalesced into a single notification for the final stage.
## Acceptance Criteria
- [ ] read_story_name falls back to searching all stages when the expected stage directory has no match (see the sketch after this list)
- [ ] Consider deduplicating rapid transitions within a short window (e.g. only notify for the final stage)
- [ ] Track actual from-stage in WatcherEvent instead of guessing via inferred_from_stage
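A hedged sketch of the fallback lookup; the stage directory names and front-matter parsing are assumptions about the layout, not the existing helpers:

```rust
use std::fs;
use std::path::Path;

// Sketch: look in the stage named by the event first, then fall back to every
// stage, so an item that has already moved on (e.g. 4_merge -> 5_done) still
// resolves to its story name instead of the raw item_id.
fn read_story_name(work_dir: &Path, event_stage: &str, item_id: &str) -> Option<String> {
    let all_stages = ["1_backlog", "2_current", "3_qa", "4_merge", "5_done"];
    std::iter::once(event_stage)
        .chain(all_stages.iter().copied())
        .filter_map(|stage| fs::read_dir(work_dir.join(stage)).ok())
        .flatten()
        .flatten()
        .find(|entry| entry.file_name().to_string_lossy().starts_with(item_id))
        .and_then(|entry| fs::read_to_string(entry.path()).ok())
        .and_then(|text| front_matter_name(&text))
}

// Hypothetical helper: pull the `name:` field out of the front matter.
fn front_matter_name(text: &str) -> Option<String> {
    text.lines()
        .find_map(|line| line.strip_prefix("name:"))
        .map(|value| value.trim().trim_matches('"').to_string())
}
```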
name: "Timer rejects backlog stories — should move to current on fire"
---
# Bug 464: Timer rejects backlog stories — should move to current on fire
## Description
The `timer` bot command requires stories to be in `work/2_current/` before scheduling. When a user tries to schedule a backlog story (e.g. `timer 463 12:45`), it returns:
"Story **463_story_...** is not in `work/2_current/`. Move it to current before scheduling a timer."
The timer should accept backlog stories. When the timer fires, it should move the story from backlog to current and let auto-assign start an agent.
## How to Reproduce
1. Have a story in backlog (e.g. 463)
2. Run `timer 463 12:45`
3. Observe rejection message
## Actual Result
Timer command rejects stories not in `work/2_current/`.
## Expected Result
Timer command accepts backlog stories. When the timer fires, it moves the story to current and auto-assign picks it up.
## Acceptance Criteria
- [ ] Timer bot command accepts stories in backlog or current
- [ ] Timer tick loop calls move_story_to_current before start_agent for backlog stories (see the sketch after this list)
- [ ] Unit tests cover scheduling and firing for backlog stories
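A hedged sketch of the fire path these criteria describe; only the ordering (move before start) reflects the fix, the types and placeholder bodies are illustrative:

```rust
// Illustrative timer entry; the real store lives in .huskies/timers.json.
struct TimerEntry {
    item_id: String,
    in_backlog: bool, // true if the story was scheduled from work/1_backlog/
}

fn fire_timer(entry: &TimerEntry) -> Result<(), String> {
    if entry.in_backlog {
        move_story_to_current(&entry.item_id)?; // backlog -> current first
    }
    start_agent(&entry.item_id) // then auto-assign / the agent picks it up
}

// Stand-ins so the sketch is self-contained.
fn move_story_to_current(_item_id: &str) -> Result<(), String> {
    Ok(())
}

fn start_agent(_item_id: &str) -> Result<(), String> {
    Ok(())
}
```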
# Bug 465: Timer tick loop never fires due entries
## Description
The timer tick loop (`spawn_timer_tick_loop`) is spawned by the Matrix bot runner and the Matrix bot is confirmed running (it is processing messages), but timers never fire. Past-due entries remain in `.huskies/timers.json` indefinitely; `take_due` never consumes them.
The tick loop uses `tokio::spawn`, which swallows panics silently. If `move_story_to_current` or `start_agent` panics on the first tick (when all past-due entries fire at once), the entire task dies with no log output. The PTY debug spam may also push any `[timer]` log entries out of the ring buffer.
The bot command successfully adds entries to the in-memory store and persists them to disk, but the tick loop never processes them.
## How to Reproduce
1. Set a timer via bot command: `timer 463 HH:MM` (a time in the near future)
2. Wait past the scheduled time
3. Check `.huskies/timers.json` — entries are still present
4. Check server logs for `[timer]` — no entries found
## Actual Result
Timer entries remain in timers.json indefinitely. No `[timer] Timer fired` log entries appear. The story is never moved to current and no agent is started.
## Expected Result
Within 30 seconds of the scheduled time, the tick loop should call `take_due`, remove the entry from disk, move the story to current (if in backlog), and start an agent.
## Acceptance Criteria
- [ ] Add panic-catching (catch_unwind or tokio CancellationToken) to the tick loop so failures are logged (see the sketch after this list)
- [ ] Add a startup log line confirming the tick loop is running and how many pending timers were loaded
- [ ] Verify take_due runs on each 30-second tick by adding periodic debug logging
- [ ] Integration test: create a past-due timer entry, run the tick loop, assert the entry is consumed
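A hedged sketch of a panic-isolated tick loop with the requested logging; it assumes the `futures` and `tokio` crates are available and uses placeholder store/fire helpers:

```rust
use std::panic::AssertUnwindSafe;
use std::time::Duration;

use futures::FutureExt; // assumption: the futures crate is a dependency

// Placeholder types so the sketch stands alone; the real take_due,
// move_story_to_current, and start_agent logic lives elsewhere.
struct TimerStore;
impl TimerStore {
    fn len(&self) -> usize { 0 }
    fn take_due(&self) -> Vec<String> { Vec::new() }
}
async fn fire_timer(_item_id: &str) -> Result<(), String> { Ok(()) }

// Sketch: log on startup and on every tick, and isolate panics per entry so
// one bad story cannot silently kill the whole spawned task.
async fn timer_tick_loop(store: TimerStore) {
    eprintln!("[timer] tick loop started, {} pending timer(s) loaded", store.len());
    loop {
        tokio::time::sleep(Duration::from_secs(30)).await;
        let due = store.take_due();
        eprintln!("[timer] tick: {} due entries", due.len());
        for item_id in due {
            match AssertUnwindSafe(fire_timer(&item_id)).catch_unwind().await {
                Ok(Ok(())) => eprintln!("[timer] Timer fired for {item_id}"),
                Ok(Err(e)) => eprintln!("[timer] firing {item_id} failed: {e}"),
                Err(_) => eprintln!("[timer] firing {item_id} panicked"),
            }
        }
    }
}
```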
name: "Configurable timezone in project.toml for timer scheduling"
---
# Story 466: Configurable timezone in project.toml for timer scheduling
## User Story
As a user running huskies in a container where TZ defaults to UTC, I want to configure my project's timezone in project.toml so that timer HH:MM inputs are interpreted in my actual timezone.
## Acceptance Criteria
- [ ] Add a `timezone` field to project.toml (e.g. `timezone = "Europe/London"`)
- [ ] next_occurrence_of_hhmm uses the configured timezone instead of chrono::Local (see the sketch after this list)
- [ ] Falls back to chrono::Local if no timezone is configured
- [ ] Timer confirmation message displays the time in the configured timezone
- [ ] timer list command shows times in the configured timezone
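A hedged sketch of the timezone-aware scheduling, assuming the `chrono-tz` crate for IANA zone lookup; the function name matches the criteria above, everything else is illustrative:

```rust
use chrono::{DateTime, Duration, NaiveTime, TimeZone, Utc};
use chrono_tz::Tz; // assumption: chrono-tz provides the zone type

// Sketch: interpret "HH:MM" in the project's configured timezone and roll
// over to tomorrow when that wall-clock time has already passed today.
fn next_occurrence_of_hhmm(tz: Tz, hhmm: &str) -> Option<DateTime<Utc>> {
    let time = NaiveTime::parse_from_str(hhmm, "%H:%M").ok()?;
    let now = Utc::now().with_timezone(&tz);
    let mut local = now.date_naive().and_time(time);
    if tz.from_local_datetime(&local).single()? <= now {
        local = local + Duration::days(1);
    }
    tz.from_local_datetime(&local)
        .single()
        .map(|dt| dt.with_timezone(&Utc))
}
```

A `timezone = "Europe/London"` value from project.toml would parse to a `Tz` via `str::parse`, with `chrono::Local` used when the field is absent.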
@@ -10,7 +10,7 @@ The `prompt_permission` MCP tool returns plain text ("Permission granted for '..
## How to Reproduce
-1. Start the storkit server and open the web UI
+1. Start the huskies server and open the web UI
2. Chat with the claude-code-pty model
3. Ask it to do something that requires a tool NOT in `.claude/settings.json` allow list (e.g. `wc -l /etc/hosts`, or WebFetch to a non-allowed domain)
@@ -6,7 +6,7 @@ name: "Retry limit for mergemaster and pipeline restarts"
## User Story
-As a developer using storkit, I want pipeline auto-restarts to have a configurable retry limit so that failing agents don't loop infinitely consuming CPU and API credits.
+As a developer using huskies, I want pipeline auto-restarts to have a configurable retry limit so that failing agents don't loop infinitely consuming CPU and API credits.