diff --git a/.story_kit/README.md b/.story_kit/README.md
index 61e8eb6..123f346 100644
--- a/.story_kit/README.md
+++ b/.story_kit/README.md
@@ -5,6 +5,19 @@
 
 ---
 
+## 0. First Steps (For New LLM Sessions)
+
+When you start a new session with this project:
+
+1. **Check for MCP Tools:** Read `.mcp.json` to discover available programmatic tools. If it exists, you have direct access to workflow automation tools (create stories, spawn agents, record tests, etc.) via the MCP server.
+2. **Read Context:** Check `.story_kit/specs/00_CONTEXT.md` for high-level project goals.
+3. **Read Stack:** Check `.story_kit/specs/tech/STACK.md` for technical constraints and patterns.
+4. **Check Stories:** Look at `.story_kit/stories/upcoming/` and `.story_kit/stories/current/` to see what work is pending.
+
+**Why This Matters:** The presence of `.mcp.json` means you have programmatic access to tools like `create_story`, `start_agent`, `list_agents`, `get_agent_output`, and more. These tools let you drive the workflow and spawn subsidiary agents directly, without manual file edits.
+
+---
+
 ## 1. The Philosophy
 
 We treat the codebase as the implementation of a **"Living Specification."** driven by **User Stories**
@@ -19,7 +32,9 @@ Instead of ephemeral chat prompts ("Fix this", "Add that"), we work through pers
 
 ## 1.5 MCP Tools
 
-Agents have programmatic access to the workflow via MCP tools served at `POST /mcp`. The project `.mcp.json` registers this endpoint automatically so Claude Code sessions and spawned agents can call tools like `create_story`, `validate_stories`, `list_upcoming`, `get_story_todos`, `record_tests`, and `ensure_acceptance` without parsing English instructions.
+Agents have programmatic access to the workflow via MCP tools served at `POST /mcp`. The project `.mcp.json` registers this endpoint automatically so Claude Code sessions and spawned agents can call tools like `create_story`, `validate_stories`, `list_upcoming`, `get_story_todos`, `record_tests`, `ensure_acceptance`, `start_agent`, `stop_agent`, `list_agents`, and `get_agent_output` without parsing English instructions.
+
+**To discover what tools are available:** Check `.mcp.json` for the server endpoint, then use the MCP protocol to list the tools it serves.
 
 ---
@@ -29,7 +44,8 @@ When initializing a new project under this workflow, create the following struct
 
 ```text
 project_root/
-    .story_kit
+    .mcp.json            # MCP server configuration (if MCP tools are available)
+    .story_kit/
     ├── README.md        # This document
     ├── stories/         # Story workflow (upcoming/current/archived).
     ├── specs/           # Minimal guardrails (context + stack).
@@ -51,7 +67,7 @@ When the user asks for a feature, follow this 4-step loop strictly:
 ### Step 1: The Story (Ingest)
 
 * **User Input:** "I want the robot to dance."
-* **Action:** Create a story via `POST /api/workflow/stories/create` (preferred for agents — guarantees correct front matter and auto-assigns the story number). Alternatively, create a file manually in `stories/upcoming/` (e.g., `stories/upcoming/XX_robot_dance.md`).
+* **Action:** Create a story via the MCP tool `create_story` (preferred — guarantees correct front matter and auto-assigns the story number). Alternatively, create a file manually in `stories/upcoming/` (e.g., `stories/upcoming/XX_robot_dance.md`).
 * **Front Matter (Required):** Every story file MUST begin with YAML front matter containing `name` and `test_plan` fields:
 ```yaml
 ---
@@ -169,8 +185,8 @@ Not everything needs a story or bug fix. Spikes are time-boxed investigations to
 When the LLM context window fills up (or the chat gets slow/confused):
 1. **Stop Coding.**
 2. **Instruction:** Tell the user to open a new chat.
-3. **Handoff:** The only context the new LLM needs is in the `specs/` folder.
-   * *Prompt for New Session:* "I am working on Project X. Read `specs/00_CONTEXT.md` and `specs/tech/STACK.md`. Then look at `stories/` to see what is pending."
+3. **Handoff:** The only context the new LLM needs is in the `specs/` folder and `.mcp.json`.
+   * *Prompt for New Session:* "I am working on Project X. Read `.mcp.json` to discover available tools, then read `specs/00_CONTEXT.md` and `specs/tech/STACK.md`. Then look at `stories/` to see what is pending."
 
 ---
 
@@ -179,12 +195,13 @@ When the LLM context window fills up (or the chat gets slow/confused):
 
 If a user hands you this document and says "Apply this process to my project":
 
-1. **Analyze the Request:** Ask for the high-level goal ("What are we building?") and the tech preferences ("Rust or Python?").
-2. **Git Check:** Check if the directory is a git repository (`git status`). If not, run `git init`.
-3. **Scaffold:** Run commands to create the `specs/` and `stories/` folders.
-4. **Draft Context:** Write `specs/00_CONTEXT.md` based on the user's answer.
-5. **Draft Stack:** Write `specs/tech/STACK.md` based on best practices for that language.
-6. **Wait:** Ask the user for "Story #1".
+1. **Check for MCP Tools:** Look for `.mcp.json` in the project root. If it exists, you have programmatic access to workflow tools and agent spawning capabilities.
+2. **Analyze the Request:** Ask for the high-level goal ("What are we building?") and the tech preferences ("Rust or Python?").
+3. **Git Check:** Check if the directory is a git repository (`git status`). If not, run `git init`.
+4. **Scaffold:** Run commands to create the `specs/` and `stories/` folders.
+5. **Draft Context:** Write `specs/00_CONTEXT.md` based on the user's answer.
+6. **Draft Stack:** Write `specs/tech/STACK.md` based on best practices for that language.
+7. **Wait:** Ask the user for "Story #1".
 
 ---
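Review note: the diff repeatedly tells new sessions to "read `.mcp.json`" but never shows what that file contains. As a minimal sketch, a Claude Code-style `.mcp.json` registering the `POST /mcp` endpoint the README mentions might look like the following. The server name `story-kit` and the `localhost:8000` URL are hypothetical placeholders, not values from this repo:

```json
{
  "mcpServers": {
    "story-kit": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```

Each key under `mcpServers` names one server; Claude Code reads this file from the project root and connects automatically, which is what lets spawned agents call `create_story` and friends without extra setup.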
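Review note: the new "To discover what tools are available" line says to "use the MCP protocol to list available tools" without showing the wire format. MCP is JSON-RPC 2.0, and tool discovery uses the `tools/list` method; a hedged sketch of the request body an agent would POST to the endpoint from `.mcp.json` is:

```python
import json

# JSON-RPC 2.0 envelope for the MCP "tools/list" method.
# The id is arbitrary; the server echoes it back in its response.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

body = json.dumps(request)
print(body)
```

Note that most MCP servers expect an `initialize` handshake before `tools/list`; in practice an MCP client library (or Claude Code itself) handles that exchange, so this payload is illustrative rather than a complete client.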