Noting existence of mcp server

This commit is contained in:
Dave
2026-02-19 19:46:48 +00:00
parent 6ea44fb5c4
commit 94d05c905b


@@ -5,6 +5,19 @@
---
## 0. First Steps (For New LLM Sessions)
When you start a new session with this project:
1. **Check for MCP Tools:** Read `.mcp.json` to discover available programmatic tools. If it exists, you have direct access to workflow automation tools (create stories, spawn agents, record tests, etc.) via the MCP server.
2. **Read Context:** Check `.story_kit/specs/00_CONTEXT.md` for high-level project goals.
3. **Read Stack:** Check `.story_kit/specs/tech/STACK.md` for technical constraints and patterns.
4. **Check Stories:** Look at `.story_kit/stories/upcoming/` and `.story_kit/stories/current/` to see what work is pending.
**Why This Matters:** The `.mcp.json` file indicates that you have programmatic access to tools like `create_story`, `start_agent`, `list_agents`, `get_agent_output`, and more. These tools allow you to directly manipulate the workflow and spawn subsidiary agents without manual file manipulation.
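For orientation, a `.mcp.json` registering such a server might look like the sketch below. The server name, `type`, and URL here are assumptions for illustration, not taken from this project — read the actual file rather than copying this:

```json
{
  "mcpServers": {
    "story_kit": {
      "type": "http",
      "url": "http://localhost:8000/mcp"
    }
  }
}
```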
---
## 1. The Philosophy
We treat the codebase as the implementation of a **"Living Specification"** driven by **User Stories**.
@@ -19,7 +32,9 @@ Instead of ephemeral chat prompts ("Fix this", "Add that"), we work through pers
## 1.5 MCP Tools
Agents have programmatic access to the workflow via MCP tools served at `POST /mcp`. The project `.mcp.json` registers this endpoint automatically so Claude Code sessions and spawned agents can call tools like `create_story`, `validate_stories`, `list_upcoming`, `get_story_todos`, `record_tests`, `ensure_acceptance`, `start_agent`, `stop_agent`, `list_agents`, and `get_agent_output` without parsing English instructions.
**To discover what tools are available:** Check `.mcp.json` for the server endpoint, then use the MCP protocol to list available tools.
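The MCP protocol is JSON-RPC 2.0, so tool discovery is a `tools/list` request posted to the endpoint from `.mcp.json`. A minimal sketch (the endpoint URL is an assumption; the JSON-RPC envelope is from the MCP specification):

```python
import json

def build_tools_list_request(request_id=1):
    """Build a JSON-RPC 2.0 request body for the MCP tools/list method."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

# POSTing this payload to the registered endpoint (e.g. POST /mcp)
# should return the server's tool catalogue in the JSON-RPC result.
payload = json.dumps(build_tools_list_request())
print(payload)
```

The response's `result.tools` array lists each tool's name and input schema, which is how an agent learns the exact parameters `create_story` and friends expect.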
---
@@ -29,7 +44,8 @@ When initializing a new project under this workflow, create the following struct
```text
project_root/
├── .mcp.json        # MCP server configuration (if MCP tools are available)
├── .story_kit/
│   ├── README.md    # This document
│   ├── stories/     # Story workflow (upcoming/current/archived).
│   ├── specs/       # Minimal guardrails (context + stack).
@@ -51,7 +67,7 @@ When the user asks for a feature, follow this 4-step loop strictly:
### Step 1: The Story (Ingest)
* **User Input:** "I want the robot to dance."
* **Action:** Create a story via MCP tool `create_story` (preferred — guarantees correct front matter and auto-assigns the story number). Alternatively, create a file manually in `stories/upcoming/` (e.g., `stories/upcoming/XX_robot_dance.md`).
* **Front Matter (Required):** Every story file MUST begin with YAML front matter containing `name` and `test_plan` fields:
```yaml
---
@@ -169,8 +185,8 @@ Not everything needs a story or bug fix. Spikes are time-boxed investigations to
When the LLM context window fills up (or the chat gets slow/confused):
1. **Stop Coding.**
2. **Instruction:** Tell the user to open a new chat.
3. **Handoff:** The only context the new LLM needs is in the `specs/` folder and `.mcp.json`.
* *Prompt for New Session:* "I am working on Project X. Read `.mcp.json` to discover available tools, then read `specs/00_CONTEXT.md` and `specs/tech/STACK.md`. Then look at `stories/` to see what is pending."
---
@@ -179,12 +195,13 @@ When the LLM context window fills up (or the chat gets slow/confused):
If a user hands you this document and says "Apply this process to my project":
1. **Check for MCP Tools:** Look for `.mcp.json` in the project root. If it exists, you have programmatic access to workflow tools and agent spawning capabilities.
2. **Analyze the Request:** Ask for the high-level goal ("What are we building?") and the tech preferences ("Rust or Python?").
3. **Git Check:** Check if the directory is a git repository (`git status`). If not, run `git init`.
4. **Scaffold:** Run commands to create the `specs/` and `stories/` folders.
5. **Draft Context:** Write `specs/00_CONTEXT.md` based on the user's answer.
6. **Draft Stack:** Write `specs/tech/STACK.md` based on best practices for that language.
7. **Wait:** Ask the user for "Story #1".
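The scaffold and git-check steps above can be sketched in Python. The folder names come from this document's layout tree; whether they sit at the project root or under `.story_kit/` is something to confirm against the actual repository:

```python
from pathlib import Path
import subprocess

def scaffold(root=".", init_git=True):
    """Create the specs/ and stories/ layout, then git init if needed.

    A sketch, not a definitive implementation: subfolder names are taken
    from this document's directory tree.
    """
    root = Path(root)
    for d in ("specs/tech", "stories/upcoming", "stories/current", "stories/archived"):
        (root / d).mkdir(parents=True, exist_ok=True)
    # Only initialise git when the directory is not already a repository.
    if init_git and not (root / ".git").exists():
        subprocess.run(["git", "init"], cwd=root, check=True)
```

Usage: `scaffold("/path/to/project")` is idempotent, so re-running it on an existing project is harmless.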
---