# Project Context
## High-Level Goal
To build a standalone **Agentic AI Code Assistant** application as a single Rust binary that serves a Vite/React web UI and exposes a WebSocket API. The assistant will facilitate a test-driven development (TDD) workflow first, with both unit and integration tests providing the primary guardrails for code changes. Once the single-threaded TDD workflow is stable and usable (including compatibility with lower-cost agents), the project will evolve to a multi-agent orchestration model using Git worktrees and supervisory roles to maximize throughput. Unlike a passive chat interface, this assistant acts as an **Agent**, capable of using tools to read the filesystem, execute shell commands, manage git repositories, and modify code directly to implement features.
## Core Features
1. **Chat Interface:** A conversational UI for the user to interact with the AI assistant.
2. **Agentic Tool Bridge:** A robust system mapping LLM "Tool Calls" to native Rust functions.
* **Filesystem:** Read/Write access (scoped to the target project).
* **Search:** High-performance file searching (ripgrep-style) and content retrieval.
* **Shell Integration:** Ability to execute approved commands (e.g., `cargo`, `npm`, `git`) to run tests, linters, and version control.
3. **Workflow Management:** Specialized tools to manage a TDD-first lifecycle:
* Defining test requirements (unit + integration) before code changes.
* Implementing code via red-green-refactor.
* Enforcing test and quality gates before acceptance.
* Scaling later to multi-agent orchestration with Git worktrees and supervisory checks, after the single-threaded process is stable.
4. **LLM Integration:** Connection to an LLM backend that drives reasoning and tool selection.
* **Remote:** Support for major APIs (Anthropic Claude, Google Gemini, OpenAI, etc.).
* **Local:** Support for local inference via Ollama.
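
The Tool Bridge in feature 2 can be pictured as a registry that maps tool names arriving from the LLM to native Rust implementations. A minimal std-only sketch (the `Tool` trait, `EchoTool`, and `ToolRegistry` names are illustrative, not part of the project; a real bridge would pass structured arguments rather than a raw string):

```rust
use std::collections::HashMap;

/// A discrete capability exposed to the Agent (illustrative trait).
trait Tool {
    fn name(&self) -> &'static str;
    fn call(&self, args: &str) -> Result<String, String>;
}

/// Example tool: echoes its arguments back, standing in for real
/// tools like filesystem access or shell execution.
struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn call(&self, args: &str) -> Result<String, String> {
        Ok(args.to_string())
    }
}

/// Maps LLM tool-call names to native Rust implementations.
struct ToolRegistry {
    tools: HashMap<&'static str, Box<dyn Tool>>,
}

impl ToolRegistry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }

    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name(), tool);
    }

    /// Dispatch a tool call by name; unknown names become errors the
    /// Agent can surface back to the LLM.
    fn dispatch(&self, name: &str, args: &str) -> Result<String, String> {
        match self.tools.get(name) {
            Some(tool) => tool.call(args),
            None => Err(format!("unknown tool: {name}")),
        }
    }
}

fn main() {
    let mut registry = ToolRegistry::new();
    registry.register(Box::new(EchoTool));
    println!("{:?}", registry.dispatch("echo", "hello"));
}
```

Keeping dispatch behind a trait object lets each tool live in its own module while the WebSocket layer only ever sees `dispatch(name, args)`.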
## Domain Definition
* **User:** A software engineer using the assistant to build a project.
* **Target Project:** The local software project the user is working on.
* **Agent:** The AI entity that receives prompts and decides which **Tools** to invoke to solve the problem.
* **Tool:** A discrete function exposed to the Agent (e.g., `run_shell_command`, `write_file`, `search_project`).
* **Story:** A unit of work defining a change (e.g., a feature request).
* **Spec:** A persistent documentation artifact defining the current truth of the system.
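
Scoping filesystem access to the Target Project (as required by the Filesystem feature above) means rejecting any requested path that could escape the project root. A conservative std-only sketch (`scoped_path` is a hypothetical helper, and it rejects *all* `..` components rather than resolving them, which is stricter than strictly necessary):

```rust
use std::path::{Component, Path, PathBuf};

/// Resolve `requested` against the Target Project `root`, returning
/// `None` for anything that could escape it: absolute paths, drive
/// prefixes, or `..` components.
fn scoped_path(root: &Path, requested: &str) -> Option<PathBuf> {
    let mut resolved = PathBuf::new();
    for component in Path::new(requested).components() {
        match component {
            Component::Normal(part) => resolved.push(part),
            // `./` adds nothing; skip it.
            Component::CurDir => {}
            // Reject `..`, root (`/`), and Windows prefixes outright.
            _ => return None,
        }
    }
    Some(root.join(resolved))
}

fn main() {
    let root = Path::new("/project");
    println!("{:?}", scoped_path(root, "src/main.rs"));
    println!("{:?}", scoped_path(root, "../etc/passwd"));
}
```

Every filesystem tool (`read`, `write`, `search`) would route through one guard like this so the policy lives in a single place.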
## Glossary
* **SDSW:** Story-Driven Spec Workflow.
* **Web Server Binary:** The Rust binary that serves the Vite/React frontend and exposes the WebSocket API.
* **Living Spec:** The collection of Markdown files in `.story_kit/` that define the project.
* **Tool Call:** A structured request from the LLM to execute a specific native function.
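
Before dispatch, a Tool Call as defined above would typically be deserialized into a typed value. A std-only sketch using the tool names from the Domain Definition (the enum shape is illustrative; a real implementation would derive it with `serde` against the provider's tool-call JSON schema):

```rust
/// Illustrative typed form of a Tool Call, using the tool names from
/// the Domain Definition.
enum ToolCall {
    RunShellCommand { command: String },
    WriteFile { path: String, contents: String },
    SearchProject { query: String },
}

/// Routes a parsed Tool Call to a short action description; in the
/// real assistant this match would invoke the native tool instead.
fn describe(call: &ToolCall) -> String {
    match call {
        ToolCall::RunShellCommand { command } => format!("run: {command}"),
        ToolCall::WriteFile { path, .. } => format!("write: {path}"),
        ToolCall::SearchProject { query } => format!("search: {query}"),
    }
}

fn main() {
    let call = ToolCall::RunShellCommand { command: "cargo test".into() };
    println!("{}", describe(&call));
}
```

An exhaustive `match` on the enum means the compiler flags any tool added to the wire format but not yet wired into the bridge.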