feat: system prompt and persona
24
.living_spec/specs/functional/PERSONA.md
Normal file
@@ -0,0 +1,24 @@
# Functional Spec: Agent Persona & System Prompt

## 1. Role Definition

The Agent acts as a **Senior Software Engineer** embedded within the user's local environment.

## 2. Directives

The System Prompt must enforce the following behaviors:

1. **Tool First:** Do not guess code. Read files first.
2. **Conciseness:** Do not explain "I will now do X". Just do X (call the tool).
3. **Safety:** Never modify files outside the project scope (the backend enforces this, but the LLM should still be told the constraint).
4. **Format:** When writing code, emit the *whole* file, since the current `write_file` tool overwrites its entire target; partial edits only become possible if the tool is upgraded.

## 3. Implementation

* **Location:** `src-tauri/src/llm/prompts.rs`
* **Injection:** The system message is prepended to the `messages` vector in `chat::chat` before sending to the Provider.
* **Deduplication:** Ensure multiple system messages are not stacked if the loop runs long (currently history is reconstructed each turn, so this cannot happen yet).
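The prepend-and-dedup step described under Injection and Deduplication reduces to a small guard. A minimal sketch, in TypeScript for illustration only (the real implementation belongs in Rust under `src-tauri/src/llm/prompts.rs`; `ensureSystemPrompt` and the message shape are assumed names, not from the codebase):

```typescript
// Assumed shape of a chat message as sent to the Provider.
interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

const SYSTEM_PROMPT = "You are a Senior Software Engineer Agent ...";

// Prepend the system message, but never stack a second one if the
// history already starts with one (the deduplication rule above).
function ensureSystemPrompt(messages: ChatMessage[]): ChatMessage[] {
  if (messages.length > 0 && messages[0].role === "system") {
    return messages;
  }
  return [{ role: "system", content: SYSTEM_PROMPT }, ...messages];
}
```

Because history is reconstructed per turn, the guard is currently redundant, but it makes the invariant explicit if the loop is later changed to mutate history in place.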
## 4. The Prompt Text (Draft)

"You are a Senior Software Engineer Agent running in a local Tauri environment.
You have access to the user's filesystem via tools.
- ALWAYS read files before modifying them to understand context.
- When asked to create or edit, use 'write_file'.
- 'write_file' overwrites the ENTIRE content. Do not write partial diffs.
- Be concise. Use tools immediately."
@@ -1,17 +0,0 @@
# Story: UI Polish - Sticky Header & Compact Layout

## User Story

**As a** User
**I want** key controls (Model Selection, Tool Toggle, Project Path) to be visible at all times
**So that** I don't have to scroll up to check my configuration or change settings.

## Acceptance Criteria

* [ ] Frontend: Create a fixed `<Header />` component at the top of the viewport.
* [ ] Frontend: Move "Active Project" display into this header (make it compact/truncated if long).
* [ ] Frontend: Move "Ollama Model" and "Enable Tools" controls into this header.
* [ ] Frontend: Ensure the Chat message list scrolls *under* the header (taking up the remaining height).
* [ ] Frontend: Remove the redundant "Active Project" bar from the main workspace area.

## Out of Scope

* Full visual redesign (just layout fixing).
* Settings modal (keep controls inline for now).
15
.living_spec/stories/08_collapsible_tool_outputs.md
Normal file
@@ -0,0 +1,15 @@
# Story: Collapsible Tool Outputs

## User Story

**As a** User
**I want** tool outputs (like long file contents or search results) to be collapsed by default
**So that** the chat history remains readable and I can focus on the Agent's reasoning.

## Acceptance Criteria

* [ ] Frontend: Render tool outputs inside a `<details>` / `<summary>` component (or custom equivalent).
* [ ] Frontend: Default state should be **Closed/Collapsed**.
* [ ] Frontend: The summary line should show the Tool Name + minimal args (e.g., "▶ read_file(src/main.rs)").
* [ ] Frontend: Clicking the arrow/summary expands to show the full output.

## Out of Scope

* Complex syntax highlighting for tool outputs (plain text/pre is fine).
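The summary line in the criteria above can come from a small pure helper. A sketch, assuming tool calls arrive as a name plus an args object (`toolSummary` and the 40-character truncation limit are made up for illustration):

```typescript
// Build a compact label like "▶ read_file(src/main.rs)" for the
// collapsed <summary> element; long argument values are truncated.
function toolSummary(
  name: string,
  args: Record<string, unknown>,
  maxLen = 40,
): string {
  const rendered = Object.values(args).map(String).join(", ");
  const clipped =
    rendered.length > maxLen ? rendered.slice(0, maxLen) + "…" : rendered;
  return `▶ ${name}(${clipped})`;
}
```

Keeping this as a pure function makes the summary format trivially testable, independent of whichever component renders the `<details>` wrapper.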
18
.living_spec/stories/09_system_prompt_persona.md
Normal file
@@ -0,0 +1,18 @@
# Story: System Prompt & Persona

## User Story

**As a** User
**I want** the Agent to behave like a Senior Engineer and know exactly how to use its tools
**So that** it writes high-quality code and doesn't hallucinate capabilities or refuse to edit files.

## Acceptance Criteria

* [ ] Backend: Define a robust System Prompt constant (likely in `src-tauri/src/llm/prompts.rs`).
* [ ] Content: The prompt should define:
  * Role: "Senior Software Engineer / Agent".
  * Tone: Professional, direct, no fluff.
  * Tool usage instructions: "You have access to the local filesystem. Use `read_file` to inspect context before editing."
  * Workflow: "When asked to implement a feature, read relevant files first, then write."
* [ ] Backend: Inject this system message at the *start* of every `chat` session sent to the Provider.

## Out of Scope

* User-editable system prompts (future story).
15
.living_spec/stories/10_persist_model_selection.md
Normal file
@@ -0,0 +1,15 @@
# Story: Persist Model Selection

## User Story

**As a** User
**I want** the application to remember which LLM model I selected
**So that** I don't have to switch from "llama3" to "deepseek" every time I launch the app.

## Acceptance Criteria

* [ ] Backend/Frontend: Use `tauri-plugin-store` to save the `selected_model` string.
* [ ] Frontend: On mount (after fetching available models), check the store.
* [ ] Frontend: If the stored model exists in the available list, select it.
* [ ] Frontend: When the user changes the dropdown, update the store.

## Out of Scope

* Persisting per-project model settings (global setting is fine for now).
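The restore-on-mount decision in the criteria above is worth isolating from the store plumbing. A sketch of that pure step (`restoreModel` is a hypothetical name, and falling back to the first available model when the stored one is gone is an assumption, not something the story specifies):

```typescript
// Decide which model to select on mount: prefer the persisted choice,
// but only if it is still present in the list reported by the backend.
function restoreModel(
  stored: string | null,
  available: string[],
): string | null {
  if (stored !== null && available.includes(stored)) {
    return stored;
  }
  // Assumed fallback: first available model, or nothing if the list is empty.
  return available.length > 0 ? available[0] : null;
}
```

The component would read `selected_model` from `tauri-plugin-store` after fetching the model list, pass both values through this helper, and write back to the store on every dropdown change.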