feat: system prompt and persona
24  .living_spec/specs/functional/PERSONA.md  Normal file
@@ -0,0 +1,24 @@
# Functional Spec: Agent Persona & System Prompt

## 1. Role Definition

The Agent acts as a **Senior Software Engineer** embedded within the user's local environment.

## 2. Directives

The System Prompt must enforce the following behaviors:

1. **Tool First:** Do not guess code. Read files first.
2. **Conciseness:** Do not explain "I will now do X". Just do X (call the tool).
3. **Safety:** Never modify files outside the scope (the backend enforces this, but the LLM should know it).
4. **Format:** When writing code, write the *whole* file if the tool requires it, or handle partials if we upgrade the tool (currently `write_file` overwrites).

## 3. Implementation

* **Location:** `src-tauri/src/llm/prompts.rs`
* **Injection:** The system message is prepended to the `messages` vector in `chat::chat` before sending to the Provider.
* **Deduplication:** Ensure we don't stack multiple system messages if the loop runs long (though currently we reconstruct history per turn).
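The deduplication rule above can be sketched as follows. This is a minimal sketch, not the actual implementation: `ensure_system_prompt` is an illustrative name, and the `Message`/`Role` shapes mirror the ones that appear in the `chat` diff hunks of this commit.

```rust
#[derive(Clone, Debug, PartialEq)]
enum Role {
    System,
    User,
    Assistant,
    Tool,
}

#[derive(Clone, Debug)]
struct Message {
    role: Role,
    content: String,
    tool_calls: Option<Vec<String>>,
    tool_call_id: Option<String>,
}

const SYSTEM_PROMPT: &str = "You are a Senior Software Engineer Agent.";

/// Prepend the system prompt only when the history doesn't already start
/// with a system message, so repeated calls never stack duplicates.
fn ensure_system_prompt(history: &mut Vec<Message>) {
    let already_present = matches!(history.first(), Some(m) if m.role == Role::System);
    if !already_present {
        history.insert(
            0,
            Message {
                role: Role::System,
                content: SYSTEM_PROMPT.to_string(),
                tool_calls: None,
                tool_call_id: None,
            },
        );
    }
}
```

Calling this once per agent turn is idempotent, which matters if the loop ever stops reconstructing history from scratch.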
## 4. The Prompt Text (Draft)

"You are a Senior Software Engineer Agent running in a local Tauri environment.
You have access to the user's filesystem via tools.
- ALWAYS read files before modifying them to understand context.
- When asked to create or edit, use 'write_file'.
- 'write_file' overwrites the ENTIRE content. Do not write partial diffs.
- Be concise. Use tools immediately."
@@ -1,17 +0,0 @@
# Story: UI Polish - Sticky Header & Compact Layout

## User Story

**As a** User
**I want** key controls (Model Selection, Tool Toggle, Project Path) to be visible at all times
**So that** I don't have to scroll up to check my configuration or change settings.

## Acceptance Criteria

* [ ] Frontend: Create a fixed `<Header />` component at the top of the viewport.
* [ ] Frontend: Move "Active Project" display into this header (make it compact/truncated if long).
* [ ] Frontend: Move "Ollama Model" and "Enable Tools" controls into this header.
* [ ] Frontend: Ensure the Chat message list scrolls *under* the header (taking up remaining height).
* [ ] Frontend: Remove the redundant "Active Project" bar from the main workspace area.

## Out of Scope

* Full visual redesign (just layout fixing).
* Settings modal (keep controls inline for now).
15  .living_spec/stories/08_collapsible_tool_outputs.md  Normal file
@@ -0,0 +1,15 @@
# Story: Collapsible Tool Outputs

## User Story

**As a** User
**I want** tool outputs (like long file contents or search results) to be collapsed by default
**So that** the chat history remains readable and I can focus on the Agent's reasoning.

## Acceptance Criteria

* [ ] Frontend: Render tool outputs inside a `<details>` / `<summary>` component (or custom equivalent).
* [ ] Frontend: Default state should be **Closed/Collapsed**.
* [ ] Frontend: The summary line should show the Tool Name + minimal args (e.g., "▶ read_file(src/main.rs)").
* [ ] Frontend: Clicking the arrow/summary expands to show the full output.

## Out of Scope

* Complex syntax highlighting for tool outputs (plain text/pre is fine).
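The summary-line formatting described in the story above could be sketched like this. It is only an illustration, not the actual frontend code: `tool_summary` and `MAX_ARGS` are hypothetical names, and the truncation length is an arbitrary choice.

```rust
/// Builds the collapsed-row label for a tool call, e.g.
/// "▶ read_file(src/main.rs)", truncating long argument strings
/// so the summary line stays compact.
fn tool_summary(tool_name: &str, args: &str) -> String {
    const MAX_ARGS: usize = 40; // arbitrary cap for illustration

    if args.chars().count() > MAX_ARGS {
        let truncated: String = args.chars().take(MAX_ARGS).collect();
        format!("▶ {tool_name}({truncated}…)")
    } else {
        format!("▶ {tool_name}({args})")
    }
}
```

Counting `chars()` rather than bytes avoids slicing a multi-byte UTF-8 character in half when arguments contain non-ASCII paths.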
18  .living_spec/stories/09_system_prompt_persona.md  Normal file
@@ -0,0 +1,18 @@
# Story: System Prompt & Persona

## User Story

**As a** User
**I want** the Agent to behave like a Senior Engineer and know exactly how to use its tools
**So that** it writes high-quality code and doesn't hallucinate capabilities or refuse to edit files.

## Acceptance Criteria

* [ ] Backend: Define a robust System Prompt constant (likely in `src-tauri/src/llm/prompts.rs`).
* [ ] Content: The prompt should define:
  * Role: "Senior Software Engineer / Agent".
  * Tone: Professional, direct, no fluff.
  * Tool usage instructions: "You have access to the local filesystem. Use `read_file` to inspect context before editing."
  * Workflow: "When asked to implement a feature, read relevant files first, then write."
* [ ] Backend: Inject this system message at the *start* of every `chat` session sent to the Provider.

## Out of Scope

* User-editable system prompts (future story).
15  .living_spec/stories/10_persist_model_selection.md  Normal file
@@ -0,0 +1,15 @@
# Story: Persist Model Selection

## User Story

**As a** User
**I want** the application to remember which LLM model I selected
**So that** I don't have to switch from "llama3" to "deepseek" every time I launch the app.

## Acceptance Criteria

* [ ] Backend/Frontend: Use `tauri-plugin-store` to save the `selected_model` string.
* [ ] Frontend: On mount (after fetching available models), check the store.
* [ ] Frontend: If the stored model exists in the available list, select it.
* [ ] Frontend: When the user changes the dropdown, update the store.

## Out of Scope

* Persisting per-project model settings (a global setting is fine for now).
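The restore logic in the acceptance criteria above can be sketched as follows. This is a minimal stand-in using a plain file on disk rather than `tauri-plugin-store` (whose API is not shown in this commit); all function names here are illustrative, not the plugin's API.

```rust
use std::fs;
use std::path::Path;

// Persist the selected model as a plain string on disk
// (stand-in for the tauri-plugin-store key "selected_model").
fn save_selected_model(path: &Path, model: &str) -> std::io::Result<()> {
    fs::write(path, model)
}

// Load the stored model, treating a missing file or blank content as "nothing stored".
fn load_selected_model(path: &Path) -> Option<String> {
    fs::read_to_string(path)
        .ok()
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
}

/// The acceptance-criteria rule: honor the stored model only if it is
/// still present in the list of available models fetched on mount.
fn restore_selection(stored: Option<String>, available: &[String]) -> Option<String> {
    stored.filter(|m| available.contains(m))
}
```

The `restore_selection` check matters because the user may have removed a model from Ollama since the last session; falling back to `None` lets the frontend pick its default instead of selecting a model that no longer exists.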
@@ -1,5 +1,6 @@
 use crate::commands::{fs, search, shell};
 use crate::llm::ollama::OllamaProvider;
+use crate::llm::prompts::SYSTEM_PROMPT;
 use crate::llm::types::{
     Message, ModelProvider, Role, ToolCall, ToolDefinition, ToolFunctionDefinition,
 };
@@ -51,6 +52,18 @@ pub async fn chat(
 
     // 3. Agent Loop
     let mut current_history = messages.clone();
 
+    // Inject System Prompt
+    current_history.insert(
+        0,
+        Message {
+            role: Role::System,
+            content: SYSTEM_PROMPT.to_string(),
+            tool_calls: None,
+            tool_call_id: None,
+        },
+    );
+
     let mut new_messages: Vec<Message> = Vec::new();
     let mut turn_count = 0;
@@ -78,7 +91,8 @@ pub async fn chat(
 
         current_history.push(assistant_msg.clone());
         new_messages.push(assistant_msg);
-        app.emit("chat:update", &current_history)
+        // Emit history excluding system prompt (index 0)
+        app.emit("chat:update", &current_history[1..])
             .map_err(|e| e.to_string())?;
 
         // Execute Tools
@@ -96,7 +110,8 @@ pub async fn chat(
 
             current_history.push(tool_msg.clone());
             new_messages.push(tool_msg);
-            app.emit("chat:update", &current_history)
+            // Emit history excluding system prompt (index 0)
+            app.emit("chat:update", &current_history[1..])
                .map_err(|e| e.to_string())?;
         }
     } else {
@@ -111,7 +126,8 @@ pub async fn chat(
         // We don't need to push to current_history for the next loop, because we are done.
         new_messages.push(assistant_msg.clone());
         current_history.push(assistant_msg);
-        app.emit("chat:update", &current_history)
+        // Emit history excluding system prompt (index 0)
+        app.emit("chat:update", &current_history[1..])
             .map_err(|e| e.to_string())?;
         break;
     }
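The `&current_history[1..]` slices in the hunks above assume the injected system message always sits at index 0. A hedged sketch of that slicing, including the empty-vector edge case that plain `[1..]` indexing would panic on (`visible_history` is an illustrative name, not a function from this commit):

```rust
/// Everything after the leading system message at index 0.
/// Unlike `&history[1..]`, `get(1..)` returns None instead of
/// panicking when the history is empty, so we can fall back to &[].
fn visible_history(history: &[String]) -> &[String] {
    history.get(1..).unwrap_or(&[])
}
```

In the commit itself the history is never empty at the emit sites (the user's messages were just cloned into it), so the direct `[1..]` index is safe there; the non-panicking form only matters if that invariant ever changes.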
@@ -1,2 +1,3 @@
 pub mod ollama;
+pub mod prompts;
 pub mod types;
17  src-tauri/src/llm/prompts.rs  Normal file
@@ -0,0 +1,17 @@
pub const SYSTEM_PROMPT: &str = r#"You are an expert Senior Software Engineer and AI Agent running directly in the user's local development environment.

Your Capabilities:
1. **Filesystem Access:** You can read, write, and list files in the current project using the provided tools.
2. **Shell Execution:** You can run commands like `git`, `cargo`, `npm`, `ls`, etc.
3. **Search:** You can search the codebase for patterns.

Your Operational Rules:
1. **Process Awareness:** You MUST read `.living_spec/README.md` to understand the development process (Story-Driven Spec Workflow).
2. **Read Before Write:** ALWAYS read the relevant files before you propose or apply changes. Do not guess the file content.
3. **Overwrite Warning:** The `write_file` tool OVERWRITES the entire file. When you edit a file, you must output the COMPLETE content of the file, including all imports and unchanged parts. Do not output partial diffs or placeholders like `// ... rest of code`.
4. **Conciseness:** Be direct. Do not waffle. If you need to run a tool, just run it. You don't need to say "I will now run...".
5. **Verification:** After writing code, it is good practice to run a quick check (e.g., `cargo check` or `npm test`) if applicable to verify your changes.

Your Goal:
Complete the user's request accurately and safely. If the request is ambiguous, ask for clarification.
"#;