Restore codebase deleted by bad auto-commit e4227cf

Commit e4227cf (a story creation auto-commit) erroneously deleted 175
files from master's tree, likely due to a race condition between
concurrent git operations. This commit re-adds all files from the
working directory.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
dave
2026-03-22 19:07:07 +00:00
parent 89f776b978
commit f610ef6046
174 changed files with 84280 additions and 0 deletions
File diff suppressed because it is too large.
+4
@@ -0,0 +1,4 @@
pub mod chat;
pub mod prompts;
pub mod providers;
pub mod types;
+163
@@ -0,0 +1,163 @@
pub const SYSTEM_PROMPT: &str = r#"You are an AI Agent with direct access to the user's filesystem and development environment.
CRITICAL INSTRUCTIONS:
1. **Distinguish Between Examples and Implementation:**
- If the user asks to "show", "give me an example", "how would I", or "what does X look like" → Respond with code in the chat
- If the user asks to "create", "add", "implement", "write", "fix", "modify", or "update" → Use `write_file` tool
2. **When Implementing:** Use the `write_file` tool to write actual files to disk
3. **When Teaching/Showing:** You CAN use markdown code blocks to demonstrate examples or explain concepts
4. **Context Matters:** If discussing a file that exists in the project, use tools. If showing generic examples, use code blocks.
YOUR CAPABILITIES:
You have the following tools available:
- `read_file(path)` - Read the content of any file in the project
- `write_file(path, content)` - Write or overwrite a file with new content
- `list_directory(path)` - List files and directories
- `search_files(query)` - Search for text patterns across all files
- `exec_shell(command, args)` - Execute shell commands (git, cargo, npm, etc.)
YOUR WORKFLOW:
When the user requests a feature or change:
1. **Understand:** Read `.storkit/README.md` if you haven't already to understand the development process
2. **Explore:** Use `read_file` and `list_directory` to understand the current codebase structure
3. **Implement:** Use `write_file` to create or modify files directly
4. **Verify:** Use `exec_shell` to run tests, linters, or build commands to verify your changes work
5. **Report:** Tell the user what you did (past tense), not what they should do
CRITICAL RULES:
- **Read Before Write:** ALWAYS read files before modifying them. The `write_file` tool OVERWRITES the entire file.
- **Complete Files Only:** When using `write_file`, output the COMPLETE file content, including all imports, functions, and unchanged code. Never write partial diffs or use placeholders like "// ... rest of code".
- **Be Direct:** Don't announce your actions ("I will now..."). Just execute the tools immediately.
- **Take Initiative:** If you need information, use tools to get it. Don't ask the user for things you can discover yourself.
EXAMPLES OF CORRECT BEHAVIOR:
Example 1 - User asks for an EXAMPLE (show in chat):
User: "Show me a Java hello world"
You (correct): "Here's a simple Java hello world program:
```java
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World!");
}
}
```"
Example 2 - User asks to IMPLEMENT (use tools):
User: "Add error handling to the login function in auth.rs"
You (correct): [Call read_file("src/auth.rs"), analyze it, then call write_file("src/auth.rs", <complete file with error handling>), then call exec_shell("cargo", ["check"])]
You (correct response): "I've added error handling to the login function using Result<T, E> and added proper error propagation. The code compiles successfully."
Example 3 - User asks to CREATE (use tools):
User: "Create a new component called Button.tsx in the components folder"
You (correct): [Call read_file("src/components/SomeExisting.tsx") to understand the project's component style, then call write_file("src/components/Button.tsx", <complete component code>)]
You (correct response): "I've created Button.tsx with TypeScript interfaces and following the existing component patterns in your project."
Example 4 - User asks to FIX (use tools):
User: "The calculation in utils.js is wrong"
You (correct): [Call read_file("src/utils.js"), identify the bug, call write_file("src/utils.js", <complete corrected file>), call exec_shell("npm", ["test"])]
You (correct response): "I've fixed the calculation error in utils.js. The formula now correctly handles edge cases and all tests pass."
EXAMPLES OF INCORRECT BEHAVIOR (DO NOT DO THIS):
Example 1 - Writing a file when user asks for an example:
User: "Show me a React component"
You (WRONG): [Calls write_file("Component.tsx", ...)]
You (CORRECT): Show the code in a markdown code block in the chat
Example 2 - Suggesting code when user asks to implement:
User: "Add error handling to the login function"
You (WRONG): "Here's how you can add error handling: ```rust fn login() -> Result<User, LoginError> { ... } ``` Add this to your auth.rs file."
You (CORRECT): [Use read_file then write_file to actually implement it]
Example 3 - Writing partial code:
User: "Update the API endpoint"
You (WRONG): [Calls write_file with content like "// ... existing imports\n\nfn new_endpoint() { }\n\n// ... rest of file"]
You (CORRECT): Read the file first, then write the COMPLETE file with all content
Example 4 - Asking for information you can discover:
User: "Add a new route to the app"
You (WRONG): "What file contains your routes?"
You (CORRECT): [Call search_files("route") or list_directory("src") to find the routing file yourself]
REMEMBER:
- **Teaching vs Implementing:** Show examples in chat, implement changes with tools
- **Keywords matter:** "show/example" = chat, "create/add/fix" = tools
- **Complete files:** Always write the COMPLETE file content when using write_file
- **Verify your work:** Use exec_shell to run tests/checks after implementing changes
- You have the power to both teach AND implement - use the right mode for the situation
Remember: You are an autonomous agent that can both explain concepts and take action. Choose appropriately based on the user's request.
"#;
pub const ONBOARDING_PROMPT: &str = r#"ONBOARDING MODE ACTIVE — This is a newly scaffolded project. The spec files still contain placeholder content and must be replaced with real project information before any stories can be written.
Guide the user through each step below. Ask ONE category of questions at a time — do not overwhelm the user with everything at once.
## Step 1: Project Context
Ask the user:
- What is this project? What does it do?
- Who are the target users?
- What are the core features or goals?
Then use `write_file` to write `.storkit/specs/00_CONTEXT.md` with:
- **High-Level Goal** — a clear, concise summary of what the project does
- **Core Features** — 3-5 bullet points
- **Domain Definition** — key terms and roles
- **Glossary** — project-specific terminology
## Step 2: Tech Stack
Ask the user:
- What programming language(s)?
- What framework(s) or libraries?
- What build tool(s)?
- What test runner(s)? (e.g. cargo test, pytest, jest, pnpm test)
- What linter(s)? (e.g. clippy, eslint, biome, ruff)
Then use `write_file` to write `.storkit/specs/tech/STACK.md` with:
- **Overview** of the architecture
- **Core Stack** — languages, frameworks, build tools
- **Coding Standards** — formatting, linting, quality gates
- **Libraries (Approved)** — key dependencies
## Step 3: Test Script
Based on the tech stack answers, use `write_file` to write `script/test` — a bash script that invokes the project's actual test runner. Examples:
- Rust: `cargo test`
- Python: `pytest`
- Node/TypeScript: `pnpm test`
- Go: `go test ./...`
- Multi-component: run each component's tests sequentially
The script must start with `#!/usr/bin/env bash` and `set -euo pipefail`.
## Step 4: Project Configuration
The scaffold has written `.storkit/project.toml` with example `[[component]]` sections. You must replace these examples with real definitions that match the project's actual tech stack.
First, inspect the project structure to identify the tech stack:
- Use `list_directory(".")` to see top-level files and directories
- Look for tech stack markers: `Cargo.toml` (Rust/Cargo), `package.json` (Node/frontend), `pyproject.toml` or `requirements.txt` (Python), `go.mod` (Go), `Gemfile` (Ruby)
- Check subdirectories like `frontend/`, `backend/`, `app/`, `web/` for nested stacks
- If you find a `package.json`, check whether `pnpm-lock.yaml`, `yarn.lock`, or `package-lock.json` exists to determine the package manager
Then use `read_file(".storkit/project.toml")` to see the current content, keeping the `[[agent]]` sections intact.
Finally, use `write_file` to rewrite `.storkit/project.toml` with real `[[component]]` entries. Each component needs:
- `name` — component identifier (e.g. "backend", "frontend", "app")
- `path` — relative path from project root (use "." for root, "frontend" for a frontend subdirectory)
- `setup` — list of setup commands that install dependencies and verify the build (e.g. ["pnpm install"], ["cargo check"])
- `teardown` — list of cleanup commands (usually [])
Preserve all `[[agent]]` entries from the existing file. Only replace the `[[component]]` sections.
## Step 5: Commit & Finish
After writing all files:
1. Use `exec_shell` to run: `git`, `["add", "-A"]`
2. Use `exec_shell` to run: `git`, `["commit", "-m", "docs: populate project specs and configure tooling"]`
3. Tell the user: "Your project is set up! You're ready to write Story #1. Just tell me what you'd like to build."
## Rules
- Be conversational and helpful
- After each file write, briefly confirm what you wrote
- Make specs specific to the user's project — never leave scaffold placeholders
- Do NOT skip steps or combine multiple steps into one question
"#;
+868
@@ -0,0 +1,868 @@
use crate::llm::types::{
CompletionResponse, FunctionCall, Message, Role, ToolCall, ToolDefinition,
};
use futures::StreamExt;
use reqwest::header::{CONTENT_TYPE, HeaderMap, HeaderValue};
use serde::{Deserialize, Serialize};
use serde_json::json;
use tokio::sync::watch::Receiver;
const ANTHROPIC_API_URL: &str = "https://api.anthropic.com/v1/messages";
const ANTHROPIC_VERSION: &str = "2023-06-01";
pub struct AnthropicProvider {
api_key: String,
client: reqwest::Client,
api_url: String,
}
#[derive(Debug, Serialize, Deserialize)]
struct AnthropicMessage {
role: String, // "user" or "assistant"
content: AnthropicContent,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(untagged)]
enum AnthropicContent {
Text(String),
Blocks(Vec<AnthropicContentBlock>),
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "type")]
enum AnthropicContentBlock {
#[serde(rename = "text")]
Text { text: String },
#[serde(rename = "tool_use")]
ToolUse {
id: String,
name: String,
input: serde_json::Value,
},
#[serde(rename = "tool_result")]
ToolResult {
tool_use_id: String,
content: String,
},
}
#[derive(Debug, Serialize)]
struct AnthropicTool {
name: String,
description: String,
input_schema: serde_json::Value,
}
#[derive(Debug, Deserialize)]
struct StreamEvent {
#[serde(rename = "type")]
event_type: String,
#[serde(flatten)]
data: serde_json::Value,
}
impl AnthropicProvider {
pub fn new(api_key: String) -> Self {
Self {
api_key,
client: reqwest::Client::new(),
api_url: ANTHROPIC_API_URL.to_string(),
}
}
#[cfg(test)]
fn new_with_url(api_key: String, api_url: String) -> Self {
Self {
api_key,
client: reqwest::Client::new(),
api_url,
}
}
fn convert_tools(tools: &[ToolDefinition]) -> Vec<AnthropicTool> {
tools
.iter()
.map(|tool| AnthropicTool {
name: tool.function.name.clone(),
description: tool.function.description.clone(),
input_schema: tool.function.parameters.clone(),
})
.collect()
}
fn convert_messages(messages: &[Message]) -> Vec<AnthropicMessage> {
let mut anthropic_messages: Vec<AnthropicMessage> = Vec::new();
for msg in messages {
match msg.role {
Role::System => {
continue;
}
Role::User => {
anthropic_messages.push(AnthropicMessage {
role: "user".to_string(),
content: AnthropicContent::Text(msg.content.clone()),
});
}
Role::Assistant => {
if let Some(tool_calls) = &msg.tool_calls {
let mut blocks = Vec::new();
if !msg.content.is_empty() {
blocks.push(AnthropicContentBlock::Text {
text: msg.content.clone(),
});
}
for call in tool_calls {
let input: serde_json::Value =
serde_json::from_str(&call.function.arguments).unwrap_or(json!({}));
blocks.push(AnthropicContentBlock::ToolUse {
id: call
.id
.clone()
.unwrap_or_else(|| uuid::Uuid::new_v4().to_string()),
name: call.function.name.clone(),
input,
});
}
anthropic_messages.push(AnthropicMessage {
role: "assistant".to_string(),
content: AnthropicContent::Blocks(blocks),
});
} else {
anthropic_messages.push(AnthropicMessage {
role: "assistant".to_string(),
content: AnthropicContent::Text(msg.content.clone()),
});
}
}
Role::Tool => {
let tool_use_id = msg.tool_call_id.clone().unwrap_or_default();
anthropic_messages.push(AnthropicMessage {
role: "user".to_string(),
content: AnthropicContent::Blocks(vec![
AnthropicContentBlock::ToolResult {
tool_use_id,
content: msg.content.clone(),
},
]),
});
}
}
}
anthropic_messages
}
fn extract_system_prompt(messages: &[Message]) -> String {
messages
.iter()
.filter(|m| matches!(m.role, Role::System))
.map(|m| m.content.as_str())
.collect::<Vec<_>>()
.join("\n\n")
}
pub async fn chat_stream<F, A>(
&self,
model: &str,
messages: &[Message],
tools: &[ToolDefinition],
cancel_rx: &mut Receiver<bool>,
mut on_token: F,
mut on_activity: A,
) -> Result<CompletionResponse, String>
where
F: FnMut(&str),
A: FnMut(&str),
{
let anthropic_messages = Self::convert_messages(messages);
let anthropic_tools = Self::convert_tools(tools);
let system_prompt = Self::extract_system_prompt(messages);
let mut request_body = json!({
"model": model,
"max_tokens": 4096,
"messages": anthropic_messages,
"stream": true,
});
if !system_prompt.is_empty() {
request_body["system"] = json!(system_prompt);
}
if !anthropic_tools.is_empty() {
request_body["tools"] = json!(anthropic_tools);
}
let mut headers = HeaderMap::new();
headers.insert(CONTENT_TYPE, HeaderValue::from_static("application/json"));
headers.insert(
"x-api-key",
HeaderValue::from_str(&self.api_key).map_err(|e| e.to_string())?,
);
headers.insert(
"anthropic-version",
HeaderValue::from_static(ANTHROPIC_VERSION),
);
let response = self
.client
.post(&self.api_url)
.headers(headers)
.json(&request_body)
.send()
.await
.map_err(|e| format!("Failed to send request to Anthropic: {e}"))?;
if !response.status().is_success() {
let status = response.status();
let error_text = response
.text()
.await
.unwrap_or_else(|_| "Unknown error".to_string());
return Err(format!("Anthropic API error {status}: {error_text}"));
}
let mut stream = response.bytes_stream();
let mut accumulated_text = String::new();
let mut tool_calls: Vec<ToolCall> = Vec::new();
let mut current_tool_use: Option<(String, String, String)> = None;
// SSE events can be split across network reads, so carry partial lines
// in a buffer instead of parsing each chunk in isolation.
let mut line_buffer = String::new();
loop {
let chunk = tokio::select! {
result = stream.next() => {
match result {
Some(c) => c,
None => break,
}
}
_ = cancel_rx.changed() => {
if *cancel_rx.borrow() {
return Err("Chat cancelled by user".to_string());
}
continue;
}
};
let bytes = chunk.map_err(|e| format!("Stream error: {e}"))?;
line_buffer.push_str(&String::from_utf8_lossy(&bytes));
while let Some(newline_pos) = line_buffer.find('\n') {
let line = line_buffer[..newline_pos].trim().to_string();
line_buffer.drain(..=newline_pos);
if let Some(json_str) = line.strip_prefix("data: ") {
if json_str == "[DONE]" {
break;
}
let event: StreamEvent = match serde_json::from_str(json_str) {
Ok(e) => e,
Err(_) => continue,
};
match event.event_type.as_str() {
"content_block_start" => {
if let Some(content_block) = event.data.get("content_block")
&& content_block.get("type") == Some(&json!("tool_use"))
{
let id = content_block["id"].as_str().unwrap_or("").to_string();
let name = content_block["name"].as_str().unwrap_or("").to_string();
on_activity(&name);
current_tool_use = Some((id, name, String::new()));
}
}
"content_block_delta" => {
if let Some(delta) = event.data.get("delta") {
if delta.get("type") == Some(&json!("text_delta")) {
if let Some(text) = delta.get("text").and_then(|t| t.as_str()) {
accumulated_text.push_str(text);
on_token(text);
}
} else if delta.get("type") == Some(&json!("input_json_delta"))
&& let Some((_, _, input_json)) = &mut current_tool_use
&& let Some(partial) =
delta.get("partial_json").and_then(|p| p.as_str())
{
input_json.push_str(partial);
}
}
}
"content_block_stop" => {
if let Some((id, name, input_json)) = current_tool_use.take() {
tool_calls.push(ToolCall {
id: Some(id),
kind: "function".to_string(),
function: FunctionCall {
name,
arguments: input_json,
},
});
}
}
_ => {}
}
}
}
}
Ok(CompletionResponse {
content: if accumulated_text.is_empty() {
None
} else {
Some(accumulated_text)
},
tool_calls: if tool_calls.is_empty() {
None
} else {
Some(tool_calls)
},
session_id: None,
})
}
}
#[cfg(test)]
mod tests {
use super::{AnthropicContent, AnthropicContentBlock, AnthropicProvider};
use crate::llm::types::{
FunctionCall, Message, Role, ToolCall, ToolDefinition, ToolFunctionDefinition,
};
use serde_json::json;
fn user_msg(content: &str) -> Message {
Message {
role: Role::User,
content: content.to_string(),
tool_calls: None,
tool_call_id: None,
}
}
fn system_msg(content: &str) -> Message {
Message {
role: Role::System,
content: content.to_string(),
tool_calls: None,
tool_call_id: None,
}
}
fn assistant_msg(content: &str) -> Message {
Message {
role: Role::Assistant,
content: content.to_string(),
tool_calls: None,
tool_call_id: None,
}
}
fn make_tool_def(name: &str) -> ToolDefinition {
ToolDefinition {
kind: "function".to_string(),
function: ToolFunctionDefinition {
name: name.to_string(),
description: format!("{name} description"),
parameters: json!({"type": "object", "properties": {}}),
},
}
}
// ── convert_tools ────────────────────────────────────────────────────────
#[test]
fn test_convert_tools_empty() {
let result = AnthropicProvider::convert_tools(&[]);
assert!(result.is_empty());
}
#[test]
fn test_convert_tools_single() {
let tool = make_tool_def("search_files");
let result = AnthropicProvider::convert_tools(&[tool]);
assert_eq!(result.len(), 1);
assert_eq!(result[0].name, "search_files");
assert_eq!(result[0].description, "search_files description");
assert_eq!(
result[0].input_schema,
json!({"type": "object", "properties": {}})
);
}
#[test]
fn test_convert_tools_multiple() {
let tools = vec![make_tool_def("read_file"), make_tool_def("write_file")];
let result = AnthropicProvider::convert_tools(&tools);
assert_eq!(result.len(), 2);
assert_eq!(result[0].name, "read_file");
assert_eq!(result[1].name, "write_file");
}
// ── convert_messages ─────────────────────────────────────────────────────
#[test]
fn test_convert_messages_user() {
let msgs = vec![user_msg("Hello")];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
assert_eq!(result[0].role, "user");
match &result[0].content {
AnthropicContent::Text(t) => assert_eq!(t, "Hello"),
_ => panic!("Expected text content"),
}
}
#[test]
fn test_convert_messages_system_skipped() {
let msgs = vec![system_msg("You are helpful"), user_msg("Hi")];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
assert_eq!(result[0].role, "user");
}
#[test]
fn test_convert_messages_assistant_text() {
let msgs = vec![assistant_msg("I can help with that")];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
assert_eq!(result[0].role, "assistant");
match &result[0].content {
AnthropicContent::Text(t) => assert_eq!(t, "I can help with that"),
_ => panic!("Expected text content"),
}
}
#[test]
fn test_convert_messages_assistant_with_tool_calls_no_content() {
let msgs = vec![Message {
role: Role::Assistant,
content: String::new(),
tool_calls: Some(vec![ToolCall {
id: Some("toolu_abc".to_string()),
kind: "function".to_string(),
function: FunctionCall {
name: "search_files".to_string(),
arguments: r#"{"pattern": "*.rs"}"#.to_string(),
},
}]),
tool_call_id: None,
}];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
assert_eq!(result[0].role, "assistant");
match &result[0].content {
AnthropicContent::Blocks(blocks) => {
assert_eq!(blocks.len(), 1);
match &blocks[0] {
AnthropicContentBlock::ToolUse { id, name, .. } => {
assert_eq!(id, "toolu_abc");
assert_eq!(name, "search_files");
}
_ => panic!("Expected ToolUse block"),
}
}
_ => panic!("Expected blocks content"),
}
}
#[test]
fn test_convert_messages_assistant_with_tool_calls_and_content() {
let msgs = vec![Message {
role: Role::Assistant,
content: "Let me search for that".to_string(),
tool_calls: Some(vec![ToolCall {
id: Some("toolu_xyz".to_string()),
kind: "function".to_string(),
function: FunctionCall {
name: "read_file".to_string(),
arguments: r#"{"path": "main.rs"}"#.to_string(),
},
}]),
tool_call_id: None,
}];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
match &result[0].content {
AnthropicContent::Blocks(blocks) => {
assert_eq!(blocks.len(), 2);
match &blocks[0] {
AnthropicContentBlock::Text { text } => {
assert_eq!(text, "Let me search for that");
}
_ => panic!("Expected Text block first"),
}
match &blocks[1] {
AnthropicContentBlock::ToolUse { id, name, .. } => {
assert_eq!(id, "toolu_xyz");
assert_eq!(name, "read_file");
}
_ => panic!("Expected ToolUse block second"),
}
}
_ => panic!("Expected blocks content"),
}
}
#[test]
fn test_convert_messages_assistant_tool_call_invalid_json_args() {
// Invalid JSON args fall back to {}
let msgs = vec![Message {
role: Role::Assistant,
content: String::new(),
tool_calls: Some(vec![ToolCall {
id: None,
kind: "function".to_string(),
function: FunctionCall {
name: "my_tool".to_string(),
arguments: "not valid json".to_string(),
},
}]),
tool_call_id: None,
}];
let result = AnthropicProvider::convert_messages(&msgs);
match &result[0].content {
AnthropicContent::Blocks(blocks) => match &blocks[0] {
AnthropicContentBlock::ToolUse { input, .. } => {
assert_eq!(*input, json!({}));
}
_ => panic!("Expected ToolUse block"),
},
_ => panic!("Expected blocks"),
}
}
#[test]
fn test_convert_messages_assistant_tool_call_no_id_generates_uuid() {
let msgs = vec![Message {
role: Role::Assistant,
content: String::new(),
tool_calls: Some(vec![ToolCall {
id: None, // no id provided
kind: "function".to_string(),
function: FunctionCall {
name: "my_tool".to_string(),
arguments: "{}".to_string(),
},
}]),
tool_call_id: None,
}];
let result = AnthropicProvider::convert_messages(&msgs);
match &result[0].content {
AnthropicContent::Blocks(blocks) => match &blocks[0] {
AnthropicContentBlock::ToolUse { id, .. } => {
assert!(!id.is_empty(), "Should have generated a UUID");
}
_ => panic!("Expected ToolUse block"),
},
_ => panic!("Expected blocks"),
}
}
#[test]
fn test_convert_messages_tool_role() {
let msgs = vec![Message {
role: Role::Tool,
content: "file content here".to_string(),
tool_calls: None,
tool_call_id: Some("toolu_123".to_string()),
}];
let result = AnthropicProvider::convert_messages(&msgs);
assert_eq!(result.len(), 1);
assert_eq!(result[0].role, "user");
match &result[0].content {
AnthropicContent::Blocks(blocks) => {
assert_eq!(blocks.len(), 1);
match &blocks[0] {
AnthropicContentBlock::ToolResult {
tool_use_id,
content,
} => {
assert_eq!(tool_use_id, "toolu_123");
assert_eq!(content, "file content here");
}
_ => panic!("Expected ToolResult block"),
}
}
_ => panic!("Expected blocks content"),
}
}
#[test]
fn test_convert_messages_tool_role_no_id_defaults_empty() {
let msgs = vec![Message {
role: Role::Tool,
content: "result".to_string(),
tool_calls: None,
tool_call_id: None,
}];
let result = AnthropicProvider::convert_messages(&msgs);
match &result[0].content {
AnthropicContent::Blocks(blocks) => match &blocks[0] {
AnthropicContentBlock::ToolResult { tool_use_id, .. } => {
assert_eq!(tool_use_id, "");
}
_ => panic!("Expected ToolResult block"),
},
_ => panic!("Expected blocks"),
}
}
#[test]
fn test_convert_messages_mixed_roles() {
let msgs = vec![
system_msg("Be helpful"),
user_msg("What is the time?"),
assistant_msg("I can check that."),
];
let result = AnthropicProvider::convert_messages(&msgs);
// System is skipped
assert_eq!(result.len(), 2);
assert_eq!(result[0].role, "user");
assert_eq!(result[1].role, "assistant");
}
// ── extract_system_prompt ─────────────────────────────────────────────────
#[test]
fn test_extract_system_prompt_no_messages() {
let msgs: Vec<Message> = vec![];
let prompt = AnthropicProvider::extract_system_prompt(&msgs);
assert!(prompt.is_empty());
}
#[test]
fn test_extract_system_prompt_no_system_messages() {
let msgs = vec![user_msg("Hello"), assistant_msg("Hi there")];
let prompt = AnthropicProvider::extract_system_prompt(&msgs);
assert!(prompt.is_empty());
}
#[test]
fn test_extract_system_prompt_single() {
let msgs = vec![system_msg("You are a helpful assistant"), user_msg("Hi")];
let prompt = AnthropicProvider::extract_system_prompt(&msgs);
assert_eq!(prompt, "You are a helpful assistant");
}
#[test]
fn test_extract_system_prompt_multiple_joined() {
let msgs = vec![
system_msg("First instruction"),
system_msg("Second instruction"),
user_msg("Hello"),
];
let prompt = AnthropicProvider::extract_system_prompt(&msgs);
assert_eq!(prompt, "First instruction\n\nSecond instruction");
}
// ── chat_stream (HTTP mocked) ─────────────────────────────────────────────
#[tokio::test]
async fn test_chat_stream_text_response() {
let mut server = mockito::Server::new_async().await;
let delta1 = json!({
"type": "content_block_delta",
"delta": {"type": "text_delta", "text": "Hello"}
});
let delta2 = json!({
"type": "content_block_delta",
"delta": {"type": "text_delta", "text": " world"}
});
let body = format!("data: {delta1}\ndata: {delta2}\ndata: [DONE]\n");
let _m = server
.mock("POST", "/v1/messages")
.with_status(200)
.with_header("content-type", "text/event-stream")
.with_body(body)
.create_async()
.await;
let provider = AnthropicProvider::new_with_url(
"test-key".to_string(),
format!("{}/v1/messages", server.url()),
);
let (_tx, mut cancel_rx) = tokio::sync::watch::channel(false);
let mut tokens = Vec::<String>::new();
let result = provider
.chat_stream(
"claude-3-5-sonnet-20241022",
&[user_msg("Hello")],
&[],
&mut cancel_rx,
|t| tokens.push(t.to_string()),
|_| {},
)
.await;
assert!(result.is_ok());
let response = result.unwrap();
assert_eq!(response.content, Some("Hello world".to_string()));
assert!(response.tool_calls.is_none());
assert_eq!(tokens, vec!["Hello", " world"]);
}
#[tokio::test]
async fn test_chat_stream_error_response() {
let mut server = mockito::Server::new_async().await;
let _m = server
.mock("POST", "/v1/messages")
.with_status(401)
.with_body(r#"{"error":{"type":"authentication_error","message":"Invalid API key"}}"#)
.create_async()
.await;
let provider = AnthropicProvider::new_with_url(
"bad-key".to_string(),
format!("{}/v1/messages", server.url()),
);
let (_tx, mut cancel_rx) = tokio::sync::watch::channel(false);
let result = provider
.chat_stream(
"claude-3-5-sonnet-20241022",
&[user_msg("Hello")],
&[],
&mut cancel_rx,
|_| {},
|_| {},
)
.await;
assert!(result.is_err());
assert!(result.unwrap_err().contains("401"));
}
#[tokio::test]
async fn test_chat_stream_tool_use_response() {
let mut server = mockito::Server::new_async().await;
let start_event = json!({
"type": "content_block_start",
"content_block": {"type": "tool_use", "id": "toolu_abc", "name": "search_files"}
});
let delta_event = json!({
"type": "content_block_delta",
"delta": {"type": "input_json_delta", "partial_json": "{}"}
});
let stop_event = json!({"type": "content_block_stop"});
let body = format!(
"data: {start_event}\ndata: {delta_event}\ndata: {stop_event}\ndata: [DONE]\n"
);
let _m = server
.mock("POST", "/v1/messages")
.with_status(200)
.with_header("content-type", "text/event-stream")
.with_body(body)
.create_async()
.await;
let provider = AnthropicProvider::new_with_url(
"test-key".to_string(),
format!("{}/v1/messages", server.url()),
);
let (_tx, mut cancel_rx) = tokio::sync::watch::channel(false);
let mut activities = Vec::<String>::new();
let result = provider
.chat_stream(
"claude-3-5-sonnet-20241022",
&[user_msg("Find Rust files")],
&[make_tool_def("search_files")],
&mut cancel_rx,
|_| {},
|a| activities.push(a.to_string()),
)
.await;
assert!(result.is_ok());
let response = result.unwrap();
assert!(response.content.is_none());
let tool_calls = response.tool_calls.expect("Expected tool calls");
assert_eq!(tool_calls.len(), 1);
assert_eq!(tool_calls[0].id, Some("toolu_abc".to_string()));
assert_eq!(tool_calls[0].function.name, "search_files");
assert_eq!(activities, vec!["search_files"]);
}
#[tokio::test]
async fn test_chat_stream_includes_system_prompt() {
let mut server = mockito::Server::new_async().await;
let delta = json!({
"type": "content_block_delta",
"delta": {"type": "text_delta", "text": "ok"}
});
let body = format!("data: {delta}\ndata: [DONE]\n");
let _m = server
.mock("POST", "/v1/messages")
.with_status(200)
.with_header("content-type", "text/event-stream")
.with_body(body)
.create_async()
.await;
let provider = AnthropicProvider::new_with_url(
"test-key".to_string(),
format!("{}/v1/messages", server.url()),
);
let (_tx, mut cancel_rx) = tokio::sync::watch::channel(false);
let messages = vec![system_msg("Be concise"), user_msg("Hello")];
let result = provider
.chat_stream(
"claude-3-5-sonnet-20241022",
&messages,
&[],
&mut cancel_rx,
|_| {},
|_| {},
)
.await;
assert!(result.is_ok());
assert_eq!(result.unwrap().content, Some("ok".to_string()));
}
#[tokio::test]
async fn test_chat_stream_empty_response_gives_none_content() {
let mut server = mockito::Server::new_async().await;
let _m = server
.mock("POST", "/v1/messages")
.with_status(200)
.with_header("content-type", "text/event-stream")
.with_body("data: [DONE]\n")
.create_async()
.await;
let provider = AnthropicProvider::new_with_url(
"test-key".to_string(),
format!("{}/v1/messages", server.url()),
);
let (_tx, mut cancel_rx) = tokio::sync::watch::channel(false);
let result = provider
.chat_stream(
"claude-3-5-sonnet-20241022",
&[user_msg("Hello")],
&[],
&mut cancel_rx,
|_| {},
|_| {},
)
.await;
assert!(result.is_ok());
let response = result.unwrap();
assert!(response.content.is_none());
assert!(response.tool_calls.is_none());
}
}
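The streaming loop in `chat_stream` has to cope with SSE events that arrive split across network reads, which is why complete lines are drained from a buffer rather than parsed per chunk. A self-contained sketch of just that framing step (plain strings, no HTTP or serde; the function name and chunk contents are illustrative, not from this commit):

```rust
// Accumulate raw chunks and emit only complete `data: `-prefixed payloads;
// a partial line stays in the buffer until its newline arrives.
fn drain_sse_lines(buffer: &mut String, chunk: &str) -> Vec<String> {
    buffer.push_str(chunk);
    let mut events = Vec::new();
    while let Some(pos) = buffer.find('\n') {
        let line = buffer[..pos].trim().to_string();
        buffer.drain(..=pos);
        if let Some(payload) = line.strip_prefix("data: ") {
            events.push(payload.to_string());
        }
    }
    events
}
```

Feeding `"data: hel"` and then `"lo\n"` yields no events on the first call and one `"hello"` event on the second, which is exactly the boundary case a per-chunk `text.lines()` parse would corrupt.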
File diff suppressed because it is too large.
+3
@@ -0,0 +1,3 @@
pub mod anthropic;
pub mod claude_code;
pub mod ollama;
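The three provider modules are alternatives behind a common chat interface. A hedged sketch of how a caller might map a config string to one of them; the `ProviderKind` enum and `from_name` helper are assumptions for illustration, not part of this commit:

```rust
// Illustrative only: the commit exposes the modules, but this dispatch
// logic is an assumption about the calling code.
#[derive(Debug, PartialEq)]
enum ProviderKind {
    Anthropic,
    ClaudeCode,
    Ollama,
}

fn from_name(name: &str) -> Option<ProviderKind> {
    match name.to_ascii_lowercase().as_str() {
        "anthropic" => Some(ProviderKind::Anthropic),
        "claude_code" | "claude-code" => Some(ProviderKind::ClaudeCode),
        "ollama" => Some(ProviderKind::Ollama),
        _ => None,
    }
}
```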
+267
@@ -0,0 +1,267 @@
use crate::llm::types::{
CompletionResponse, FunctionCall, Message, ModelProvider, Role, ToolCall, ToolDefinition,
};
use async_trait::async_trait;
use futures::StreamExt;
use serde::{Deserialize, Serialize};
use serde_json::Value;
pub struct OllamaProvider {
base_url: String,
}
impl OllamaProvider {
pub fn new(base_url: String) -> Self {
Self { base_url }
}
pub async fn get_models(base_url: &str) -> Result<Vec<String>, String> {
let client = reqwest::Client::new();
let url = format!("{}/api/tags", base_url.trim_end_matches('/'));
let res = client
.get(&url)
.send()
.await
.map_err(|e| format!("Request failed: {}", e))?;
if !res.status().is_success() {
let status = res.status();
let text = res.text().await.unwrap_or_default();
return Err(format!("Ollama API error {}: {}", status, text));
}
let body: OllamaTagsResponse = res
.json()
.await
.map_err(|e| format!("Failed to parse response: {}", e))?;
Ok(body.models.into_iter().map(|m| m.name).collect())
}
/// Streaming chat that calls `on_token` for each token chunk.
pub async fn chat_stream<F>(
&self,
model: &str,
messages: &[Message],
tools: &[ToolDefinition],
cancel_rx: &mut tokio::sync::watch::Receiver<bool>,
mut on_token: F,
) -> Result<CompletionResponse, String>
where
F: FnMut(&str) + Send,
{
let client = reqwest::Client::new();
let url = format!("{}/api/chat", self.base_url.trim_end_matches('/'));
let ollama_messages: Vec<OllamaRequestMessage> = messages
.iter()
.map(|m| {
let tool_calls = m.tool_calls.as_ref().map(|calls| {
calls
.iter()
.map(|tc| {
let args_val: Value = serde_json::from_str(&tc.function.arguments)
.unwrap_or(Value::String(tc.function.arguments.clone()));
OllamaRequestToolCall {
kind: tc.kind.clone(),
function: OllamaRequestFunctionCall {
name: tc.function.name.clone(),
arguments: args_val,
},
}
})
.collect()
});
OllamaRequestMessage {
role: m.role.clone(),
content: m.content.clone(),
tool_calls,
tool_call_id: m.tool_call_id.clone(),
}
})
.collect();
let request_body = OllamaRequest {
model,
messages: ollama_messages,
stream: true,
tools,
};
let res = client
.post(&url)
.json(&request_body)
.send()
.await
.map_err(|e| format!("Request failed: {}", e))?;
if !res.status().is_success() {
let status = res.status();
let text = res.text().await.unwrap_or_default();
return Err(format!("Ollama API error {}: {}", status, text));
}
let mut stream = res.bytes_stream();
let mut buffer = String::new();
let mut accumulated_content = String::new();
let mut final_tool_calls: Option<Vec<ToolCall>> = None;
loop {
if *cancel_rx.borrow() {
return Err("Chat cancelled by user".to_string());
}
let chunk_result = tokio::select! {
chunk = stream.next() => {
match chunk {
Some(c) => c,
None => break,
}
}
_ = cancel_rx.changed() => {
if *cancel_rx.borrow() {
return Err("Chat cancelled by user".to_string());
} else {
continue;
}
}
};
let chunk = chunk_result.map_err(|e| format!("Stream error: {}", e))?;
buffer.push_str(&String::from_utf8_lossy(&chunk));
while let Some(newline_pos) = buffer.find('\n') {
let line = buffer[..newline_pos].trim().to_string();
buffer = buffer[newline_pos + 1..].to_string();
if line.is_empty() {
continue;
}
let stream_msg: OllamaStreamResponse =
serde_json::from_str(&line).map_err(|e| format!("JSON parse error: {}", e))?;
if !stream_msg.message.content.is_empty() {
accumulated_content.push_str(&stream_msg.message.content);
on_token(&stream_msg.message.content);
}
if let Some(tool_calls) = stream_msg.message.tool_calls {
// Append rather than overwrite, so tool calls that arrive in
// separate stream chunks are all preserved.
final_tool_calls.get_or_insert_with(Vec::new).extend(
tool_calls.into_iter().map(|tc| ToolCall {
id: None,
kind: "function".to_string(),
function: FunctionCall {
name: tc.function.name,
arguments: tc.function.arguments.to_string(),
},
}),
);
}
if stream_msg.done {
break;
}
}
}
Ok(CompletionResponse {
content: if accumulated_content.is_empty() {
None
} else {
Some(accumulated_content)
},
tool_calls: final_tool_calls,
session_id: None,
})
}
}
#[derive(Deserialize)]
struct OllamaTagsResponse {
models: Vec<OllamaModelTag>,
}
#[derive(Deserialize)]
struct OllamaModelTag {
name: String,
}
#[derive(Serialize)]
struct OllamaRequest<'a> {
model: &'a str,
messages: Vec<OllamaRequestMessage>,
stream: bool,
#[serde(skip_serializing_if = "is_empty_tools")]
tools: &'a [ToolDefinition],
}
fn is_empty_tools(tools: &&[ToolDefinition]) -> bool {
tools.is_empty()
}
#[derive(Serialize)]
struct OllamaRequestMessage {
role: Role,
content: String,
#[serde(skip_serializing_if = "Option::is_none")]
tool_calls: Option<Vec<OllamaRequestToolCall>>,
#[serde(skip_serializing_if = "Option::is_none")]
tool_call_id: Option<String>,
}
#[derive(Serialize)]
struct OllamaRequestToolCall {
function: OllamaRequestFunctionCall,
#[serde(rename = "type")]
kind: String,
}
#[derive(Serialize)]
struct OllamaRequestFunctionCall {
name: String,
arguments: Value,
}
#[derive(Deserialize)]
struct OllamaStreamResponse {
message: OllamaStreamMessage,
done: bool,
}
#[derive(Deserialize)]
struct OllamaStreamMessage {
#[serde(default)]
content: String,
#[serde(default)]
tool_calls: Option<Vec<OllamaResponseToolCall>>,
}
#[derive(Deserialize)]
struct OllamaResponseToolCall {
function: OllamaResponseFunctionCall,
}
#[derive(Deserialize)]
struct OllamaResponseFunctionCall {
name: String,
arguments: Value,
}
#[async_trait]
impl ModelProvider for OllamaProvider {
async fn chat(
&self,
_model: &str,
_messages: &[Message],
_tools: &[ToolDefinition],
) -> Result<CompletionResponse, String> {
Err("Non-streaming chat is not implemented for the Ollama provider; use chat_stream instead".to_string())
}
}
+72
@@ -0,0 +1,72 @@
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::fmt::Debug;
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum Role {
System,
User,
Assistant,
Tool,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Message {
pub role: Role,
pub content: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_calls: Option<Vec<ToolCall>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub tool_call_id: Option<String>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolCall {
pub id: Option<String>,
pub function: FunctionCall,
#[serde(rename = "type")]
pub kind: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct FunctionCall {
pub name: String,
pub arguments: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolDefinition {
#[serde(rename = "type")]
pub kind: String,
pub function: ToolFunctionDefinition,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolFunctionDefinition {
pub name: String,
pub description: String,
pub parameters: serde_json::Value,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CompletionResponse {
pub content: Option<String>,
pub tool_calls: Option<Vec<ToolCall>>,
/// Claude Code session ID for conversation resumption.
#[serde(skip_serializing_if = "Option::is_none")]
pub session_id: Option<String>,
}
#[async_trait]
#[allow(dead_code)]
pub trait ModelProvider: Send + Sync {
async fn chat(
&self,
model: &str,
messages: &[Message],
tools: &[ToolDefinition],
) -> Result<CompletionResponse, String>;
}