moved from tauri to a server with embedded UI

.gitignore
@@ -7,10 +7,11 @@ yarn-error.log*
 pnpm-debug.log*
 lerna-debug.log*

-node_modules
-dist
-dist-ssr
-*.local
+frontend/node_modules
+frontend/dist
+frontend/dist-ssr
+frontend/*.local
+server/target

 # Editor directories and files
 .vscode/*
@@ -172,6 +172,10 @@ If a user hands you this document and says "Apply this process to my project":

 **MANDATORY:** Before completing Step 4 (Verification) of any story, you MUST run all applicable linters and fix ALL errors and warnings. Zero tolerance for warnings or errors.

+**AUTO-RUN CHECKS:** Always run the required lint/test/build checks as soon as relevant changes are made. Do not ask for permission to run them—run them automatically and fix any failures.
+
+**ALWAYS FIX DIAGNOSTICS:** At every stage, you must proactively fix all errors and warnings without waiting for user confirmation. Do not pause to ask whether to fix diagnostics—fix them immediately as part of the workflow.
+
 ### TypeScript/JavaScript: Biome

 * **Tool:** [Biome](https://biomejs.dev/) - Fast formatter and linter
@@ -1,7 +1,7 @@
 # Project Context

 ## High-Level Goal
-To build a standalone **Agentic AI Code Assistant** application using Tauri. The assistant will facilitate a "Story-Driven Spec Workflow" (SDSW) for software development. Unlike a passive chat interface, this assistant acts as an **Agent**, capable of using tools to read the filesystem, execute shell commands, manage git repositories, and modify code directly to implement features.
+To build a standalone **Agentic AI Code Assistant** application as a single Rust binary that serves a Vite/React web UI and exposes a WebSocket API. The assistant will facilitate a "Story-Driven Spec Workflow" (SDSW) for software development. Unlike a passive chat interface, this assistant acts as an **Agent**, capable of using tools to read the filesystem, execute shell commands, manage git repositories, and modify code directly to implement features.

 ## Core Features
 1. **Chat Interface:** A conversational UI for the user to interact with the AI assistant.
@@ -28,6 +28,6 @@ To build a standalone **Agentic AI Code Assistant** application using Tauri. The

 ## Glossary
 * **SDSW:** Story-Driven Spec Workflow.
-* **Tauri:** The framework used to build this assistant (Rust backend + Web frontend).
+* **Web Server Binary:** The Rust binary that serves the Vite/React frontend and exposes the WebSocket API.
 * **Living Spec:** The collection of Markdown files in `.living_spec/` that define the project.
 * **Tool Call:** A structured request from the LLM to execute a specific native function.
@@ -1,12 +1,12 @@
 # Tech Stack & Constraints

 ## Overview
-This project is a desktop application built with **Tauri**. It functions as an **Agentic Code Assistant** capable of safely executing tools on the host system.
+This project is a standalone Rust **web server binary** that serves a Vite/React frontend and exposes a **WebSocket API**. The built frontend assets are packaged with the binary (in a `frontend` directory) and served as static files. It functions as an **Agentic Code Assistant** capable of safely executing tools on the host system.

 ## Core Stack
-* **Backend:** Rust (Tauri Core)
+* **Backend:** Rust (Web Server)
 * **MSRV:** Stable (latest)
-* **Framework:** Tauri v2
+* **Framework:** Poem HTTP server with WebSocket support for streaming; HTTP APIs should use Poem OpenAPI (Swagger) for non-streaming endpoints.
 * **Frontend:** TypeScript + React
 * **Build Tool:** Vite
 * **Styling:** CSS Modules or Tailwind (TBD - Defaulting to CSS Modules)
@@ -17,12 +17,12 @@ This project is a desktop application built with **Tauri**. It functions as an *
 The application follows a **Tool-Use (Function Calling)** architecture:
 1. **Frontend:** Collects user input and sends it to the LLM.
 2. **LLM:** Decides to generate text OR request a **Tool Call** (e.g., `execute_shell`, `read_file`).
-3. **Tauri Backend (The "Hand"):**
+3. **Web Server Backend (The "Hand"):**
    * Intercepts Tool Calls.
    * Validates the request against the **Safety Policy**.
    * Executes the native code (File I/O, Shell Process, Search).
    * Returns the output (stdout/stderr/file content) to the LLM.
-* **Event Loop:** The backend emits real-time events (`chat:update`) to the frontend to ensure UI responsiveness during long-running Agent tasks.
+* **Streaming:** The backend sends real-time updates over WebSocket to keep the UI responsive during long-running Agent tasks.

 ## LLM Provider Abstraction
 To support both Remote and Local models, the system implements a `ModelProvider` abstraction layer.
@@ -39,8 +39,7 @@ To support both Remote and Local models, the system implements a `ModelProvider`
     * Otherwise → Ollama
 * Single unified model dropdown with section headers ("Anthropic", "Ollama")
 * **API Key Management:**
-    * Anthropic API key stored in OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service)
-    * Uses `keyring` crate for cross-platform secure storage
+    * Anthropic API key stored server-side and persisted securely
     * On first use of Claude model, user prompted to enter API key
     * Key persists across sessions (no re-entry needed)
@@ -98,15 +97,11 @@ To support both Remote and Local models, the system implements a `ModelProvider`
 * `tokio`: Async runtime.
 * `reqwest`: For LLM API calls (Anthropic, Ollama).
 * `eventsource-stream`: For Server-Sent Events (Anthropic streaming).
-* `keyring`: Secure API key storage in OS keychain.
 * `uuid`: For unique message IDs.
 * `chrono`: For timestamps.
-* `tauri-plugin-dialog`: Native system dialogs.
-* `tauri-plugin-store`: Persistent key-value storage.
+* `poem`: HTTP server framework.
+* `poem-openapi`: OpenAPI (Swagger) for non-streaming HTTP APIs.
 * **JavaScript:**
-* `@tauri-apps/api`: Tauri Bridge.
-* `@tauri-apps/plugin-dialog`: Dialog API.
-* `@tauri-apps/plugin-store`: Store API.
 * `react-markdown`: For rendering chat responses.

 ## Safety & Sandbox
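The streaming contract in the hunk above (incremental tokens over WebSocket, then a final history update) can be sketched as a small pure reducer. This is only an illustrative sketch: the `WsResponse` shape below mirrors the one the diff adds in `frontend/src/api/client.ts`, and `applyWsMessage` is a hypothetical helper, not code from the PR.

```typescript
// Message shapes mirroring the WsResponse union added in frontend/src/api/client.ts.
type ChatMessage = { role: string; content: string };
type WsResponse =
  | { type: "token"; content: string }
  | { type: "update"; messages: ChatMessage[] }
  | { type: "error"; message: string };

type ChatState = { streaming: string; history: ChatMessage[] };

// Fold one raw WebSocket frame into the UI state.
function applyWsMessage(state: ChatState, raw: string): ChatState {
  const msg = JSON.parse(raw) as WsResponse;
  if (msg.type === "token") {
    // Append the streamed token to the in-progress assistant reply.
    return { ...state, streaming: state.streaming + msg.content };
  }
  if (msg.type === "update") {
    // A final history snapshot replaces the streaming buffer.
    return { streaming: "", history: msg.messages };
  }
  // "error": keep state unchanged; a real UI would surface msg.message.
  return state;
}
```

Modeling the stream as a reducer keeps the WebSocket handler trivial: each frame maps old state to new state, which also matches how the React component later applies `onToken`/`onUpdate`.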
.living_spec/stories/archive/24_tauri_to_browser_ui.md (new file)

# Story 01: Replace Tauri with Browser UI Served by Rust Binary

## User Story
As a user, I want to run a single Rust binary that serves the web UI and exposes a WebSocket API, so I can use the app in my browser without installing a desktop shell.

## Acceptance Criteria
- The app runs as a single Rust binary that:
  - Serves the built frontend assets from a `frontend` directory.
  - Exposes a WebSocket endpoint for chat streaming and tool execution.
- The browser UI uses the WebSocket API for:
  - Sending chat messages.
  - Receiving streaming token updates and final chat history updates.
  - Requesting file operations, search, and shell execution.
- The project selection UI uses a browser file picker (not native OS dialogs).
- Model preference and last project selection are persisted server-side (no Tauri store).
- The Tauri backend and configuration are removed from the build pipeline.
- The frontend remains a Vite/React build and is served as static assets by the Rust binary.

## Out of Scope
- Reworking the LLM provider implementations beyond wiring changes.
- Changing the UI layout/visual design.
- Adding authentication or multi-user support.
- Switching away from Vite for frontend builds.
README.md
@@ -1,11 +1,19 @@
-# Tauri + React + Typescript
+# Living Spec Standalone (Web Server Binary)

-This template should help get you started developing with Tauri, React and Typescript in Vite.
+This app runs as a single Rust web server binary that serves the Vite/React frontend and exposes APIs.
+The frontend lives in the `frontend/` directory.

 ## Running it

 ```bash
-pnpm tauri dev
+# Build the frontend
+cd frontend
+pnpm install
+pnpm build
+cd ..
+
+# Run the server (serves embedded frontend/dist/)
+cargo run --manifest-path server/Cargo.toml
 ```
biome.json
@@ -1,12 +1,12 @@
 {
-  "$schema": "https://biomejs.dev/schemas/2.3.10/schema.json",
+  "$schema": "https://biomejs.dev/schemas/2.3.15/schema.json",
   "vcs": {
     "enabled": true,
     "clientKind": "git",
     "useIgnoreFile": true
   },
   "files": {
-    "includes": ["**", "!!**/dist"]
+    "includes": ["frontend/**"]
   },
   "formatter": {
     "enabled": true,
frontend/.gitignore (new file)

# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local
@@ -7,14 +7,10 @@
     "dev": "vite",
     "build": "tsc && vite build",
     "preview": "vite preview",
-    "tauri": "tauri",
+    "server": "cargo run --manifest-path server/Cargo.toml",
     "test": "jest"
   },
   "dependencies": {
-    "@tauri-apps/api": "^2",
-    "@tauri-apps/plugin-dialog": "^2.4.2",
-    "@tauri-apps/plugin-opener": "^2",
-    "@tauri-apps/plugin-store": "^2.4.1",
     "@types/react-syntax-highlighter": "^15.5.13",
     "react": "^19.1.0",
     "react-dom": "^19.1.0",
@@ -22,7 +18,6 @@
     "react-syntax-highlighter": "^16.1.0"
   },
   "devDependencies": {
-    "@tauri-apps/cli": "^2",
     "@testing-library/jest-dom": "^6.0.0",
     "@testing-library/react": "^14.0.0",
     "@testing-library/user-event": "^14.4.3",
pnpm-lock.yaml → frontend/pnpm-lock.yaml (generated)
@@ -8,18 +8,6 @@ importers:

   .:
     dependencies:
-      '@tauri-apps/api':
-        specifier: ^2
-        version: 2.9.1
-      '@tauri-apps/plugin-dialog':
-        specifier: ^2.4.2
-        version: 2.4.2
-      '@tauri-apps/plugin-opener':
-        specifier: ^2
-        version: 2.5.2
-      '@tauri-apps/plugin-store':
-        specifier: ^2.4.1
-        version: 2.4.1
       '@types/react-syntax-highlighter':
         specifier: ^15.5.13
         version: 15.5.13
@@ -36,9 +24,6 @@ importers:
       specifier: ^16.1.0
       version: 16.1.0(react@19.2.3)
     devDependencies:
-      '@tauri-apps/cli':
-        specifier: ^2
-        version: 2.9.6
      '@testing-library/jest-dom':
        specifier: ^6.0.0
        version: 6.9.1
@@ -624,89 +609,6 @@ packages:
   '@sinonjs/fake-timers@10.3.0':
     resolution: {integrity: sha512-V4BG07kuYSUkTCSBHG8G8TNhM+F19jXFWnQtzj+we8DrkpSBCee9Z3Ms8yiGer/dlmhe35/Xdgyo3/0rQKg7YA==}

-  '@tauri-apps/api@2.9.1':
-    resolution: {integrity: sha512-IGlhP6EivjXHepbBic618GOmiWe4URJiIeZFlB7x3czM0yDHHYviH1Xvoiv4FefdkQtn6v7TuwWCRfOGdnVUGw==}
-
-  '@tauri-apps/cli-darwin-arm64@2.9.6':
-    resolution: {integrity: sha512-gf5no6N9FCk1qMrti4lfwP77JHP5haASZgVbBgpZG7BUepB3fhiLCXGUK8LvuOjP36HivXewjg72LTnPDScnQQ==}
-    engines: {node: '>= 10'}
-    cpu: [arm64]
-    os: [darwin]
-
-  '@tauri-apps/cli-darwin-x64@2.9.6':
-    resolution: {integrity: sha512-oWh74WmqbERwwrwcueJyY6HYhgCksUc6NT7WKeXyrlY/FPmNgdyQAgcLuTSkhRFuQ6zh4Np1HZpOqCTpeZBDcw==}
-    engines: {node: '>= 10'}
-    cpu: [x64]
-    os: [darwin]
-
-  '@tauri-apps/cli-linux-arm-gnueabihf@2.9.6':
-    resolution: {integrity: sha512-/zde3bFroFsNXOHN204DC2qUxAcAanUjVXXSdEGmhwMUZeAQalNj5cz2Qli2elsRjKN/hVbZOJj0gQ5zaYUjSg==}
-    engines: {node: '>= 10'}
-    cpu: [arm]
-    os: [linux]
-
-  '@tauri-apps/cli-linux-arm64-gnu@2.9.6':
-    resolution: {integrity: sha512-pvbljdhp9VOo4RnID5ywSxgBs7qiylTPlK56cTk7InR3kYSTJKYMqv/4Q/4rGo/mG8cVppesKIeBMH42fw6wjg==}
-    engines: {node: '>= 10'}
-    cpu: [arm64]
-    os: [linux]
-
-  '@tauri-apps/cli-linux-arm64-musl@2.9.6':
-    resolution: {integrity: sha512-02TKUndpodXBCR0oP//6dZWGYcc22Upf2eP27NvC6z0DIqvkBBFziQUcvi2n6SrwTRL0yGgQjkm9K5NIn8s6jw==}
-    engines: {node: '>= 10'}
-    cpu: [arm64]
-    os: [linux]
-
-  '@tauri-apps/cli-linux-riscv64-gnu@2.9.6':
-    resolution: {integrity: sha512-fmp1hnulbqzl1GkXl4aTX9fV+ubHw2LqlLH1PE3BxZ11EQk+l/TmiEongjnxF0ie4kV8DQfDNJ1KGiIdWe1GvQ==}
-    engines: {node: '>= 10'}
-    cpu: [riscv64]
-    os: [linux]
-
-  '@tauri-apps/cli-linux-x64-gnu@2.9.6':
-    resolution: {integrity: sha512-vY0le8ad2KaV1PJr+jCd8fUF9VOjwwQP/uBuTJvhvKTloEwxYA/kAjKK9OpIslGA9m/zcnSo74czI6bBrm2sYA==}
-    engines: {node: '>= 10'}
-    cpu: [x64]
-    os: [linux]
-
-  '@tauri-apps/cli-linux-x64-musl@2.9.6':
-    resolution: {integrity: sha512-TOEuB8YCFZTWVDzsO2yW0+zGcoMiPPwcUgdnW1ODnmgfwccpnihDRoks+ABT1e3fHb1ol8QQWsHSCovb3o2ENQ==}
-    engines: {node: '>= 10'}
-    cpu: [x64]
-    os: [linux]
-
-  '@tauri-apps/cli-win32-arm64-msvc@2.9.6':
-    resolution: {integrity: sha512-ujmDGMRc4qRLAnj8nNG26Rlz9klJ0I0jmZs2BPpmNNf0gM/rcVHhqbEkAaHPTBVIrtUdf7bGvQAD2pyIiUrBHQ==}
-    engines: {node: '>= 10'}
-    cpu: [arm64]
-    os: [win32]
-
-  '@tauri-apps/cli-win32-ia32-msvc@2.9.6':
-    resolution: {integrity: sha512-S4pT0yAJgFX8QRCyKA1iKjZ9Q/oPjCZf66A/VlG5Yw54Nnr88J1uBpmenINbXxzyhduWrIXBaUbEY1K80ZbpMg==}
-    engines: {node: '>= 10'}
-    cpu: [ia32]
-    os: [win32]
-
-  '@tauri-apps/cli-win32-x64-msvc@2.9.6':
-    resolution: {integrity: sha512-ldWuWSSkWbKOPjQMJoYVj9wLHcOniv7diyI5UAJ4XsBdtaFB0pKHQsqw/ItUma0VXGC7vB4E9fZjivmxur60aw==}
-    engines: {node: '>= 10'}
-    cpu: [x64]
-    os: [win32]
-
-  '@tauri-apps/cli@2.9.6':
-    resolution: {integrity: sha512-3xDdXL5omQ3sPfBfdC8fCtDKcnyV7OqyzQgfyT5P3+zY6lcPqIYKQBvUasNvppi21RSdfhy44ttvJmftb0PCDw==}
-    engines: {node: '>= 10'}
-    hasBin: true
-
-  '@tauri-apps/plugin-dialog@2.4.2':
-    resolution: {integrity: sha512-lNIn5CZuw8WZOn8zHzmFmDSzg5zfohWoa3mdULP0YFh/VogVdMVWZPcWSHlydsiJhRQYaTNSYKN7RmZKE2lCYQ==}
-
-  '@tauri-apps/plugin-opener@2.5.2':
-    resolution: {integrity: sha512-ei/yRRoCklWHImwpCcDK3VhNXx+QXM9793aQ64YxpqVF0BDuuIlXhZgiAkc15wnPVav+IbkYhmDJIv5R326Mew==}
-
-  '@tauri-apps/plugin-store@2.4.1':
-    resolution: {integrity: sha512-ckGSEzZ5Ii4Hf2D5x25Oqnm2Zf9MfDWAzR+volY0z/OOBz6aucPKEY0F649JvQ0Vupku6UJo7ugpGRDOFOunkA==}
-
   '@testing-library/dom@9.3.4':
     resolution: {integrity: sha512-FlS4ZWlp97iiNWig0Muq8p+3rVDjRiYE+YKGbAqXOu9nwJFFOdL00kFpz42M+4huzYi86vAK1sOOfyOG45muIQ==}
     engines: {node: '>=14'}
@@ -2834,67 +2736,6 @@ snapshots:
     dependencies:
       '@sinonjs/commons': 3.0.1

-  '@tauri-apps/api@2.9.1': {}
-
-  '@tauri-apps/cli-darwin-arm64@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-darwin-x64@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-arm-gnueabihf@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-arm64-gnu@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-arm64-musl@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-riscv64-gnu@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-x64-gnu@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-linux-x64-musl@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-win32-arm64-msvc@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-win32-ia32-msvc@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli-win32-x64-msvc@2.9.6':
-    optional: true
-
-  '@tauri-apps/cli@2.9.6':
-    optionalDependencies:
-      '@tauri-apps/cli-darwin-arm64': 2.9.6
-      '@tauri-apps/cli-darwin-x64': 2.9.6
-      '@tauri-apps/cli-linux-arm-gnueabihf': 2.9.6
-      '@tauri-apps/cli-linux-arm64-gnu': 2.9.6
-      '@tauri-apps/cli-linux-arm64-musl': 2.9.6
-      '@tauri-apps/cli-linux-riscv64-gnu': 2.9.6
-      '@tauri-apps/cli-linux-x64-gnu': 2.9.6
-      '@tauri-apps/cli-linux-x64-musl': 2.9.6
-      '@tauri-apps/cli-win32-arm64-msvc': 2.9.6
-      '@tauri-apps/cli-win32-ia32-msvc': 2.9.6
-      '@tauri-apps/cli-win32-x64-msvc': 2.9.6
-
-  '@tauri-apps/plugin-dialog@2.4.2':
-    dependencies:
-      '@tauri-apps/api': 2.9.1
-
-  '@tauri-apps/plugin-opener@2.5.2':
-    dependencies:
-      '@tauri-apps/api': 2.9.1
-
-  '@tauri-apps/plugin-store@2.4.1':
-    dependencies:
-      '@tauri-apps/api': 2.9.1
-
   '@testing-library/dom@9.3.4':
     dependencies:
       '@babel/code-frame': 7.27.1
@@ -23,7 +23,10 @@

 .container {
   margin: 0;
-  padding-top: 10vh;
+  padding-top: 0;
+  height: 100vh;
+  overflow: hidden;
+  box-sizing: border-box;
   display: flex;
   flex-direction: column;
   justify-content: center;
@@ -186,7 +189,10 @@ details summary span:first-child {
 }

 /* Ensure scroll functionality is maintained */
 html,
 body,
-html {
-  overflow-x: hidden;
+#root {
+  height: 100%;
+  margin: 0;
+  overflow: hidden;
 }
frontend/src/App.tsx (new file, 93 lines)

```tsx
import * as React from "react";
import { api } from "./api/client";
import { Chat } from "./components/Chat";
import "./App.css";

function App() {
  const [projectPath, setProjectPath] = React.useState<string | null>(null);
  const [errorMsg, setErrorMsg] = React.useState<string | null>(null);
  const [pathInput, setPathInput] = React.useState("");
  const [isOpening, setIsOpening] = React.useState(false);

  async function openProject(path: string) {
    if (!path.trim()) {
      setErrorMsg("Please enter a project path.");
      return;
    }

    try {
      setErrorMsg(null);
      setIsOpening(true);
      const confirmedPath = await api.openProject(path.trim());
      setProjectPath(confirmedPath);
    } catch (e) {
      console.error(e);
      const message =
        e instanceof Error
          ? e.message
          : typeof e === "string"
            ? e
            : "An error occurred opening the project.";
      setErrorMsg(message);
    } finally {
      setIsOpening(false);
    }
  }

  function handleOpen() {
    void openProject(pathInput);
  }

  async function closeProject() {
    try {
      await api.closeProject();
      setProjectPath(null);
    } catch (e) {
      console.error(e);
    }
  }

  return (
    <main
      className="container"
      style={{ height: "100vh", padding: 0, maxWidth: "100%" }}
    >
      {!projectPath ? (
        <div
          className="selection-screen"
          style={{ padding: "2rem", maxWidth: "800px", margin: "0 auto" }}
        >
          <h1>AI Code Assistant</h1>
          <p>Paste a project path to start the Story-Driven Spec Workflow.</p>
          <input
            type="text"
            value={pathInput}
            placeholder="/path/to/project"
            onChange={(event) => setPathInput(event.target.value)}
            onKeyDown={(event) => {
              if (event.key === "Enter") {
                handleOpen();
              }
            }}
            style={{ width: "100%", padding: "10px", marginTop: "12px" }}
          />
          <button type="button" onClick={handleOpen} disabled={isOpening}>
            {isOpening ? "Opening..." : "Open Project"}
          </button>
        </div>
      ) : (
        <div className="workspace" style={{ height: "100%" }}>
          <Chat projectPath={projectPath} onCloseProject={closeProject} />
        </div>
      )}

      {errorMsg && (
        <div className="error-message" style={{ marginTop: "20px" }}>
          <p style={{ color: "red" }}>Error: {errorMsg}</p>
        </div>
      )}
    </main>
  );
}

export default App;
```
frontend/src/api/client.ts (new file, 226 lines)

```ts
export type WsRequest =
  | {
      type: "chat";
      messages: Message[];
      config: ProviderConfig;
    }
  | {
      type: "cancel";
    };

export type WsResponse =
  | { type: "token"; content: string }
  | { type: "update"; messages: Message[] }
  | { type: "error"; message: string };

export interface ProviderConfig {
  provider: string;
  model: string;
  base_url?: string;
  enable_tools?: boolean;
}

export type Role = "system" | "user" | "assistant" | "tool";

export interface ToolCall {
  id?: string;
  type: string;
  function: {
    name: string;
    arguments: string;
  };
}

export interface Message {
  role: Role;
  content: string;
  tool_calls?: ToolCall[];
  tool_call_id?: string;
}

export interface FileEntry {
  name: string;
  kind: "file" | "dir";
}

export interface SearchResult {
  path: string;
  matches: number;
}

export interface CommandOutput {
  stdout: string;
  stderr: string;
  exit_code: number;
}

const DEFAULT_API_BASE = "/api";
const DEFAULT_WS_PATH = "/ws";

function buildApiUrl(path: string, baseUrl = DEFAULT_API_BASE): string {
  return `${baseUrl}${path}`;
}

async function requestJson<T>(
  path: string,
  options: RequestInit = {},
  baseUrl = DEFAULT_API_BASE,
): Promise<T> {
  const res = await fetch(buildApiUrl(path, baseUrl), {
    headers: {
      "Content-Type": "application/json",
      ...(options.headers ?? {}),
    },
    ...options,
  });

  if (!res.ok) {
    const text = await res.text();
    throw new Error(text || `Request failed (${res.status})`);
  }

  return res.json() as Promise<T>;
}

export const api = {
  getCurrentProject(baseUrl?: string) {
    return requestJson<string | null>("/project", {}, baseUrl);
  },
  openProject(path: string, baseUrl?: string) {
    return requestJson<string>(
      "/project",
      { method: "POST", body: JSON.stringify({ path }) },
      baseUrl,
    );
  },
  closeProject(baseUrl?: string) {
    return requestJson<boolean>("/project", { method: "DELETE" }, baseUrl);
  },
  getModelPreference(baseUrl?: string) {
    return requestJson<string | null>("/model", {}, baseUrl);
  },
  setModelPreference(model: string, baseUrl?: string) {
    return requestJson<boolean>(
      "/model",
      { method: "POST", body: JSON.stringify({ model }) },
      baseUrl,
    );
  },
  getOllamaModels(baseUrlParam?: string, baseUrl?: string) {
    const url = new URL(
      buildApiUrl("/ollama/models", baseUrl),
      window.location.origin,
    );
    if (baseUrlParam) {
      url.searchParams.set("base_url", baseUrlParam);
    }
    return requestJson<string[]>(url.pathname + url.search, {}, "");
  },
  getAnthropicApiKeyExists(baseUrl?: string) {
    return requestJson<boolean>("/anthropic/key/exists", {}, baseUrl);
  },
  setAnthropicApiKey(api_key: string, baseUrl?: string) {
    return requestJson<boolean>(
      "/anthropic/key",
      { method: "POST", body: JSON.stringify({ api_key }) },
      baseUrl,
    );
  },
  readFile(path: string, baseUrl?: string) {
    return requestJson<string>(
      "/fs/read",
      { method: "POST", body: JSON.stringify({ path }) },
      baseUrl,
    );
  },
  writeFile(path: string, content: string, baseUrl?: string) {
    return requestJson<boolean>(
      "/fs/write",
      { method: "POST", body: JSON.stringify({ path, content }) },
      baseUrl,
    );
  },
  listDirectory(path: string, baseUrl?: string) {
    return requestJson<FileEntry[]>(
      "/fs/list",
      { method: "POST", body: JSON.stringify({ path }) },
      baseUrl,
    );
  },
  searchFiles(query: string, baseUrl?: string) {
    return requestJson<SearchResult[]>(
      "/fs/search",
      { method: "POST", body: JSON.stringify({ query }) },
      baseUrl,
    );
  },
  execShell(command: string, args: string[], baseUrl?: string) {
    return requestJson<CommandOutput>(
      "/shell/exec",
      { method: "POST", body: JSON.stringify({ command, args }) },
      baseUrl,
    );
  },
  cancelChat(baseUrl?: string) {
    return requestJson<boolean>("/chat/cancel", { method: "POST" }, baseUrl);
  },
};

export class ChatWebSocket {
  private socket?: WebSocket;
  private onToken?: (content: string) => void;
  private onUpdate?: (messages: Message[]) => void;
  private onError?: (message: string) => void;

  connect(
    handlers: {
      onToken?: (content: string) => void;
      onUpdate?: (messages: Message[]) => void;
      onError?: (message: string) => void;
    },
    wsPath = DEFAULT_WS_PATH,
  ) {
    this.onToken = handlers.onToken;
    this.onUpdate = handlers.onUpdate;
    this.onError = handlers.onError;

    const protocol = window.location.protocol === "https:" ? "wss" : "ws";
    const wsUrl = `${protocol}://${window.location.host}${wsPath}`;
    this.socket = new WebSocket(wsUrl);

    this.socket.onmessage = (event) => {
      try {
        const data = JSON.parse(event.data) as WsResponse;
        if (data.type === "token") this.onToken?.(data.content);
        if (data.type === "update") this.onUpdate?.(data.messages);
        if (data.type === "error") this.onError?.(data.message);
      } catch (err) {
        this.onError?.(String(err));
      }
    };

    this.socket.onerror = () => {
      this.onError?.("WebSocket error");
    };
  }

  sendChat(messages: Message[], config: ProviderConfig) {
    this.send({ type: "chat", messages, config });
  }

  cancel() {
    this.send({ type: "cancel" });
  }

  close() {
    this.socket?.close();
  }

  private send(payload: WsRequest) {
    if (!this.socket || this.socket.readyState !== WebSocket.OPEN) {
      this.onError?.("WebSocket is not connected");
      return;
    }
    this.socket.send(JSON.stringify(payload));
  }
}
```
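As a small aside on `ChatWebSocket.connect` above, its WebSocket URL derivation can be isolated as a pure helper. This is a sketch, not part of the PR: `wsUrl` is a hypothetical function with the `window.location` fields passed in as parameters so the logic can run outside a browser.

```typescript
// Mirrors the logic in ChatWebSocket.connect: pick ws/wss from the page
// protocol and append the WebSocket path to the current host.
function wsUrl(locationProtocol: string, host: string, wsPath = "/ws"): string {
  const protocol = locationProtocol === "https:" ? "wss" : "ws";
  return `${protocol}://${host}${wsPath}`;
}
```

Deriving the URL from `window.location` means the same frontend bundle works whether the binary is reached directly over HTTP or behind a TLS-terminating proxy.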
906
frontend/src/components/Chat.tsx
Normal file
@@ -0,0 +1,906 @@
|
||||
import * as React from "react";
|
||||
import Markdown from "react-markdown";
|
||||
import { Prism as SyntaxHighlighter } from "react-syntax-highlighter";
|
||||
import { oneDark } from "react-syntax-highlighter/dist/esm/styles/prism";
|
||||
import { api, ChatWebSocket } from "../api/client";
|
||||
import type { Message, ProviderConfig, ToolCall } from "../types";
|
||||
|
||||
const { useCallback, useEffect, useRef, useState } = React;
|
||||
|
||||
interface ChatProps {
|
||||
projectPath: string;
|
||||
onCloseProject: () => void;
|
||||
}
|
||||
|
||||
export function Chat({ projectPath, onCloseProject }: ChatProps) {
|
||||
const [messages, setMessages] = useState<Message[]>([]);
|
||||
const [input, setInput] = useState("");
|
||||
const [loading, setLoading] = useState(false);
|
||||
const [model, setModel] = useState("llama3.1");
|
||||
const [enableTools, setEnableTools] = useState(true);
|
||||
const [availableModels, setAvailableModels] = useState<string[]>([]);
|
||||
const [claudeModels] = useState<string[]>([
|
||||
"claude-3-5-sonnet-20241022",
|
||||
"claude-3-5-haiku-20241022",
|
||||
]);
|
||||
const [streamingContent, setStreamingContent] = useState("");
|
||||
const [showApiKeyDialog, setShowApiKeyDialog] = useState(false);
|
||||
const [apiKeyInput, setApiKeyInput] = useState("");
|
||||
|
||||
const wsRef = useRef<ChatWebSocket | null>(null);
|
||||
const messagesEndRef = useRef<HTMLDivElement>(null);
|
||||
const inputRef = useRef<HTMLInputElement>(null);
|
||||
const scrollContainerRef = useRef<HTMLDivElement>(null);
|
||||
const shouldAutoScrollRef = useRef(true);
|
||||
const lastScrollTopRef = useRef(0);
|
||||
const userScrolledUpRef = useRef(false);
|
||||
const pendingMessageRef = useRef<string>("");
|
||||
|
||||
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);
|
||||
|
||||
const getContextWindowSize = (modelName: string): number => {
|
||||
if (modelName.startsWith("claude-")) return 200000;
|
||||
if (modelName.includes("llama3")) return 8192;
|
||||
if (modelName.includes("qwen2.5")) return 32768;
|
||||
if (modelName.includes("deepseek")) return 16384;
|
||||
return 8192;
|
||||
};
|
||||
|
||||
const calculateContextUsage = (): {
|
||||
used: number;
|
||||
total: number;
|
||||
percentage: number;
|
||||
} => {
|
||||
let totalTokens = 0;
|
||||
|
||||
totalTokens += 200;
|
||||
|
||||
for (const msg of messages) {
|
||||
totalTokens += estimateTokens(msg.content);
|
||||
if (msg.tool_calls) {
|
||||
totalTokens += estimateTokens(JSON.stringify(msg.tool_calls));
|
||||
}
|
||||
}
|
||||
|
||||
if (streamingContent) {
|
||||
totalTokens += estimateTokens(streamingContent);
|
||||
}
|
||||
|
||||
const contextWindow = getContextWindowSize(model);
|
||||
const percentage = Math.round((totalTokens / contextWindow) * 100);
|
||||
|
||||
return {
|
||||
used: totalTokens,
|
||||
total: contextWindow,
|
||||
percentage,
|
||||
};
|
||||
};
|
||||
|
||||
const contextUsage = calculateContextUsage();
|
||||
|
||||
const getContextEmoji = (percentage: number): string => {
|
||||
if (percentage >= 90) return "🔴";
|
||||
if (percentage >= 75) return "🟡";
|
||||
return "🟢";
|
||||
};

  useEffect(() => {
    api
      .getOllamaModels()
      .then(async (models) => {
        if (models.length > 0) {
          const sortedModels = models.sort((a, b) =>
            a.toLowerCase().localeCompare(b.toLowerCase()),
          );
          setAvailableModels(sortedModels);

          try {
            const savedModel = await api.getModelPreference();
            if (savedModel) {
              setModel(savedModel);
            } else if (sortedModels.length > 0) {
              setModel(sortedModels[0]);
            }
          } catch (e) {
            console.error(e);
          }
        }
      })
      .catch((err) => console.error(err));
  }, []);

  useEffect(() => {
    const ws = new ChatWebSocket();
    wsRef.current = ws;

    ws.connect({
      onToken: (content) => {
        setStreamingContent((prev: string) => prev + content);
      },
      onUpdate: (history) => {
        setMessages(history);
        setStreamingContent("");
        const last = history[history.length - 1];
        if (last?.role === "assistant" && !last.tool_calls) {
          setLoading(false);
        }
      },
      onError: (message) => {
        console.error("WebSocket error:", message);
        setLoading(false);
      },
    });

    return () => {
      ws.close();
      wsRef.current = null;
    };
  }, []);

  const scrollToBottom = useCallback(() => {
    const element = scrollContainerRef.current;
    if (element) {
      element.scrollTop = element.scrollHeight;
      lastScrollTopRef.current = element.scrollHeight;
    }
  }, []);

  const handleScroll = () => {
    const element = scrollContainerRef.current;
    if (!element) return;

    const currentScrollTop = element.scrollTop;
    const isAtBottom =
      element.scrollHeight - element.scrollTop - element.clientHeight < 5;

    if (currentScrollTop < lastScrollTopRef.current) {
      userScrolledUpRef.current = true;
      shouldAutoScrollRef.current = false;
    }

    if (isAtBottom) {
      userScrolledUpRef.current = false;
      shouldAutoScrollRef.current = true;
    }

    lastScrollTopRef.current = currentScrollTop;
  };

  const autoScrollKey = messages.length + streamingContent.length;

  useEffect(() => {
    if (
      autoScrollKey >= 0 &&
      shouldAutoScrollRef.current &&
      !userScrolledUpRef.current
    ) {
      scrollToBottom();
    }
  }, [autoScrollKey, scrollToBottom]);

  useEffect(() => {
    inputRef.current?.focus();
  }, []);

  const cancelGeneration = async () => {
    try {
      wsRef.current?.cancel();
      await api.cancelChat();

      if (streamingContent) {
        setMessages((prev: Message[]) => [
          ...prev,
          { role: "assistant", content: streamingContent },
        ]);
        setStreamingContent("");
      }

      setLoading(false);
    } catch (e) {
      console.error("Failed to cancel chat:", e);
    }
  };

  const sendMessage = async (messageOverride?: string) => {
    const messageToSend = messageOverride ?? input;
    if (!messageToSend.trim() || loading) return;

    if (model.startsWith("claude-")) {
      const hasKey = await api.getAnthropicApiKeyExists();
      if (!hasKey) {
        pendingMessageRef.current = messageToSend;
        setShowApiKeyDialog(true);
        return;
      }
    }

    const userMsg: Message = { role: "user", content: messageToSend };
    const newHistory = [...messages, userMsg];

    setMessages(newHistory);
    if (!messageOverride || messageOverride === input) {
      setInput("");
    }
    setLoading(true);
    setStreamingContent("");

    try {
      const config: ProviderConfig = {
        provider: model.startsWith("claude-") ? "anthropic" : "ollama",
        model,
        base_url: "http://localhost:11434",
        enable_tools: enableTools,
      };

      wsRef.current?.sendChat(newHistory, config);
    } catch (e) {
      console.error("Chat error:", e);
      const errorMessage = String(e);
      if (!errorMessage.includes("Chat cancelled by user")) {
        setMessages((prev: Message[]) => [
          ...prev,
          { role: "assistant", content: `**Error:** ${e}` },
        ]);
      }
      setLoading(false);
    }
  };

  const handleSaveApiKey = async () => {
    if (!apiKeyInput.trim()) return;

    try {
      await api.setAnthropicApiKey(apiKeyInput);
      setShowApiKeyDialog(false);
      setApiKeyInput("");

      const pendingMessage = pendingMessageRef.current;
      pendingMessageRef.current = "";

      if (pendingMessage.trim()) {
        sendMessage(pendingMessage);
      }
    } catch (e) {
      console.error("Failed to save API key:", e);
      alert(`Failed to save API key: ${e}`);
    }
  };

  const clearSession = async () => {
    const confirmed = window.confirm(
      "Are you sure? This will clear all messages and reset the conversation context.",
    );

    if (confirmed) {
      try {
        await api.cancelChat();
        wsRef.current?.cancel();
      } catch (e) {
        console.error("Failed to cancel chat:", e);
      }

      setMessages([]);
      setStreamingContent("");
      setLoading(false);
    }
  };

  return (
    <div
      className="chat-container"
      style={{
        display: "flex",
        flexDirection: "column",
        height: "100%",
        backgroundColor: "#171717",
        color: "#ececec",
      }}
    >
      <div
        style={{
          padding: "12px 24px",
          borderBottom: "1px solid #333",
          display: "flex",
          alignItems: "center",
          justifyContent: "space-between",
          background: "#171717",
          flexShrink: 0,
          fontSize: "0.9rem",
          color: "#ececec",
        }}
      >
        <div
          style={{
            display: "flex",
            alignItems: "center",
            gap: "12px",
            overflow: "hidden",
            flex: 1,
            marginRight: "20px",
          }}
        >
          <div
            title={projectPath}
            style={{
              whiteSpace: "nowrap",
              overflow: "hidden",
              textOverflow: "ellipsis",
              fontWeight: "500",
              color: "#aaa",
              direction: "rtl",
              textAlign: "left",
              fontFamily: "monospace",
              fontSize: "0.85em",
            }}
          >
            {projectPath}
          </div>
          <button
            type="button"
            onClick={onCloseProject}
            style={{
              background: "transparent",
              border: "none",
              cursor: "pointer",
              color: "#999",
              fontSize: "0.8em",
              padding: "4px 8px",
              borderRadius: "4px",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.background = "#333";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.background = "transparent";
            }}
            onFocus={(e) => {
              e.currentTarget.style.background = "#333";
            }}
            onBlur={(e) => {
              e.currentTarget.style.background = "transparent";
            }}
          >
            ✕
          </button>
        </div>

        <div style={{ display: "flex", alignItems: "center", gap: "16px" }}>
          <div
            style={{
              fontSize: "0.9em",
              color: "#ccc",
              whiteSpace: "nowrap",
            }}
            title={`Context: ${contextUsage.used.toLocaleString()} / ${contextUsage.total.toLocaleString()} tokens (${contextUsage.percentage}%)`}
          >
            {getContextEmoji(contextUsage.percentage)} {contextUsage.percentage}
            %
          </div>

          <button
            type="button"
            onClick={clearSession}
            style={{
              padding: "6px 12px",
              borderRadius: "99px",
              border: "none",
              fontSize: "0.85em",
              backgroundColor: "#2f2f2f",
              color: "#888",
              cursor: "pointer",
              outline: "none",
              transition: "all 0.2s",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
            onFocus={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onBlur={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
          >
            🔄 New Session
          </button>
          {availableModels.length > 0 || claudeModels.length > 0 ? (
            <select
              value={model}
              onChange={(e) => {
                const newModel = e.target.value;
                setModel(newModel);
                api.setModelPreference(newModel).catch(console.error);
              }}
              style={{
                padding: "6px 32px 6px 16px",
                borderRadius: "99px",
                border: "none",
                fontSize: "0.9em",
                backgroundColor: "#2f2f2f",
                color: "#ececec",
                cursor: "pointer",
                outline: "none",
                appearance: "none",
                WebkitAppearance: "none",
                backgroundImage: `url("data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22%23ececec%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E")`,
                backgroundRepeat: "no-repeat",
                backgroundPosition: "right 12px center",
                backgroundSize: "10px",
              }}
            >
              {claudeModels.length > 0 && (
                <optgroup label="Anthropic">
                  {claudeModels.map((m: string) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </optgroup>
              )}
              {availableModels.length > 0 && (
                <optgroup label="Ollama">
                  {availableModels.map((m: string) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </optgroup>
              )}
            </select>
          ) : (
            <input
              value={model}
              onChange={(e) => {
                const newModel = e.target.value;
                setModel(newModel);
                api.setModelPreference(newModel).catch(console.error);
              }}
              placeholder="Model"
              style={{
                padding: "6px 12px",
                borderRadius: "99px",
                border: "none",
                fontSize: "0.9em",
                background: "#2f2f2f",
                color: "#ececec",
                outline: "none",
              }}
            />
          )}
          <label
            style={{
              display: "flex",
              alignItems: "center",
              gap: "6px",
              cursor: "pointer",
              fontSize: "0.9em",
              color: "#aaa",
            }}
            title="Allow the Agent to read/write files"
          >
            <input
              type="checkbox"
              checked={enableTools}
              onChange={(e) => setEnableTools(e.target.checked)}
              style={{ accentColor: "#000" }}
            />
            <span>Tools</span>
          </label>
        </div>
      </div>

      <div
        ref={scrollContainerRef}
        onScroll={handleScroll}
        style={{
          flex: 1,
          overflowY: "auto",
          padding: "20px 0",
          display: "flex",
          flexDirection: "column",
          gap: "24px",
        }}
      >
        <div
          style={{
            maxWidth: "768px",
            margin: "0 auto",
            width: "100%",
            padding: "0 24px",
            display: "flex",
            flexDirection: "column",
            gap: "24px",
          }}
        >
          {messages.map((msg: Message, idx: number) => (
            <div
              key={`msg-${idx}-${msg.role}-${msg.content.substring(0, 20)}`}
              style={{
                display: "flex",
                flexDirection: "column",
                alignItems: msg.role === "user" ? "flex-end" : "flex-start",
              }}
            >
              <div
                style={{
                  maxWidth: "100%",
                  padding: msg.role === "user" ? "10px 16px" : "0",
                  borderRadius: msg.role === "user" ? "20px" : "0",
                  background:
                    msg.role === "user"
                      ? "#2f2f2f"
                      : msg.role === "tool"
                        ? "#222"
                        : "transparent",
                  color: "#ececec",
                  border: msg.role === "tool" ? "1px solid #333" : "none",
                  fontFamily: msg.role === "tool" ? "monospace" : "inherit",
                  fontSize: msg.role === "tool" ? "0.85em" : "1em",
                  fontWeight: "500",
                  whiteSpace: msg.role === "tool" ? "pre-wrap" : "normal",
                  lineHeight: "1.6",
                }}
              >
                {msg.role === "user" ? (
                  msg.content
                ) : msg.role === "tool" ? (
                  <details style={{ cursor: "pointer" }}>
                    <summary
                      style={{
                        color: "#aaa",
                        fontSize: "0.9em",
                        marginBottom: "8px",
                        listStyle: "none",
                        display: "flex",
                        alignItems: "center",
                        gap: "6px",
                      }}
                    >
                      <span style={{ fontSize: "0.8em" }}>▶</span>
                      <span>
                        Tool Output
                        {msg.tool_call_id && ` (${msg.tool_call_id})`}
                      </span>
                    </summary>
                    <pre
                      style={{
                        maxHeight: "300px",
                        overflow: "auto",
                        margin: 0,
                        padding: "8px",
                        background: "#1a1a1a",
                        borderRadius: "4px",
                        fontSize: "0.85em",
                        whiteSpace: "pre-wrap",
                        wordBreak: "break-word",
                      }}
                    >
                      {msg.content}
                    </pre>
                  </details>
                ) : (
                  <div className="markdown-body">
                    <Markdown
                      components={{
                        // biome-ignore lint/suspicious/noExplicitAny: react-markdown requires any for component props
                        code: ({ className, children, ...props }: any) => {
                          const match = /language-(\w+)/.exec(className || "");
                          const isInline = !className;
                          return !isInline && match ? (
                            <SyntaxHighlighter
                              // biome-ignore lint/suspicious/noExplicitAny: oneDark style types are incompatible
                              style={oneDark as any}
                              language={match[1]}
                              PreTag="div"
                            >
                              {String(children).replace(/\n$/, "")}
                            </SyntaxHighlighter>
                          ) : (
                            <code className={className} {...props}>
                              {children}
                            </code>
                          );
                        },
                      }}
                    >
                      {msg.content}
                    </Markdown>
                  </div>
                )}

                {msg.tool_calls && (
                  <div
                    style={{
                      marginTop: "12px",
                      fontSize: "0.85em",
                      color: "#aaa",
                      display: "flex",
                      flexDirection: "column",
                      gap: "8px",
                    }}
                  >
                    {msg.tool_calls.map((tc: ToolCall, i: number) => {
                      let argsSummary = "";
                      try {
                        const args = JSON.parse(tc.function.arguments);
                        const firstKey = Object.keys(args)[0];
                        if (firstKey && args[firstKey]) {
                          argsSummary = String(args[firstKey]);
                          if (argsSummary.length > 50) {
                            argsSummary = `${argsSummary.substring(0, 47)}...`;
                          }
                        }
                      } catch (_e) {
                        // ignore
                      }

                      return (
                        <div
                          key={`tool-${i}-${tc.function.name}`}
                          style={{
                            display: "flex",
                            alignItems: "center",
                            gap: "8px",
                            fontFamily: "monospace",
                          }}
                        >
                          <span style={{ color: "#888" }}>▶</span>
                          <span
                            style={{
                              background: "#333",
                              padding: "2px 6px",
                              borderRadius: "4px",
                            }}
                          >
                            {tc.function.name}
                            {argsSummary && `(${argsSummary})`}
                          </span>
                        </div>
                      );
                    })}
                  </div>
                )}
              </div>
            </div>
          ))}
          {loading && streamingContent && (
            <div
              style={{
                display: "flex",
                flexDirection: "column",
                alignItems: "flex-start",
              }}
            >
              <div
                style={{
                  maxWidth: "85%",
                  padding: "16px 20px",
                  borderRadius: "12px",
                  background: "#262626",
                  color: "#fff",
                  border: "1px solid #404040",
                  fontFamily: "system-ui, -apple-system, sans-serif",
                  fontSize: "0.95rem",
                  fontWeight: 400,
                  whiteSpace: "pre-wrap",
                  lineHeight: 1.6,
                }}
              >
                <Markdown
                  components={{
                    // biome-ignore lint/suspicious/noExplicitAny: react-markdown requires any for component props
                    code: ({ className, children, ...props }: any) => {
                      const match = /language-(\w+)/.exec(className || "");
                      const isInline = !className;
                      return !isInline && match ? (
                        <SyntaxHighlighter
                          // biome-ignore lint/suspicious/noExplicitAny: oneDark style types are incompatible
                          style={oneDark as any}
                          language={match[1]}
                          PreTag="div"
                        >
                          {String(children).replace(/\n$/, "")}
                        </SyntaxHighlighter>
                      ) : (
                        <code className={className} {...props}>
                          {children}
                        </code>
                      );
                    },
                  }}
                >
                  {streamingContent}
                </Markdown>
              </div>
            </div>
          )}
          {loading && !streamingContent && (
            <div
              style={{
                alignSelf: "flex-start",
                color: "#888",
                fontSize: "0.9em",
                marginTop: "10px",
              }}
            >
              <span className="pulse">Thinking...</span>
            </div>
          )}
          <div ref={messagesEndRef} />
        </div>
      </div>

      <div
        style={{
          padding: "24px",
          background: "#171717",
          display: "flex",
          justifyContent: "center",
        }}
      >
        <div
          style={{
            maxWidth: "768px",
            width: "100%",
            display: "flex",
            gap: "8px",
            alignItems: "center",
          }}
        >
          <input
            ref={inputRef}
            value={input}
            onChange={(e) => setInput(e.target.value)}
            onKeyDown={(e) => {
              if (e.key === "Enter") {
                sendMessage();
              }
            }}
            placeholder="Send a message..."
            style={{
              flex: 1,
              padding: "14px 20px",
              borderRadius: "24px",
              border: "1px solid #333",
              outline: "none",
              fontSize: "1rem",
              fontWeight: "500",
              background: "#2f2f2f",
              color: "#ececec",
              boxShadow: "0 2px 6px rgba(0,0,0,0.02)",
            }}
          />
          <button
            type="button"
            onClick={loading ? cancelGeneration : () => sendMessage()}
            disabled={!loading && !input.trim()}
            style={{
              background: "#ececec",
              color: "black",
              border: "none",
              borderRadius: "50%",
              width: "32px",
              height: "32px",
              display: "flex",
              alignItems: "center",
              justifyContent: "center",
              cursor: "pointer",
              opacity: !loading && !input.trim() ? 0.5 : 1,
              flexShrink: 0,
            }}
          >
            {loading ? "■" : "↑"}
          </button>
        </div>
      </div>

      {showApiKeyDialog && (
        <div
          style={{
            position: "fixed",
            top: 0,
            left: 0,
            right: 0,
            bottom: 0,
            backgroundColor: "rgba(0, 0, 0, 0.7)",
            display: "flex",
            alignItems: "center",
            justifyContent: "center",
            zIndex: 1000,
          }}
        >
          <div
            style={{
              backgroundColor: "#2f2f2f",
              padding: "32px",
              borderRadius: "12px",
              maxWidth: "500px",
              width: "90%",
              border: "1px solid #444",
            }}
          >
            <h2 style={{ marginTop: 0, color: "#ececec" }}>
              Enter Anthropic API Key
            </h2>
            <p
              style={{ color: "#aaa", fontSize: "0.9em", marginBottom: "20px" }}
            >
              To use Claude models, please enter your Anthropic API key. Your
              key will be stored server-side and reused across sessions.
            </p>
            <input
              type="password"
              value={apiKeyInput}
              onChange={(e) => setApiKeyInput(e.target.value)}
              onKeyDown={(e) => e.key === "Enter" && handleSaveApiKey()}
              placeholder="sk-ant-..."
              style={{
                width: "100%",
                padding: "12px",
                borderRadius: "8px",
                border: "1px solid #555",
                backgroundColor: "#1a1a1a",
                color: "#ececec",
                fontSize: "1em",
                marginBottom: "20px",
                outline: "none",
              }}
            />
            <div
              style={{
                display: "flex",
                gap: "12px",
                justifyContent: "flex-end",
              }}
            >
              <button
                type="button"
                onClick={() => {
                  setShowApiKeyDialog(false);
                  setApiKeyInput("");
                  pendingMessageRef.current = "";
                }}
                style={{
                  padding: "10px 20px",
                  borderRadius: "8px",
                  border: "1px solid #555",
                  backgroundColor: "transparent",
                  color: "#aaa",
                  cursor: "pointer",
                  fontSize: "0.9em",
                }}
              >
                Cancel
              </button>
              <button
                type="button"
                onClick={handleSaveApiKey}
                disabled={!apiKeyInput.trim()}
                style={{
                  padding: "10px 20px",
                  borderRadius: "8px",
                  border: "none",
                  backgroundColor: apiKeyInput.trim() ? "#ececec" : "#555",
                  color: apiKeyInput.trim() ? "#000" : "#888",
                  cursor: apiKeyInput.trim() ? "pointer" : "not-allowed",
                  fontSize: "0.9em",
                }}
              >
                Save Key
              </button>
            </div>
          </div>
        </div>
      )}
    </div>
  );
}
@@ -1,4 +1,4 @@
import React from "react";
import * as React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

@@ -1,19 +1,3 @@
export interface FileEntry {
  name: string;
  kind: "file" | "dir";
}

export interface SearchResult {
  path: string;
  matches: number;
}

export interface CommandOutput {
  stdout: string;
  stderr: string;
  exit_code: number;
}

export type Role = "system" | "user" | "assistant" | "tool";

export interface ToolCall {
@@ -38,3 +22,45 @@ export interface ProviderConfig {
  base_url?: string;
  enable_tools?: boolean;
}

export interface FileEntry {
  name: string;
  kind: "file" | "dir";
}

export interface SearchResult {
  path: string;
  matches: number;
}

export interface CommandOutput {
  stdout: string;
  stderr: string;
  exit_code: number;
}

export type WsRequest =
  | {
      type: "chat";
      messages: Message[];
      config: ProviderConfig;
    }
  | {
      type: "cancel";
    };

export type WsResponse =
  | { type: "token"; content: string }
  | { type: "update"; messages: Message[] }
  | { type: "error"; message: string };

// Re-export API client types for convenience
export type {
  Message as ApiMessage,
  ProviderConfig as ApiProviderConfig,
  FileEntry as ApiFileEntry,
  SearchResult as ApiSearchResult,
  CommandOutput as ApiCommandOutput,
  WsRequest as ApiWsRequest,
  WsResponse as ApiWsResponse,
};
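The `WsRequest`/`WsResponse` unions above define the WebSocket wire protocol between the frontend and the Rust server. A minimal sketch of how a client could frame a chat request over it; the `buildChatRequest` helper is illustrative, not the actual `ChatWebSocket` implementation, and the type definitions are trimmed copies of the ones in the diff:

```typescript
// Trimmed copies of the protocol types from the diff above.
type Message = { role: string; content: string };
type ProviderConfig = { provider: string; model: string; enable_tools?: boolean };
type WsRequest =
  | { type: "chat"; messages: Message[]; config: ProviderConfig }
  | { type: "cancel" };

// Hypothetical helper: serialize one chat turn into a WebSocket text frame.
function buildChatRequest(messages: Message[], config: ProviderConfig): string {
  const req: WsRequest = { type: "chat", messages, config };
  return JSON.stringify(req);
}

const frame = buildChatRequest(
  [{ role: "user", content: "hello" }],
  { provider: "ollama", model: "llama3", enable_tools: true },
);
```

The tagged-union `type` field lets the server dispatch on a single discriminant, which is what the Rust side deserializes with serde.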
10 frontend/src/vite-env.d.ts (vendored, new file)
@@ -0,0 +1,10 @@
/// <reference types="vite/client" />

declare module "react" {
  interface InputHTMLAttributes<T> {
    webkitdirectory?: string;
    directory?: string;
  }
}

export {};
20 frontend/vite.config.ts (new file)
@@ -0,0 +1,20 @@
import react from "@vitejs/plugin-react";
import { defineConfig } from "vite";

// https://vite.dev/config/
export default defineConfig(() => ({
  plugins: [react()],
  server: {
    proxy: {
      "/api": "http://localhost:3001",
      "/ws": {
        target: "ws://localhost:3001",
        ws: true,
      },
    },
  },
  build: {
    outDir: "dist",
    emptyOutDir: true,
  },
}));
3022 server/Cargo.lock (generated, new file)
25 server/Cargo.toml (new file)
@@ -0,0 +1,25 @@
[package]
name = "living-spec-standalone-server"
version = "0.1.0"
edition = "2024"
build = "build.rs"

[dependencies]
poem = { version = "3", features = ["websocket"] }
poem-openapi = { version = "5", features = ["swagger-ui"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros", "sync"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.13.2", features = ["json", "stream"] }
futures = "0.3"
uuid = { version = "1.20.0", features = ["v4", "serde"] }
chrono = { version = "0.4.43", features = ["serde"] }
async-trait = "0.1.89"
ignore = "0.4.25"
walkdir = "2.5.0"
eventsource-stream = "0.2.3"
rust-embed = "8"
mime_guess = "2"

[dev-dependencies]
tempfile = "3"
23 server/build.rs (new file)
@@ -0,0 +1,23 @@
use std::fs;
use std::path::Path;

fn main() {
    let dist_dir = Path::new("../frontend/dist");

    println!("cargo:rerun-if-changed=build.rs");

    if let Ok(entries) = fs::read_dir(dist_dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() {
                if let Ok(sub_entries) = fs::read_dir(&path) {
                    for sub_entry in sub_entries.flatten() {
                        println!("cargo:rerun-if-changed={}", sub_entry.path().display());
                    }
                }
            } else {
                println!("cargo:rerun-if-changed={}", path.display());
            }
        }
    }
}
378 server/src/commands/chat.rs (new file)
@@ -0,0 +1,378 @@
use crate::llm::prompts::SYSTEM_PROMPT;
use crate::llm::types::{Message, Role, ToolCall, ToolDefinition, ToolFunctionDefinition};
use crate::state::SessionState;
use crate::store::StoreOps;
use serde::Deserialize;
use serde_json::json;

const MAX_TURNS: usize = 30;
const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";

#[derive(Deserialize, Clone)]
pub struct ProviderConfig {
    pub provider: String,
    pub model: String,
    pub base_url: Option<String>,
    pub enable_tools: Option<bool>,
}

fn get_anthropic_api_key_exists_impl(store: &dyn StoreOps) -> bool {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => value.as_str().map(|k| !k.is_empty()).unwrap_or(false),
        None => false,
    }
}

fn set_anthropic_api_key_impl(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
    store.set(KEY_ANTHROPIC_API_KEY, json!(api_key));
    store.save()?;

    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(retrieved) = value.as_str() {
                if retrieved != api_key {
                    return Err("Retrieved key does not match saved key".to_string());
                }
            } else {
                return Err("Stored value is not a string".to_string());
            }
        }
        None => {
            return Err("API key was saved but cannot be retrieved".to_string());
        }
    }

    Ok(())
}

fn get_anthropic_api_key_impl(store: &dyn StoreOps) -> Result<String, String> {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(key) = value.as_str() {
                if key.is_empty() {
                    Err("Anthropic API key is empty. Please set your API key.".to_string())
                } else {
                    Ok(key.to_string())
                }
            } else {
                Err("Stored API key is not a string".to_string())
            }
        }
        None => Err("Anthropic API key not found. Please set your API key.".to_string()),
    }
}

fn parse_tool_arguments(args_str: &str) -> Result<serde_json::Value, String> {
    serde_json::from_str(args_str).map_err(|e| format!("Error parsing arguments: {e}"))
}

pub fn get_tool_definitions() -> Vec<ToolDefinition> {
    vec![
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "read_file".to_string(),
                description: "Reads the complete content of a file from the project. Use this to understand existing code before making changes.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to the file from project root" }
                    },
                    "required": ["path"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "write_file".to_string(),
                description: "Creates or completely overwrites a file with new content. YOU MUST USE THIS to implement code changes - do not suggest code to the user. The content parameter must contain the COMPLETE file including all imports, functions, and unchanged code.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to the file from project root" },
                        "content": { "type": "string", "description": "The complete file content to write (not a diff or partial code)" }
                    },
                    "required": ["path", "content"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "list_directory".to_string(),
                description: "Lists all files and directories at a given path. Use this to explore the project structure.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to list (use '.' for project root)" }
                    },
                    "required": ["path"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "search_files".to_string(),
                description: "Searches for text patterns across all files in the project. Use this to find functions, variables, or code patterns when you don't know which file they're in.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "query": { "type": "string", "description": "The text pattern to search for across all files" }
                    },
                    "required": ["query"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "exec_shell".to_string(),
                description: "Executes a shell command in the project root directory. Use this to run tests, build commands, git operations, or any command-line tool. Examples: cargo check, npm test, git status.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "command": {
                            "type": "string",
                            "description": "The command binary to execute (e.g., 'git', 'cargo', 'npm', 'ls')"
                        },
                        "args": {
                            "type": "array",
                            "items": { "type": "string" },
                            "description": "Array of arguments to pass to the command (e.g., ['status'] for git status)"
                        }
                    },
                    "required": ["command", "args"]
                }),
            },
        },
    ]
}
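On the wire, a model invoking one of these tool definitions produces a `tool_calls` entry whose `arguments` field is a JSON string, not an object. A sketch of how the frontend's `argsSummary` logic consumes one; the payload values are illustrative, and the object shape mirrors the `ToolCall` type used in App.tsx:

```typescript
// Illustrative tool-call payload matching the exec_shell definition above.
const toolCall = {
  function: {
    name: "exec_shell",
    // The arguments arrive as a JSON *string* and must be parsed first.
    arguments: JSON.stringify({ command: "git", args: ["status"] }),
  },
};

// Client-side summary, as in the argsSummary logic in App.tsx:
// take the first argument key's value as a short label.
const args = JSON.parse(toolCall.function.arguments);
const firstKey = Object.keys(args)[0];
const summary = `${toolCall.function.name}(${String(args[firstKey])})`;
```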

pub async fn get_ollama_models(base_url: Option<String>) -> Result<Vec<String>, String> {
    use crate::llm::providers::ollama::OllamaProvider;
    let url = base_url.unwrap_or_else(|| "http://localhost:11434".to_string());
    OllamaProvider::get_models(&url).await
}

pub fn get_anthropic_api_key_exists(store: &dyn StoreOps) -> Result<bool, String> {
    Ok(get_anthropic_api_key_exists_impl(store))
}

pub fn set_anthropic_api_key(store: &dyn StoreOps, api_key: String) -> Result<(), String> {
    set_anthropic_api_key_impl(store, &api_key)
}

pub async fn chat<F, U>(
    messages: Vec<Message>,
    config: ProviderConfig,
    state: &SessionState,
    store: &dyn StoreOps,
    mut on_update: F,
    mut on_token: U,
) -> Result<Vec<Message>, String>
where
    F: FnMut(&[Message]) + Send,
    U: FnMut(&str) + Send,
{
    use crate::llm::providers::anthropic::AnthropicProvider;
    use crate::llm::providers::ollama::OllamaProvider;

    let _ = state.cancel_tx.send(false);
    let mut cancel_rx = state.cancel_rx.clone();
    cancel_rx.borrow_and_update();

    let base_url = config
        .base_url
        .clone()
        .unwrap_or_else(|| "http://localhost:11434".to_string());

    let is_claude = config.model.starts_with("claude-");

    if !is_claude && config.provider.as_str() != "ollama" {
        return Err(format!("Unsupported provider: {}", config.provider));
    }

    let tool_defs = get_tool_definitions();
    let tools = if config.enable_tools.unwrap_or(true) {
        tool_defs.as_slice()
    } else {
        &[]
    };

    let mut current_history = messages.clone();

    current_history.insert(
        0,
        Message {
            role: Role::System,
            content: SYSTEM_PROMPT.to_string(),
            tool_calls: None,
            tool_call_id: None,
        },
    );

    current_history.insert(
        1,
        Message {
            role: Role::System,
            content: "REMINDER: Distinguish between showing examples (use code blocks in chat) vs implementing changes (use write_file tool). Keywords like 'show me', 'example', 'how does' = chat response. Keywords like 'create', 'add', 'implement', 'fix' = use tools.".to_string(),
            tool_calls: None,
            tool_call_id: None,
        },
    );

    let mut new_messages: Vec<Message> = Vec::new();
    let mut turn_count = 0;

    loop {
        if *cancel_rx.borrow() {
            return Err("Chat cancelled by user".to_string());
        }

        if turn_count >= MAX_TURNS {
            return Err("Max conversation turns reached.".to_string());
        }
        turn_count += 1;

        let response = if is_claude {
            let api_key = get_anthropic_api_key_impl(store)?;
            let anthropic_provider = AnthropicProvider::new(api_key);
            anthropic_provider
                .chat_stream(
                    &config.model,
                    &current_history,
                    tools,
                    &mut cancel_rx,
                    |token| on_token(token),
                )
                .await
                .map_err(|e| format!("Anthropic Error: {e}"))?
        } else {
            let ollama_provider = OllamaProvider::new(base_url.clone());
            ollama_provider
                .chat_stream(
                    &config.model,
                    &current_history,
                    tools,
                    &mut cancel_rx,
                    |token| on_token(token),
                )
                .await
                .map_err(|e| format!("Ollama Error: {e}"))?
        };

        if let Some(tool_calls) = response.tool_calls {
|
||||
let assistant_msg = Message {
|
||||
role: Role::Assistant,
|
||||
content: response.content.unwrap_or_default(),
|
||||
tool_calls: Some(tool_calls.clone()),
|
||||
tool_call_id: None,
|
||||
};
|
||||
|
||||
current_history.push(assistant_msg.clone());
|
||||
new_messages.push(assistant_msg);
|
||||
on_update(¤t_history[2..]);
|
||||
|
||||
for call in tool_calls {
|
||||
if *cancel_rx.borrow() {
|
||||
return Err("Chat cancelled before tool execution".to_string());
|
||||
}
|
||||
|
||||
let output = execute_tool(&call, state).await;
|
||||
|
||||
let tool_msg = Message {
|
||||
role: Role::Tool,
|
||||
content: output,
|
||||
tool_calls: None,
|
||||
tool_call_id: call.id,
|
||||
};
|
||||
|
||||
current_history.push(tool_msg.clone());
|
||||
new_messages.push(tool_msg);
|
||||
on_update(¤t_history[2..]);
|
||||
}
|
||||
} else {
|
||||
let assistant_msg = Message {
|
||||
role: Role::Assistant,
|
||||
content: response.content.unwrap_or_default(),
|
||||
tool_calls: None,
|
||||
tool_call_id: None,
|
||||
};
|
||||
|
||||
new_messages.push(assistant_msg.clone());
|
||||
current_history.push(assistant_msg);
|
||||
on_update(¤t_history[2..]);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(new_messages)
|
||||
}
|
||||
|
||||
async fn execute_tool(call: &ToolCall, state: &SessionState) -> String {
|
||||
use crate::commands::{fs, search, shell};
|
||||
|
||||
let name = call.function.name.as_str();
|
||||
let args: serde_json::Value = match parse_tool_arguments(&call.function.arguments) {
|
||||
Ok(v) => v,
|
||||
Err(e) => return e,
|
||||
};
|
||||
|
||||
match name {
|
||||
"read_file" => {
|
||||
let path = args["path"].as_str().unwrap_or("").to_string();
|
||||
match fs::read_file(path, state).await {
|
||||
Ok(content) => content,
|
||||
Err(e) => format!("Error: {e}"),
|
||||
}
|
||||
}
|
||||
"write_file" => {
|
||||
let path = args["path"].as_str().unwrap_or("").to_string();
|
||||
let content = args["content"].as_str().unwrap_or("").to_string();
|
||||
match fs::write_file(path, content, state).await {
|
||||
Ok(()) => "File written successfully.".to_string(),
|
||||
Err(e) => format!("Error: {e}"),
|
||||
}
|
||||
}
|
||||
"list_directory" => {
|
||||
let path = args["path"].as_str().unwrap_or("").to_string();
|
||||
match fs::list_directory(path, state).await {
|
||||
Ok(entries) => serde_json::to_string(&entries).unwrap_or_default(),
|
||||
Err(e) => format!("Error: {e}"),
|
||||
}
|
||||
}
|
||||
"search_files" => {
|
||||
let query = args["query"].as_str().unwrap_or("").to_string();
|
||||
match search::search_files(query, state).await {
|
||||
Ok(results) => serde_json::to_string(&results).unwrap_or_default(),
|
||||
Err(e) => format!("Error: {e}"),
|
||||
}
|
||||
}
|
||||
"exec_shell" => {
|
||||
let command = args["command"].as_str().unwrap_or("").to_string();
|
||||
let args_vec: Vec<String> = args["args"]
|
||||
.as_array()
|
||||
.map(|arr| {
|
||||
arr.iter()
|
||||
.map(|v| v.as_str().unwrap_or("").to_string())
|
||||
.collect()
|
||||
})
|
||||
.unwrap_or_default();
|
||||
|
||||
match shell::exec_shell(command, args_vec, state).await {
|
||||
Ok(output) => serde_json::to_string(&output).unwrap_or_default(),
|
||||
Err(e) => format!("Error: {e}"),
|
||||
}
|
||||
}
|
||||
_ => format!("Unknown tool: {name}"),
|
||||
}
|
||||
}
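A notable convention in the tool dispatcher above: every arm resolves to a `String`, with failures folded into `"Error: …"` text, so the model receives failures as ordinary tool output instead of the whole request aborting. A minimal standalone sketch of that shape (the tool names here are hypothetical, not the real tool set):

```rust
// Errors become tool output: every arm yields a String, including failures.
fn dispatch(name: &str, arg: &str) -> String {
    match name {
        "echo" => arg.to_string(),
        "parse_number" => match arg.parse::<i64>() {
            Ok(n) => format!("{}", n * 2),
            Err(e) => format!("Error: {e}"),
        },
        _ => format!("Unknown tool: {name}"),
    }
}

fn main() {
    assert_eq!(dispatch("echo", "hi"), "hi");
    assert_eq!(dispatch("parse_number", "21"), "42");
    assert!(dispatch("parse_number", "x").starts_with("Error:"));
    assert_eq!(dispatch("frobnicate", ""), "Unknown tool: frobnicate");
}
```

This keeps the agent loop alive on bad tool input: the model can read the error text and retry with corrected arguments.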

pub fn cancel_chat(state: &SessionState) -> Result<(), String> {
    state.cancel_tx.send(true).map_err(|e| e.to_string())?;
    Ok(())
}
191
server/src/commands/fs.rs
Normal file
@@ -0,0 +1,191 @@
use crate::state::SessionState;
use crate::store::StoreOps;
use serde::Serialize;
use serde_json::json;
use std::fs;
use std::path::PathBuf;

const KEY_LAST_PROJECT: &str = "last_project_path";
const KEY_SELECTED_MODEL: &str = "selected_model";

/// Resolves a relative path against the active project root (pure function for testing).
/// Returns an error if the path attempts traversal (..).
fn resolve_path_impl(root: PathBuf, relative_path: &str) -> Result<PathBuf, String> {
    if relative_path.contains("..") {
        return Err("Security Violation: Directory traversal ('..') is not allowed.".to_string());
    }

    Ok(root.join(relative_path))
}
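The traversal guard above rejects any relative path containing `".."` before joining it onto the project root. A standalone sketch of that check, which also shows its deliberate coarseness (a substring test refuses even benign names like `a..b.txt`):

```rust
use std::path::PathBuf;

// Standalone copy of the guard in `resolve_path_impl`: any ".." anywhere
// in the relative path is rejected before joining onto the root.
fn resolve(root: &str, relative: &str) -> Result<PathBuf, String> {
    if relative.contains("..") {
        return Err("Security Violation: Directory traversal ('..') is not allowed.".to_string());
    }
    Ok(PathBuf::from(root).join(relative))
}

fn main() {
    assert_eq!(
        resolve("/project", "src/lib.rs").unwrap(),
        PathBuf::from("/project/src/lib.rs")
    );
    assert!(resolve("/project", "../etc/passwd").is_err());
    // The substring check is deliberately coarse: even "a..b" is refused.
    assert!(resolve("/project", "notes/a..b.txt").is_err());
}
```

Trading a few false positives for a simple, hard-to-get-wrong check is a reasonable stance for an agent that writes files autonomously.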

/// Resolves a relative path against the active project root.
/// Returns an error if no project is open or if the path attempts traversal (..).
fn resolve_path(state: &SessionState, relative_path: &str) -> Result<PathBuf, String> {
    let root = state.get_project_root()?;
    resolve_path_impl(root, relative_path)
}

/// Validate that a path exists and is a directory (pure function for testing)
async fn validate_project_path(path: PathBuf) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        if !path.exists() {
            return Err(format!("Path does not exist: {}", path.display()));
        }
        if !path.is_dir() {
            return Err(format!("Path is not a directory: {}", path.display()));
        }
        Ok(())
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn open_project(
    path: String,
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<String, String> {
    let p = PathBuf::from(&path);

    validate_project_path(p.clone()).await?;

    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = Some(p);
    }

    store.set(KEY_LAST_PROJECT, json!(path));
    store.save()?;

    Ok(path)
}

pub fn close_project(state: &SessionState, store: &dyn StoreOps) -> Result<(), String> {
    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = None;
    }

    store.delete(KEY_LAST_PROJECT);
    store.save()?;

    Ok(())
}

pub fn get_current_project(
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<Option<String>, String> {
    {
        let root = state.project_root.lock().map_err(|e| e.to_string())?;
        if let Some(path) = &*root {
            return Ok(Some(path.to_string_lossy().to_string()));
        }
    }

    if let Some(path_str) = store
        .get(KEY_LAST_PROJECT)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        let p = PathBuf::from(path_str);
        if p.exists() && p.is_dir() {
            let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
            *root = Some(p);
            return Ok(Some(path_str.to_string()));
        }
    }

    Ok(None)
}

pub fn get_model_preference(store: &dyn StoreOps) -> Result<Option<String>, String> {
    if let Some(model) = store
        .get(KEY_SELECTED_MODEL)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        return Ok(Some(model.to_string()));
    }
    Ok(None)
}

pub fn set_model_preference(model: String, store: &dyn StoreOps) -> Result<(), String> {
    store.set(KEY_SELECTED_MODEL, json!(model));
    store.save()?;
    Ok(())
}

async fn read_file_impl(full_path: PathBuf) -> Result<String, String> {
    tokio::task::spawn_blocking(move || {
        fs::read_to_string(&full_path).map_err(|e| format!("Failed to read file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn read_file(path: String, state: &SessionState) -> Result<String, String> {
    let full_path = resolve_path(state, &path)?;
    read_file_impl(full_path).await
}

async fn write_file_impl(full_path: PathBuf, content: String) -> Result<(), String> {
    tokio::task::spawn_blocking(move || {
        if let Some(parent) = full_path.parent() {
            fs::create_dir_all(parent)
                .map_err(|e| format!("Failed to create directories: {}", e))?;
        }

        fs::write(&full_path, content).map_err(|e| format!("Failed to write file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn write_file(path: String, content: String, state: &SessionState) -> Result<(), String> {
    let full_path = resolve_path(state, &path)?;
    write_file_impl(full_path, content).await
}

#[derive(Serialize, Debug, poem_openapi::Object)]
pub struct FileEntry {
    pub name: String,
    pub kind: String,
}

async fn list_directory_impl(full_path: PathBuf) -> Result<Vec<FileEntry>, String> {
    tokio::task::spawn_blocking(move || {
        let entries = fs::read_dir(&full_path).map_err(|e| format!("Failed to read dir: {}", e))?;

        let mut result = Vec::new();
        for entry in entries {
            let entry = entry.map_err(|e| e.to_string())?;
            let ft = entry.file_type().map_err(|e| e.to_string())?;
            let name = entry.file_name().to_string_lossy().to_string();

            result.push(FileEntry {
                name,
                kind: if ft.is_dir() {
                    "dir".to_string()
                } else {
                    "file".to_string()
                },
            });
        }

        result.sort_by(|a, b| match (a.kind.as_str(), b.kind.as_str()) {
            ("dir", "file") => std::cmp::Ordering::Less,
            ("file", "dir") => std::cmp::Ordering::Greater,
            _ => a.name.cmp(&b.name),
        });

        Ok(result)
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

pub async fn list_directory(path: String, state: &SessionState) -> Result<Vec<FileEntry>, String> {
    let full_path = resolve_path(state, &path)?;
    list_directory_impl(full_path).await
}
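The directory listing in `list_directory_impl` uses a two-level sort: directories come before files, and entries sort alphabetically within each group. A standalone sketch of that comparator, using `(name, kind)` tuples in place of `FileEntry`:

```rust
use std::cmp::Ordering;

// Standalone sketch of the ordering used by `list_directory_impl`:
// directories sort before files; names sort alphabetically within each group.
fn order(a: (&str, &str), b: (&str, &str)) -> Ordering {
    // Each tuple is (name, kind) with kind "dir" or "file".
    match (a.1, b.1) {
        ("dir", "file") => Ordering::Less,
        ("file", "dir") => Ordering::Greater,
        _ => a.0.cmp(b.0),
    }
}

fn main() {
    let mut entries = vec![
        ("zeta.rs", "file"),
        ("src", "dir"),
        ("alpha.rs", "file"),
        ("tests", "dir"),
    ];
    entries.sort_by(|a, b| order(*a, *b));
    assert_eq!(
        entries,
        vec![("src", "dir"), ("tests", "dir"), ("alpha.rs", "file"), ("zeta.rs", "file")]
    );
}
```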
65
server/src/commands/search.rs
Normal file
@@ -0,0 +1,65 @@
use crate::state::SessionState;
use ignore::WalkBuilder;
use serde::Serialize;
use std::fs;
use std::path::PathBuf;

#[derive(Serialize, Debug, poem_openapi::Object)]
pub struct SearchResult {
    pub path: String,
    pub matches: usize,
}

fn get_project_root(state: &SessionState) -> Result<PathBuf, String> {
    state.get_project_root()
}

pub async fn search_files(
    query: String,
    state: &SessionState,
) -> Result<Vec<SearchResult>, String> {
    let root = get_project_root(state)?;
    search_files_impl(query, root).await
}

pub async fn search_files_impl(query: String, root: PathBuf) -> Result<Vec<SearchResult>, String> {
    let root_clone = root.clone();

    let results = tokio::task::spawn_blocking(move || {
        let mut matches = Vec::new();
        let walker = WalkBuilder::new(&root_clone).git_ignore(true).build();

        for result in walker {
            match result {
                Ok(entry) => {
                    if !entry.file_type().map(|ft| ft.is_file()).unwrap_or(false) {
                        continue;
                    }

                    let path = entry.path();
                    if let Ok(content) = fs::read_to_string(path)
                        && content.contains(&query)
                    {
                        let relative = path
                            .strip_prefix(&root_clone)
                            .unwrap_or(path)
                            .to_string_lossy()
                            .to_string();

                        matches.push(SearchResult {
                            path: relative,
                            // Per-file occurrence counting is not implemented;
                            // each matching file reports 1.
                            matches: 1,
                        });
                    }
                }
                Err(err) => eprintln!("Error walking dir: {}", err),
            }
        }

        matches
    })
    .await
    .map_err(|e| format!("Search task failed: {e}"))?;

    Ok(results)
}
58
server/src/commands/shell.rs
Normal file
@@ -0,0 +1,58 @@
use crate::state::SessionState;
use serde::Serialize;
use std::path::PathBuf;
use std::process::Command;

/// Helper to get the root path (cloned) without joining
fn get_project_root(state: &SessionState) -> Result<PathBuf, String> {
    state.get_project_root()
}

#[derive(Serialize, Debug, poem_openapi::Object)]
pub struct CommandOutput {
    pub stdout: String,
    pub stderr: String,
    pub exit_code: i32,
}

/// Execute shell command logic (pure function for testing)
async fn exec_shell_impl(
    command: String,
    args: Vec<String>,
    root: PathBuf,
) -> Result<CommandOutput, String> {
    // Security allowlist
    let allowed_commands = [
        "git", "cargo", "npm", "yarn", "pnpm", "node", "bun", "ls", "find", "grep", "mkdir", "rm",
        "mv", "cp", "touch", "rustc", "rustfmt",
    ];

    if !allowed_commands.contains(&command.as_str()) {
        return Err(format!("Command '{}' is not in the allowlist.", command));
    }

    let output = tokio::task::spawn_blocking(move || {
        Command::new(&command)
            .args(&args)
            .current_dir(root)
            .output()
    })
    .await
    .map_err(|e| format!("Task join error: {}", e))?
    .map_err(|e| format!("Failed to execute command: {}", e))?;

    Ok(CommandOutput {
        stdout: String::from_utf8_lossy(&output.stdout).to_string(),
        stderr: String::from_utf8_lossy(&output.stderr).to_string(),
        exit_code: output.status.code().unwrap_or(-1),
    })
}
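The allowlist gate above requires the command binary to match an entry exactly before anything is spawned. A standalone sketch of that check, including a consequence worth noting: because matching is on the bare command string, absolute paths to an allowed binary are also refused:

```rust
// Standalone sketch of the allowlist gate in `exec_shell_impl`.
fn is_allowed(command: &str) -> bool {
    const ALLOWED: [&str; 17] = [
        "git", "cargo", "npm", "yarn", "pnpm", "node", "bun", "ls", "find", "grep", "mkdir",
        "rm", "mv", "cp", "touch", "rustc", "rustfmt",
    ];
    ALLOWED.contains(&command)
}

fn main() {
    assert!(is_allowed("git"));
    assert!(!is_allowed("curl"));
    // Exact-string matching: a path to an allowed binary is still refused,
    // so resolution is left to the process's PATH.
    assert!(!is_allowed("/usr/bin/git"));
}
```

Note the gate only constrains the binary, not its arguments; commands like `rm` on the list can still be destructive within the project root.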

pub async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: &SessionState,
) -> Result<CommandOutput, String> {
    let root = get_project_root(state)?;
    exec_shell_impl(command, args, root).await
}
@@ -89,148 +89,3 @@ REMEMBER:

Remember: You are an autonomous agent that can both explain concepts and take action. Choose appropriately based on the user's request.
"#;

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_system_prompt_not_empty() {
        assert!(SYSTEM_PROMPT.len() > 100);
    }

    #[test]
    fn test_system_prompt_contains_key_instructions() {
        // Check for critical instruction sections
        assert!(SYSTEM_PROMPT.contains("CRITICAL INSTRUCTIONS"));
        assert!(SYSTEM_PROMPT.contains("YOUR CAPABILITIES"));
        assert!(SYSTEM_PROMPT.contains("YOUR WORKFLOW"));
        assert!(SYSTEM_PROMPT.contains("CRITICAL RULES"));
    }

    #[test]
    fn test_system_prompt_mentions_all_tools() {
        // Verify all tools are mentioned
        assert!(SYSTEM_PROMPT.contains("read_file"));
        assert!(SYSTEM_PROMPT.contains("write_file"));
        assert!(SYSTEM_PROMPT.contains("list_directory"));
        assert!(SYSTEM_PROMPT.contains("search_files"));
        assert!(SYSTEM_PROMPT.contains("exec_shell"));
    }

    #[test]
    fn test_system_prompt_has_example_section() {
        assert!(SYSTEM_PROMPT.contains("EXAMPLES OF CORRECT BEHAVIOR"));
        assert!(SYSTEM_PROMPT.contains("EXAMPLES OF INCORRECT BEHAVIOR"));
    }

    #[test]
    fn test_system_prompt_emphasizes_distinction() {
        // Check that the prompt emphasizes the difference between examples and implementation
        assert!(SYSTEM_PROMPT.contains("Distinguish Between Examples and Implementation"));
        assert!(SYSTEM_PROMPT.contains("Teaching vs Implementing"));
    }

    #[test]
    fn test_system_prompt_has_security_warning() {
        // Check for security-related instructions
        assert!(SYSTEM_PROMPT.contains("Read Before Write"));
        assert!(SYSTEM_PROMPT.contains("Complete Files Only"));
    }

    #[test]
    fn test_system_prompt_mentions_write_file_overwrites() {
        // Important warning about write_file behavior
        assert!(SYSTEM_PROMPT.contains("OVERWRITES"));
        assert!(SYSTEM_PROMPT.contains("COMPLETE file content"));
    }

    #[test]
    fn test_system_prompt_has_correct_examples() {
        // Check for example scenarios
        assert!(SYSTEM_PROMPT.contains("Example 1"));
        assert!(SYSTEM_PROMPT.contains("Example 2"));
        assert!(SYSTEM_PROMPT.contains("Example 3"));
        assert!(SYSTEM_PROMPT.contains("Example 4"));
    }

    #[test]
    fn test_system_prompt_discourages_placeholders() {
        // Check that it warns against using placeholders
        assert!(SYSTEM_PROMPT.contains("placeholders"));
        assert!(SYSTEM_PROMPT.contains("rest of code"));
    }

    #[test]
    fn test_system_prompt_encourages_autonomy() {
        // Check that it encourages the agent to take initiative
        assert!(SYSTEM_PROMPT.contains("Take Initiative"));
        assert!(SYSTEM_PROMPT.contains("autonomous"));
    }

    #[test]
    fn test_system_prompt_mentions_living_spec() {
        // Check for reference to .living_spec
        assert!(SYSTEM_PROMPT.contains(".living_spec/README.md"));
    }

    #[test]
    fn test_system_prompt_has_workflow_steps() {
        // Check for workflow guidance
        assert!(SYSTEM_PROMPT.contains("Understand"));
        assert!(SYSTEM_PROMPT.contains("Explore"));
        assert!(SYSTEM_PROMPT.contains("Implement"));
        assert!(SYSTEM_PROMPT.contains("Verify"));
        assert!(SYSTEM_PROMPT.contains("Report"));
    }

    #[test]
    fn test_system_prompt_uses_past_tense_for_reporting() {
        // Check that it instructs to report in past tense
        assert!(SYSTEM_PROMPT.contains("past tense"));
    }

    #[test]
    fn test_system_prompt_format_is_valid() {
        // Basic format checks
        assert!(SYSTEM_PROMPT.starts_with("You are an AI Agent"));
        assert!(SYSTEM_PROMPT.contains("Remember:"));
    }

    #[test]
    fn test_system_prompt_has_keyword_guidance() {
        // Check for keyword-based decision guidance
        assert!(SYSTEM_PROMPT.contains("show"));
        assert!(SYSTEM_PROMPT.contains("create"));
        assert!(SYSTEM_PROMPT.contains("add"));
        assert!(SYSTEM_PROMPT.contains("implement"));
        assert!(SYSTEM_PROMPT.contains("fix"));
    }

    #[test]
    fn test_system_prompt_length_reasonable() {
        // Ensure prompt is substantial but not excessively long
        let line_count = SYSTEM_PROMPT.lines().count();
        assert!(line_count > 50);
        assert!(line_count < 300);
    }

    #[test]
    fn test_system_prompt_has_shell_command_examples() {
        // Check for shell command mentions
        assert!(SYSTEM_PROMPT.contains("cargo"));
        assert!(SYSTEM_PROMPT.contains("npm"));
        assert!(SYSTEM_PROMPT.contains("git"));
    }

    #[test]
    fn test_system_prompt_warns_against_announcements() {
        // Check that it discourages announcing actions
        assert!(SYSTEM_PROMPT.contains("Don't announce"));
        assert!(SYSTEM_PROMPT.contains("Be Direct"));
    }
}
@@ -5,7 +5,6 @@ use futures::StreamExt;
use reqwest::header::{CONTENT_TYPE, HeaderMap, HeaderValue};
use serde::{Deserialize, Serialize};
use serde_json::json;
use tauri::{AppHandle, Emitter};
use tokio::sync::watch::Receiver;

const ANTHROPIC_API_URL: &str = "https://api.anthropic.com/v1/messages";
@@ -70,7 +69,6 @@ impl AnthropicProvider {
        }
    }

    /// Convert our internal tool definitions to Anthropic format
    fn convert_tools(tools: &[ToolDefinition]) -> Vec<AnthropicTool> {
        tools
            .iter()
@@ -82,16 +80,12 @@ impl AnthropicProvider {
            .collect()
    }

    /// Convert our internal messages to Anthropic format
    fn convert_messages(messages: &[Message]) -> Vec<AnthropicMessage> {
        let mut anthropic_messages: Vec<AnthropicMessage> = Vec::new();

        for msg in messages {
            match msg.role {
                Role::System => {
                    // Anthropic doesn't support system messages in the messages array
                    // They should be passed separately in the 'system' parameter
                    // For now, we'll skip them or convert to user messages
                    continue;
                }
                Role::User => {
@@ -102,17 +96,14 @@ impl AnthropicProvider {
                }
                Role::Assistant => {
                    if let Some(tool_calls) = &msg.tool_calls {
                        // Assistant message with tool calls
                        let mut blocks = Vec::new();

                        // Add text content if present
                        if !msg.content.is_empty() {
                            blocks.push(AnthropicContentBlock::Text {
                                text: msg.content.clone(),
                            });
                        }

                        // Add tool use blocks
                        for call in tool_calls {
                            let input: serde_json::Value =
                                serde_json::from_str(&call.function.arguments).unwrap_or(json!({}));
@@ -132,7 +123,6 @@ impl AnthropicProvider {
                            content: AnthropicContent::Blocks(blocks),
                        });
                    } else {
                        // Regular assistant message
                        anthropic_messages.push(AnthropicMessage {
                            role: "assistant".to_string(),
                            content: AnthropicContent::Text(msg.content.clone()),
@@ -140,7 +130,6 @@ impl AnthropicProvider {
                    }
                }
                Role::Tool => {
                    // Tool result - needs to be sent as a user message with tool_result content
                    let tool_use_id = msg.tool_call_id.clone().unwrap_or_default();
                    anthropic_messages.push(AnthropicMessage {
                        role: "user".to_string(),
@@ -158,7 +147,6 @@ impl AnthropicProvider {
        anthropic_messages
    }

    /// Extract system prompt from messages
    fn extract_system_prompt(messages: &[Message]) -> String {
        messages
            .iter()
@@ -168,20 +156,21 @@ impl AnthropicProvider {
            .join("\n\n")
    }

    pub async fn chat_stream(
    pub async fn chat_stream<F>(
        &self,
        app: &AppHandle,
        model: &str,
        messages: &[Message],
        tools: &[ToolDefinition],
        cancel_rx: &mut Receiver<bool>,
    ) -> Result<CompletionResponse, String> {
        // Convert messages and tools
        mut on_token: F,
    ) -> Result<CompletionResponse, String>
    where
        F: FnMut(&str),
    {
        let anthropic_messages = Self::convert_messages(messages);
        let anthropic_tools = Self::convert_tools(tools);
        let system_prompt = Self::extract_system_prompt(messages);

        // Build request
        let mut request_body = json!({
            "model": model,
            "max_tokens": 4096,
@@ -197,7 +186,6 @@ impl AnthropicProvider {
            request_body["tools"] = json!(anthropic_tools);
        }

        // Build headers
        let mut headers = HeaderMap::new();
        headers.insert(CONTENT_TYPE, HeaderValue::from_static("application/json"));
        headers.insert(
@@ -209,7 +197,6 @@ impl AnthropicProvider {
            HeaderValue::from_static(ANTHROPIC_VERSION),
        );

        // Make streaming request
        let response = self
            .client
            .post(ANTHROPIC_API_URL)
@@ -217,7 +204,7 @@ impl AnthropicProvider {
            .json(&request_body)
            .send()
            .await
            .map_err(|e| format!("Failed to send request to Anthropic: {}", e))?;
            .map_err(|e| format!("Failed to send request to Anthropic: {e}"))?;

        if !response.status().is_success() {
            let status = response.status();
@@ -225,14 +212,13 @@ impl AnthropicProvider {
                .text()
                .await
                .unwrap_or_else(|_| "Unknown error".to_string());
            return Err(format!("Anthropic API error {}: {}", status, error_text));
            return Err(format!("Anthropic API error {status}: {error_text}"));
        }

        // Process streaming response
        let mut stream = response.bytes_stream();
        let mut accumulated_text = String::new();
        let mut tool_calls: Vec<ToolCall> = Vec::new();
        let mut current_tool_use: Option<(String, String, String)> = None; // (id, name, input_json)
        let mut current_tool_use: Option<(String, String, String)> = None;

        loop {
            let chunk = tokio::select! {
@@ -250,14 +236,11 @@ impl AnthropicProvider {
                }
            };

            let bytes = chunk.map_err(|e| format!("Stream error: {}", e))?;
            let bytes = chunk.map_err(|e| format!("Stream error: {e}"))?;
            let text = String::from_utf8_lossy(&bytes);

            // Parse SSE events
            for line in text.lines() {
                if let Some(json_str) = line.strip_prefix("data: ") {
                    // Remove "data: " prefix

                    if json_str == "[DONE]" {
                        break;
                    }
@@ -269,7 +252,6 @@ impl AnthropicProvider {

                    match event.event_type.as_str() {
                        "content_block_start" => {
                            // Check if this is a tool use block
                            if let Some(content_block) = event.data.get("content_block")
                                && content_block.get("type") == Some(&json!("tool_use"))
                            {
@@ -280,16 +262,12 @@ impl AnthropicProvider {
                        }
                        "content_block_delta" => {
                            if let Some(delta) = event.data.get("delta") {
                                // Text delta
                                if delta.get("type") == Some(&json!("text_delta")) {
                                    if let Some(text) = delta.get("text").and_then(|t| t.as_str()) {
                                        accumulated_text.push_str(text);
                                        // Emit token to frontend
                                        let _ = app.emit("chat:token", text);
                                        on_token(text);
                                    }
                                }
                                // Tool input delta
                                else if delta.get("type") == Some(&json!("input_json_delta"))
                                } else if delta.get("type") == Some(&json!("input_json_delta"))
                                    && let Some((_, _, input_json)) = &mut current_tool_use
                                    && let Some(partial) =
                                        delta.get("partial_json").and_then(|p| p.as_str())
@@ -299,7 +277,6 @@ impl AnthropicProvider {
                                }
                            }
                        }
                        "content_block_stop" => {
                            // Finalize tool use if we have one
                            if let Some((id, name, input_json)) = current_tool_use.take() {
                                tool_calls.push(ToolCall {
                                    id: Some(id),
@@ -5,7 +5,6 @@ use async_trait::async_trait;
|
||||
use futures::StreamExt;
|
||||
use serde::{Deserialize, Serialize};
|
||||
use serde_json::Value;
|
||||
use tauri::{AppHandle, Emitter};
|
||||
|
||||
pub struct OllamaProvider {
|
||||
base_url: String,
|
||||
@@ -40,19 +39,21 @@ impl OllamaProvider {
|
||||
Ok(body.models.into_iter().map(|m| m.name).collect())
|
||||
}
|
||||
|
||||
/// Streaming chat that emits tokens via Tauri events
|
||||
pub async fn chat_stream(
|
||||
/// Streaming chat that calls `on_token` for each token chunk.
|
||||
pub async fn chat_stream<F>(
|
||||
&self,
|
||||
app: &AppHandle,
|
||||
model: &str,
|
||||
messages: &[Message],
|
||||
tools: &[ToolDefinition],
|
||||
cancel_rx: &mut tokio::sync::watch::Receiver<bool>,
|
||||
) -> Result<CompletionResponse, String> {
|
||||
mut on_token: F,
|
||||
) -> Result<CompletionResponse, String>
|
||||
where
|
||||
F: FnMut(&str) + Send,
|
||||
{
|
||||
let client = reqwest::Client::new();
|
||||
let url = format!("{}/api/chat", self.base_url.trim_end_matches('/'));
|
||||
|
||||
// Convert domain Messages to Ollama Messages
|
||||
let ollama_messages: Vec<OllamaRequestMessage> = messages
|
||||
.iter()
|
||||
.map(|m| {
|
||||
@@ -86,7 +87,7 @@ impl OllamaProvider {
|
||||
let request_body = OllamaRequest {
|
||||
model,
|
||||
messages: ollama_messages,
|
||||
stream: true, // Enable streaming
|
||||
stream: true,
|
||||
tools,
|
||||
};
|
||||
|
||||
@@ -103,14 +104,12 @@ impl OllamaProvider {
|
||||
return Err(format!("Ollama API error {}: {}", status, text));
|
||||
}
|
||||
|
||||
// Process streaming response
|
||||
let mut stream = res.bytes_stream();
|
||||
let mut buffer = String::new();
|
||||
let mut accumulated_content = String::new();
|
||||
let mut final_tool_calls: Option<Vec<ToolCall>> = None;
|
||||
|
||||
loop {
|
||||
// Check for cancellation
|
||||
if *cancel_rx.borrow() {
|
||||
return Err("Chat cancelled by user".to_string());
|
||||
}
|
||||
@@ -123,7 +122,6 @@ impl OllamaProvider {
|
||||
}
|
||||
}
|
||||
_ = cancel_rx.changed() => {
|
||||
// changed() fires on any change, check if it's actually true
|
||||
if *cancel_rx.borrow() {
|
||||
return Err("Chat cancelled by user".to_string());
|
||||
} else {
|
||||
@@ -135,7 +133,6 @@ impl OllamaProvider {
|
||||
let chunk = chunk_result.map_err(|e| format!("Stream error: {}", e))?;
|
||||
buffer.push_str(&String::from_utf8_lossy(&chunk));
|
||||
|
||||
// Process complete lines (newline-delimited JSON)
|
||||
while let Some(newline_pos) = buffer.find('\n') {
|
||||
let line = buffer[..newline_pos].trim().to_string();
|
||||
buffer = buffer[newline_pos + 1..].to_string();
|
||||
@@ -144,20 +141,14 @@ impl OllamaProvider {
|
||||
continue;
|
||||
}
|
||||
|
||||
// Parse the streaming response
|
||||
let stream_msg: OllamaStreamResponse =
|
||||
serde_json::from_str(&line).map_err(|e| format!("JSON parse error: {}", e))?;
|
||||
|
||||
// Emit token if there's content
|
||||
if !stream_msg.message.content.is_empty() {
|
||||
accumulated_content.push_str(&stream_msg.message.content);
|
||||
|
||||
// Emit chat:token event
|
||||
app.emit("chat:token", &stream_msg.message.content)
|
||||
.map_err(|e| e.to_string())?;
|
||||
on_token(&stream_msg.message.content);
|
||||
}
|
||||
|
||||
// Check for tool calls
|
||||
if let Some(tool_calls) = stream_msg.message.tool_calls {
|
||||
final_tool_calls = Some(
|
||||
tool_calls
|
||||
@@ -174,7 +165,6 @@ impl OllamaProvider {
|
||||
);
|
||||
}
|
||||
|
||||
// If done, break
|
||||
if stream_msg.done {
|
||||
break;
|
||||
}
@@ -202,8 +192,6 @@ struct OllamaModelTag {
    name: String,
}

// --- Request Types ---

#[derive(Serialize)]
struct OllamaRequest<'a> {
    model: &'a str,
@@ -240,34 +228,6 @@ struct OllamaRequestFunctionCall {
    arguments: Value,
}

// --- Response Types ---

#[derive(Deserialize)]
#[allow(dead_code)]
struct OllamaResponse {
    message: OllamaResponseMessage,
}

#[derive(Deserialize)]
#[allow(dead_code)]
struct OllamaResponseMessage {
    content: String,
    tool_calls: Option<Vec<OllamaResponseToolCall>>,
}

#[derive(Deserialize)]
struct OllamaResponseToolCall {
    function: OllamaResponseFunctionCall,
}

#[derive(Deserialize)]
struct OllamaResponseFunctionCall {
    name: String,
    arguments: Value, // Ollama returns Object, we convert to String for internal storage
}

// --- Streaming Response Types ---

#[derive(Deserialize)]
struct OllamaStreamResponse {
    message: OllamaStreamMessage,
@@ -282,107 +242,25 @@ struct OllamaStreamMessage {
    tool_calls: Option<Vec<OllamaResponseToolCall>>,
}

#[derive(Deserialize)]
struct OllamaResponseToolCall {
    function: OllamaResponseFunctionCall,
}

#[derive(Deserialize)]
struct OllamaResponseFunctionCall {
    name: String,
    arguments: Value,
}

#[async_trait]
impl ModelProvider for OllamaProvider {
    async fn chat(
        &self,
        model: &str,
        messages: &[Message],
        tools: &[ToolDefinition],
        _model: &str,
        _messages: &[Message],
        _tools: &[ToolDefinition],
    ) -> Result<CompletionResponse, String> {
        let client = reqwest::Client::new();
        let url = format!("{}/api/chat", self.base_url.trim_end_matches('/'));

        // Convert domain Messages to Ollama Messages (handling String -> Object args mismatch)
        let ollama_messages: Vec<OllamaRequestMessage> = messages
            .iter()
            .map(|m| {
                let tool_calls = m.tool_calls.as_ref().map(|calls| {
                    calls
                        .iter()
                        .map(|tc| {
                            // Try to parse string args as JSON, fallback to string value if fails
                            let args_val: Value = serde_json::from_str(&tc.function.arguments)
                                .unwrap_or(Value::String(tc.function.arguments.clone()));

                            OllamaRequestToolCall {
                                kind: tc.kind.clone(),
                                function: OllamaRequestFunctionCall {
                                    name: tc.function.name.clone(),
                                    arguments: args_val,
                                },
                            }
                        })
                        .collect()
                });

                OllamaRequestMessage {
                    role: m.role.clone(),
                    content: m.content.clone(),
                    tool_calls,
                    tool_call_id: m.tool_call_id.clone(),
                }
            })
            .collect();

        let request_body = OllamaRequest {
            model,
            messages: ollama_messages,
            stream: false,
            tools,
        };

        // Debug: Log the request body
        if let Ok(json_str) = serde_json::to_string_pretty(&request_body) {
            eprintln!("=== Ollama Request ===\n{}\n===================", json_str);
        }

        let res = client
            .post(&url)
            .json(&request_body)
            .send()
            .await
            .map_err(|e| format!("Request failed: {}", e))?;

        if !res.status().is_success() {
            let status = res.status();
            let text = res.text().await.unwrap_or_default();
            eprintln!(
                "=== Ollama Error Response ===\n{}\n========================",
                text
            );
            return Err(format!("Ollama API error {}: {}", status, text));
        }

        let response_body: OllamaResponse = res
            .json()
            .await
            .map_err(|e| format!("Failed to parse response: {}", e))?;

        // Convert Response back to Domain types
        let content = if response_body.message.content.is_empty() {
            None
        } else {
            Some(response_body.message.content)
        };

        let tool_calls = response_body.message.tool_calls.map(|calls| {
            calls
                .into_iter()
                .map(|tc| ToolCall {
                    id: None, // Ollama doesn't typically send IDs
                    kind: "function".to_string(),
                    function: FunctionCall {
                        name: tc.function.name,
                        arguments: tc.function.arguments.to_string(), // Convert Object -> String
                    },
                })
                .collect()
        });

        Ok(CompletionResponse {
            content,
            tool_calls,
        })
        Err("Non-streaming Ollama chat not implemented for server".to_string())
    }
}
69
server/src/llm/types.rs
Normal file
@@ -0,0 +1,69 @@
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::fmt::Debug;

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum Role {
    System,
    User,
    Assistant,
    Tool,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Message {
    pub role: Role,
    pub content: String,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_calls: Option<Vec<ToolCall>>,

    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_call_id: Option<String>,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolCall {
    pub id: Option<String>,
    pub function: FunctionCall,
    #[serde(rename = "type")]
    pub kind: String,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct FunctionCall {
    pub name: String,
    pub arguments: String,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolDefinition {
    #[serde(rename = "type")]
    pub kind: String,
    pub function: ToolFunctionDefinition,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolFunctionDefinition {
    pub name: String,
    pub description: String,
    pub parameters: serde_json::Value,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct CompletionResponse {
    pub content: Option<String>,
    pub tool_calls: Option<Vec<ToolCall>>,
}

#[async_trait]
#[allow(dead_code)]
pub trait ModelProvider: Send + Sync {
    async fn chat(
        &self,
        model: &str,
        messages: &[Message],
        tools: &[ToolDefinition],
    ) -> Result<CompletionResponse, String>;
}
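As a concrete wire-format reference (field values illustrative), an assistant `Message` carrying one tool call serializes under these serde attributes roughly as follows; note that `kind` appears as `type`, `tool_call_id` is omitted when `None`, and `arguments` is itself a JSON-encoded string:

```json
{
  "role": "assistant",
  "content": "",
  "tool_calls": [
    {
      "id": null,
      "function": {
        "name": "read_file",
        "arguments": "{\"path\":\"src/main.rs\"}"
      },
      "type": "function"
    }
  ]
}
```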
396
server/src/main.rs
Normal file
@@ -0,0 +1,396 @@
mod commands;
mod llm;
mod state;
mod store;

use crate::commands::{chat, fs};
use crate::llm::types::Message;
use crate::state::SessionState;
use crate::store::JsonFileStore;
use futures::{SinkExt, StreamExt};
use poem::web::websocket::{Message as WsMessage, WebSocket};
use poem::{
    EndpointExt, Response, Route, Server, get, handler,
    http::{StatusCode, header},
    listener::TcpListener,
    web::{Data, Path},
};
use poem_openapi::{Object, OpenApi, OpenApiService, param::Query, payload::Json};
use rust_embed::RustEmbed;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::mpsc;

#[derive(Clone)]
struct AppContext {
    state: Arc<SessionState>,
    store: Arc<JsonFileStore>,
}

#[derive(RustEmbed)]
#[folder = "../frontend/dist"]
struct EmbeddedAssets;

type OpenApiResult<T> = poem::Result<T>;

fn bad_request(message: String) -> poem::Error {
    poem::Error::from_string(message, StatusCode::BAD_REQUEST)
}

#[handler]
fn health() -> &'static str {
    "ok"
}

fn serve_embedded(path: &str) -> Response {
    let normalized = if path.is_empty() {
        "index.html"
    } else {
        path.trim_start_matches('/')
    };
    let is_asset_request = normalized.starts_with("assets/");
    let asset = if is_asset_request {
        EmbeddedAssets::get(normalized)
    } else {
        EmbeddedAssets::get(normalized).or_else(|| {
            if normalized == "index.html" {
                None
            } else {
                EmbeddedAssets::get("index.html")
            }
        })
    };

    match asset {
        Some(content) => {
            let body = content.data.into_owned();
            let mime = mime_guess::from_path(normalized)
                .first_or_octet_stream()
                .to_string();

            Response::builder()
                .status(StatusCode::OK)
                .header(header::CONTENT_TYPE, mime)
                .body(body)
        }
        None => Response::builder()
            .status(StatusCode::NOT_FOUND)
            .body("Not Found"),
    }
}
#[handler]
fn embedded_asset(Path(path): Path<String>) -> Response {
    let asset_path = format!("assets/{path}");
    serve_embedded(&asset_path)
}

#[handler]
fn embedded_file(Path(path): Path<String>) -> Response {
    serve_embedded(&path)
}

#[handler]
fn embedded_index() -> Response {
    serve_embedded("index.html")
}

#[derive(Deserialize, Object)]
struct PathPayload {
    path: String,
}

#[derive(Deserialize, Object)]
struct ModelPayload {
    model: String,
}

#[derive(Deserialize, Object)]
struct ApiKeyPayload {
    api_key: String,
}

#[derive(Deserialize, Object)]
struct FilePathPayload {
    path: String,
}

#[derive(Deserialize, Object)]
struct WriteFilePayload {
    path: String,
    content: String,
}

#[derive(Deserialize, Object)]
struct SearchPayload {
    query: String,
}

#[derive(Deserialize, Object)]
struct ExecShellPayload {
    command: String,
    args: Vec<String>,
}

struct Api {
    ctx: Arc<AppContext>,
}

#[OpenApi]
impl Api {
    #[oai(path = "/project", method = "get")]
    async fn get_current_project(&self) -> OpenApiResult<Json<Option<String>>> {
        let ctx = self.ctx.clone();
        let result =
            fs::get_current_project(&ctx.state, ctx.store.as_ref()).map_err(bad_request)?;
        Ok(Json(result))
    }

    #[oai(path = "/project", method = "post")]
    async fn open_project(&self, payload: Json<PathPayload>) -> OpenApiResult<Json<String>> {
        let ctx = self.ctx.clone();
        let confirmed = fs::open_project(payload.0.path, &ctx.state, ctx.store.as_ref())
            .await
            .map_err(bad_request)?;
        Ok(Json(confirmed))
    }

    #[oai(path = "/project", method = "delete")]
    async fn close_project(&self) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        fs::close_project(&ctx.state, ctx.store.as_ref()).map_err(bad_request)?;
        Ok(Json(true))
    }

    #[oai(path = "/model", method = "get")]
    async fn get_model_preference(&self) -> OpenApiResult<Json<Option<String>>> {
        let ctx = self.ctx.clone();
        let result = fs::get_model_preference(ctx.store.as_ref()).map_err(bad_request)?;
        Ok(Json(result))
    }

    #[oai(path = "/model", method = "post")]
    async fn set_model_preference(&self, payload: Json<ModelPayload>) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        fs::set_model_preference(payload.0.model, ctx.store.as_ref()).map_err(bad_request)?;
        Ok(Json(true))
    }

    #[oai(path = "/ollama/models", method = "get")]
    async fn get_ollama_models(
        &self,
        base_url: Query<Option<String>>,
    ) -> OpenApiResult<Json<Vec<String>>> {
        let models = chat::get_ollama_models(base_url.0)
            .await
            .map_err(bad_request)?;
        Ok(Json(models))
    }

    #[oai(path = "/anthropic/key/exists", method = "get")]
    async fn get_anthropic_api_key_exists(&self) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        let exists = chat::get_anthropic_api_key_exists(ctx.store.as_ref()).map_err(bad_request)?;
        Ok(Json(exists))
    }

    #[oai(path = "/anthropic/key", method = "post")]
    async fn set_anthropic_api_key(
        &self,
        payload: Json<ApiKeyPayload>,
    ) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        chat::set_anthropic_api_key(ctx.store.as_ref(), payload.0.api_key).map_err(bad_request)?;
        Ok(Json(true))
    }

    #[oai(path = "/fs/read", method = "post")]
    async fn read_file(&self, payload: Json<FilePathPayload>) -> OpenApiResult<Json<String>> {
        let ctx = self.ctx.clone();
        let content = fs::read_file(payload.0.path, &ctx.state)
            .await
            .map_err(bad_request)?;
        Ok(Json(content))
    }

    #[oai(path = "/fs/write", method = "post")]
    async fn write_file(&self, payload: Json<WriteFilePayload>) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        fs::write_file(payload.0.path, payload.0.content, &ctx.state)
            .await
            .map_err(bad_request)?;
        Ok(Json(true))
    }

    #[oai(path = "/fs/list", method = "post")]
    async fn list_directory(
        &self,
        payload: Json<FilePathPayload>,
    ) -> OpenApiResult<Json<Vec<fs::FileEntry>>> {
        let ctx = self.ctx.clone();
        let entries = fs::list_directory(payload.0.path, &ctx.state)
            .await
            .map_err(bad_request)?;
        Ok(Json(entries))
    }

    #[oai(path = "/fs/search", method = "post")]
    async fn search_files(
        &self,
        payload: Json<SearchPayload>,
    ) -> OpenApiResult<Json<Vec<crate::commands::search::SearchResult>>> {
        let ctx = self.ctx.clone();
        let results = crate::commands::search::search_files(payload.0.query, &ctx.state)
            .await
            .map_err(bad_request)?;
        Ok(Json(results))
    }

    #[oai(path = "/shell/exec", method = "post")]
    async fn exec_shell(
        &self,
        payload: Json<ExecShellPayload>,
    ) -> OpenApiResult<Json<crate::commands::shell::CommandOutput>> {
        let ctx = self.ctx.clone();
        let output =
            crate::commands::shell::exec_shell(payload.0.command, payload.0.args, &ctx.state)
                .await
                .map_err(bad_request)?;
        Ok(Json(output))
    }

    #[oai(path = "/chat/cancel", method = "post")]
    async fn cancel_chat(&self) -> OpenApiResult<Json<bool>> {
        let ctx = self.ctx.clone();
        chat::cancel_chat(&ctx.state).map_err(bad_request)?;
        Ok(Json(true))
    }
}

#[derive(Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum WsRequest {
    Chat {
        messages: Vec<Message>,
        config: chat::ProviderConfig,
    },
    Cancel,
}

#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum WsResponse {
    Token { content: String },
    Update { messages: Vec<Message> },
    Error { message: String },
}
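For illustration, a WebSocket session under these tagged enums exchanges JSON messages like the following (one per line): the first is a client `chat` request whose `config` follows `chat::ProviderConfig` (provider, model, and URL values are made-up examples), the next two are server `token`/`update` responses, and the last is a client `cancel`:

```json
{"type": "chat", "messages": [{"role": "user", "content": "List the project files"}], "config": {"provider": "ollama", "model": "llama3.1", "base_url": null, "enable_tools": true}}
{"type": "token", "content": "Sure"}
{"type": "update", "messages": [{"role": "user", "content": "List the project files"}]}
{"type": "cancel"}
```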
#[handler]
async fn ws_handler(ws: WebSocket, ctx: Data<&AppContext>) -> impl poem::IntoResponse {
    let ctx = ctx.0.clone();
    ws.on_upgrade(move |socket| async move {
        let (mut sink, mut stream) = socket.split();
        let (tx, mut rx) = mpsc::unbounded_channel::<WsResponse>();

        let forward = tokio::spawn(async move {
            while let Some(msg) = rx.recv().await {
                if let Ok(text) = serde_json::to_string(&msg)
                    && sink.send(WsMessage::Text(text)).await.is_err()
                {
                    break;
                }
            }
        });

        while let Some(Ok(msg)) = stream.next().await {
            if let WsMessage::Text(text) = msg {
                let parsed: Result<WsRequest, _> = serde_json::from_str(&text);
                match parsed {
                    Ok(WsRequest::Chat { messages, config }) => {
                        let tx_updates = tx.clone();
                        let tx_tokens = tx.clone();
                        let ctx_clone = ctx.clone();

                        let result = chat::chat(
                            messages,
                            config,
                            &ctx_clone.state,
                            ctx_clone.store.as_ref(),
                            |history| {
                                let _ = tx_updates.send(WsResponse::Update {
                                    messages: history.to_vec(),
                                });
                            },
                            |token| {
                                let _ = tx_tokens.send(WsResponse::Token {
                                    content: token.to_string(),
                                });
                            },
                        )
                        .await;

                        if let Err(err) = result {
                            let _ = tx.send(WsResponse::Error { message: err });
                        }
                    }
                    Ok(WsRequest::Cancel) => {
                        let _ = chat::cancel_chat(&ctx.state);
                    }
                    Err(err) => {
                        let _ = tx.send(WsResponse::Error {
                            message: format!("Invalid request: {err}"),
                        });
                    }
                }
            }
        }

        drop(tx);
        let _ = forward.await;
    })
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let app_state = Arc::new(SessionState::default());
    let store = Arc::new(
        JsonFileStore::from_path(PathBuf::from("store.json")).map_err(std::io::Error::other)?,
    );

    let ctx = AppContext {
        state: app_state,
        store,
    };
    let ctx_arc = Arc::new(ctx.clone());

    let api_service = OpenApiService::new(
        Api {
            ctx: ctx_arc.clone(),
        },
        "Living Spec API",
        "1.0",
    )
    .server("http://127.0.0.1:3001/api");
    let docs_service = OpenApiService::new(
        Api {
            ctx: ctx_arc.clone(),
        },
        "Living Spec API",
        "1.0",
    )
    .server("http://127.0.0.1:3001/api");

    let app = Route::new()
        .nest("/api", api_service)
        .nest("/docs", docs_service.swagger_ui())
        .at("/ws", get(ws_handler))
        .at("/health", get(health))
        .at("/assets/*path", get(embedded_asset))
        .at("/", get(embedded_index))
        .at("/*path", get(embedded_file))
        .data(ctx);

    Server::new(TcpListener::bind("127.0.0.1:3001"))
        .run(app)
        .await
}
30
server/src/state.rs
Normal file
@@ -0,0 +1,30 @@
use std::path::PathBuf;
use std::sync::Mutex;
use tokio::sync::watch;

pub struct SessionState {
    pub project_root: Mutex<Option<PathBuf>>,
    pub cancel_tx: watch::Sender<bool>,
    pub cancel_rx: watch::Receiver<bool>,
}

impl Default for SessionState {
    fn default() -> Self {
        let (cancel_tx, cancel_rx) = watch::channel(false);
        Self {
            project_root: Mutex::new(None),
            cancel_tx,
            cancel_rx,
        }
    }
}

impl SessionState {
    pub fn get_project_root(&self) -> Result<PathBuf, String> {
        let root_guard = self.project_root.lock().map_err(|e| e.to_string())?;
        let root = root_guard
            .as_ref()
            .ok_or_else(|| "No project is currently open.".to_string())?;
        Ok(root.clone())
    }
}
82
server/src/store.rs
Normal file
@@ -0,0 +1,82 @@
use serde_json::Value;
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::Mutex;

pub trait StoreOps: Send + Sync {
    fn get(&self, key: &str) -> Option<Value>;
    fn set(&self, key: &str, value: Value);
    fn delete(&self, key: &str);
    fn save(&self) -> Result<(), String>;
}

pub struct JsonFileStore {
    path: PathBuf,
    data: Mutex<HashMap<String, Value>>,
}

impl JsonFileStore {
    pub fn new(path: PathBuf) -> Result<Self, String> {
        let data = if path.exists() {
            let content =
                fs::read_to_string(&path).map_err(|e| format!("Failed to read store: {e}"))?;
            if content.trim().is_empty() {
                HashMap::new()
            } else {
                serde_json::from_str::<HashMap<String, Value>>(&content)
                    .map_err(|e| format!("Failed to parse store: {e}"))?
            }
        } else {
            HashMap::new()
        };

        Ok(Self {
            path,
            data: Mutex::new(data),
        })
    }

    pub fn from_path<P: AsRef<Path>>(path: P) -> Result<Self, String> {
        Self::new(path.as_ref().to_path_buf())
    }

    #[allow(dead_code)]
    pub fn path(&self) -> &Path {
        &self.path
    }

    fn ensure_parent_dir(&self) -> Result<(), String> {
        if let Some(parent) = self.path.parent() {
            fs::create_dir_all(parent)
                .map_err(|e| format!("Failed to create store directory: {e}"))?;
        }
        Ok(())
    }
}

impl StoreOps for JsonFileStore {
    fn get(&self, key: &str) -> Option<Value> {
        self.data.lock().ok().and_then(|map| map.get(key).cloned())
    }

    fn set(&self, key: &str, value: Value) {
        if let Ok(mut map) = self.data.lock() {
            map.insert(key.to_string(), value);
        }
    }

    fn delete(&self, key: &str) {
        if let Ok(mut map) = self.data.lock() {
            map.remove(key);
        }
    }

    fn save(&self) -> Result<(), String> {
        self.ensure_parent_dir()?;
        let map = self.data.lock().map_err(|e| e.to_string())?;
        let content =
            serde_json::to_string_pretty(&*map).map_err(|e| format!("Serialize failed: {e}"))?;
        fs::write(&self.path, content).map_err(|e| format!("Failed to write store: {e}"))
    }
}
@@ -1,31 +0,0 @@
# Nextest configuration for living-spec-standalone
# See https://nexte.st/book/configuration.html for more details

[profile.default]
# Show output for failing tests
failure-output = "immediate"
# Show output for passing tests as well
success-output = "never"
# Cancel test run on the first failure
fail-fast = false
# Number of retries for failing tests
retries = 0

[profile.ci]
# CI-specific profile
failure-output = "immediate-final"
success-output = "never"
fail-fast = false
# Retry flaky tests once in CI
retries = 1

[profile.coverage]
# Profile specifically for code coverage runs
failure-output = "immediate-final"
success-output = "never"
fail-fast = false
retries = 0

# Test groups configuration
[test-groups.integration]
max-threads = 1
7
src-tauri/.gitignore
vendored
@@ -1,7 +0,0 @@
# Generated by Cargo
# will have compiled files and executables
/target/

# Generated by Tauri
# will have schema files for capabilities auto-completion
/gen/schemas
5859
src-tauri/Cargo.lock
generated
@@ -1,38 +0,0 @@
[package]
name = "living-spec-standalone"
version = "0.1.0"
description = "A Tauri App"
authors = ["you"]
edition = "2024"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[lib]
# The `_lib` suffix may seem redundant but it is necessary
# to make the lib name unique and wouldn't conflict with the bin name.
# This seems to be only an issue on Windows, see https://github.com/rust-lang/cargo/issues/8519
name = "living_spec_standalone_lib"
crate-type = ["staticlib", "cdylib", "rlib"]

[build-dependencies]
tauri-build = { version = "2", features = [] }

[dependencies]
tauri = { version = "2", features = [] }
tauri-plugin-opener = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tauri-plugin-dialog = "2.6.0"
ignore = "0.4.25"
walkdir = "2.5.0"
reqwest = { version = "0.13.1", features = ["json", "blocking", "stream"] }
futures = "0.3"
uuid = { version = "1.20.0", features = ["v4", "serde"] }
chrono = { version = "0.4.43", features = ["serde"] }
async-trait = "0.1.89"
tauri-plugin-store = "2.4.2"
tokio = { version = "1", features = ["sync"] }
eventsource-stream = "0.2.3"

[dev-dependencies]
tempfile = "3"
@@ -1,3 +0,0 @@
fn main() {
    tauri_build::build()
}
@@ -1,12 +0,0 @@
{
  "$schema": "../gen/schemas/desktop-schema.json",
  "identifier": "default",
  "description": "Capability for the main window",
  "windows": ["main"],
  "permissions": [
    "core:default",
    "opener:default",
    "dialog:default",
    "store:default"
  ]
}
(15 deleted icon/image assets, ranging from 903 B to 85 KiB)
@@ -1,857 +0,0 @@
use crate::commands::fs::{StoreOps, TauriStoreWrapper};
use crate::llm::prompts::SYSTEM_PROMPT;
use crate::llm::types::{Message, Role, ToolCall, ToolDefinition, ToolFunctionDefinition};
use crate::state::SessionState;
use serde::Deserialize;
use serde_json::json;
use tauri::{AppHandle, State};

// -----------------------------------------------------------------------------
// Constants
// -----------------------------------------------------------------------------

const MAX_TURNS: usize = 30;
const STORE_PATH: &str = "store.json";
const KEY_ANTHROPIC_API_KEY: &str = "anthropic_api_key";

// -----------------------------------------------------------------------------
// Types
// -----------------------------------------------------------------------------

#[derive(Deserialize, Clone)]
pub struct ProviderConfig {
    pub provider: String,
    pub model: String,
    pub base_url: Option<String>,
    pub enable_tools: Option<bool>,
}

// -----------------------------------------------------------------------------
// Pure Implementation Functions
// -----------------------------------------------------------------------------

fn get_anthropic_api_key_exists_impl(store: &dyn StoreOps) -> bool {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(key) = value.as_str() {
                !key.is_empty()
            } else {
                false
            }
        }
        None => false,
    }
}

fn set_anthropic_api_key_impl(store: &dyn StoreOps, api_key: &str) -> Result<(), String> {
    store.set(KEY_ANTHROPIC_API_KEY, json!(api_key));
    store.save()?;

    // Verify it was saved
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(retrieved) = value.as_str() {
                if retrieved != api_key {
                    return Err("Retrieved key does not match saved key".to_string());
                }
            } else {
                return Err("Stored value is not a string".to_string());
            }
        }
        None => {
            return Err("API key was saved but cannot be retrieved".to_string());
        }
    }

    Ok(())
}

fn get_anthropic_api_key_impl(store: &dyn StoreOps) -> Result<String, String> {
    match store.get(KEY_ANTHROPIC_API_KEY) {
        Some(value) => {
            if let Some(key) = value.as_str() {
                if key.is_empty() {
                    Err("Anthropic API key is empty. Please set your API key.".to_string())
                } else {
                    Ok(key.to_string())
                }
            } else {
                Err("Stored API key is not a string".to_string())
            }
        }
        None => Err("Anthropic API key not found. Please set your API key.".to_string()),
    }
}
fn parse_tool_arguments(args_str: &str) -> Result<serde_json::Value, String> {
    serde_json::from_str(args_str).map_err(|e| format!("Error parsing arguments: {e}"))
}

pub fn get_tool_definitions() -> Vec<ToolDefinition> {
    vec![
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "read_file".to_string(),
                description: "Reads the complete content of a file from the project. Use this to understand existing code before making changes.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to the file from project root" }
                    },
                    "required": ["path"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "write_file".to_string(),
                description: "Creates or completely overwrites a file with new content. YOU MUST USE THIS to implement code changes - do not suggest code to the user. The content parameter must contain the COMPLETE file including all imports, functions, and unchanged code.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to the file from project root" },
                        "content": { "type": "string", "description": "The complete file content to write (not a diff or partial code)" }
                    },
                    "required": ["path", "content"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "list_directory".to_string(),
                description: "Lists all files and directories at a given path. Use this to explore the project structure.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "path": { "type": "string", "description": "Relative path to list (use '.' for project root)" }
                    },
                    "required": ["path"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "search_files".to_string(),
                description: "Searches for text patterns across all files in the project. Use this to find functions, variables, or code patterns when you don't know which file they're in."
                    .to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "query": { "type": "string", "description": "The text pattern to search for across all files" }
                    },
                    "required": ["query"]
                }),
            },
        },
        ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "exec_shell".to_string(),
                description: "Executes a shell command in the project root directory. Use this to run tests, build commands, git operations, or any command-line tool. Examples: cargo check, npm test, git status.".to_string(),
                parameters: json!({
                    "type": "object",
                    "properties": {
                        "command": {
                            "type": "string",
                            "description": "The command binary to execute (e.g., 'git', 'cargo', 'npm', 'ls')"
                        },
                        "args": {
                            "type": "array",
                            "items": { "type": "string" },
                            "description": "Array of arguments to pass to the command (e.g., ['status'] for git status)"
                        }
                    },
                    "required": ["command", "args"]
                }),
            },
        },
    ]
}
|
||||
|
||||
// -----------------------------------------------------------------------------
// Tauri Commands (Thin Wrappers)
// -----------------------------------------------------------------------------

#[tauri::command]
pub async fn get_ollama_models(base_url: Option<String>) -> Result<Vec<String>, String> {
    use crate::llm::providers::ollama::OllamaProvider;
    let url = base_url.unwrap_or_else(|| "http://localhost:11434".to_string());
    OllamaProvider::get_models(&url).await
}

#[tauri::command]
pub async fn get_anthropic_api_key_exists(app: AppHandle) -> Result<bool, String> {
    use tauri_plugin_store::StoreExt;

    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {e}"))?;

    let wrapper = TauriStoreWrapper { store: &store };
    Ok(get_anthropic_api_key_exists_impl(&wrapper))
}

#[tauri::command]
pub async fn set_anthropic_api_key(app: AppHandle, api_key: String) -> Result<(), String> {
    use tauri_plugin_store::StoreExt;

    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {e}"))?;

    let wrapper = TauriStoreWrapper { store: &store };
    set_anthropic_api_key_impl(&wrapper, &api_key)
}

#[tauri::command]
pub async fn chat(
    app: AppHandle,
    messages: Vec<Message>,
    config: ProviderConfig,
    state: State<'_, SessionState>,
) -> Result<Vec<Message>, String> {
    use crate::llm::providers::anthropic::AnthropicProvider;
    use crate::llm::providers::ollama::OllamaProvider;
    use tauri::Emitter;
    use tauri_plugin_store::StoreExt;

    // Reset cancel flag at start of new request
    let _ = state.cancel_tx.send(false);

    // Get a clone of the cancellation receiver
    let mut cancel_rx = state.cancel_rx.clone();

    // Mark the receiver as having seen the current (false) value
    cancel_rx.borrow_and_update();

    // Setup Provider
    let base_url = config
        .base_url
        .clone()
        .unwrap_or_else(|| "http://localhost:11434".to_string());

    // Determine provider from model name
    let is_claude = config.model.starts_with("claude-");

    if !is_claude && config.provider.as_str() != "ollama" {
        return Err(format!("Unsupported provider: {}", config.provider));
    }

    // Define Tools
    let tool_defs = get_tool_definitions();
    let tools = if config.enable_tools.unwrap_or(true) {
        tool_defs.as_slice()
    } else {
        &[]
    };

    // Agent Loop
    let mut current_history = messages.clone();

    // Inject System Prompt
    current_history.insert(
        0,
        Message {
            role: Role::System,
            content: SYSTEM_PROMPT.to_string(),
            tool_calls: None,
            tool_call_id: None,
        },
    );

    // Inject reminder as a second system message
    current_history.insert(
        1,
        Message {
            role: Role::System,
            content: "REMINDER: Distinguish between showing examples (use code blocks in chat) vs implementing changes (use write_file tool). Keywords like 'show me', 'example', 'how does' = chat response. Keywords like 'create', 'add', 'implement', 'fix' = use tools.".to_string(),
            tool_calls: None,
            tool_call_id: None,
        },
    );

    let mut new_messages: Vec<Message> = Vec::new();
    let mut turn_count = 0;

    loop {
        // Check for cancellation at start of loop
        if *cancel_rx.borrow() {
            return Err("Chat cancelled by user".to_string());
        }

        if turn_count >= MAX_TURNS {
            return Err("Max conversation turns reached.".to_string());
        }
        turn_count += 1;

        // Call LLM with streaming
        let response = if is_claude {
            // Use Anthropic provider
            let store = app
                .store(STORE_PATH)
                .map_err(|e| format!("Failed to access store: {e}"))?;

            let wrapper = TauriStoreWrapper { store: &store };
            let api_key = get_anthropic_api_key_impl(&wrapper)?;
            let anthropic_provider = AnthropicProvider::new(api_key);
            anthropic_provider
                .chat_stream(&app, &config.model, &current_history, tools, &mut cancel_rx)
                .await
                .map_err(|e| format!("Anthropic Error: {e}"))?
        } else {
            // Use Ollama provider
            let ollama_provider = OllamaProvider::new(base_url.clone());
            ollama_provider
                .chat_stream(&app, &config.model, &current_history, tools, &mut cancel_rx)
                .await
                .map_err(|e| format!("Ollama Error: {e}"))?
        };

        // Process Response
        if let Some(tool_calls) = response.tool_calls {
            // The Assistant wants to run tools
            let assistant_msg = Message {
                role: Role::Assistant,
                content: response.content.unwrap_or_default(),
                tool_calls: Some(tool_calls.clone()),
                tool_call_id: None,
            };

            current_history.push(assistant_msg.clone());
            new_messages.push(assistant_msg);
            // Emit history excluding system prompts (indices 0 and 1)
            app.emit("chat:update", &current_history[2..])
                .map_err(|e| e.to_string())?;

            // Execute Tools
            for call in tool_calls {
                // Check for cancellation before executing each tool
                if *cancel_rx.borrow() {
                    return Err("Chat cancelled before tool execution".to_string());
                }

                let output = execute_tool(&call, &state).await;

                let tool_msg = Message {
                    role: Role::Tool,
                    content: output,
                    tool_calls: None,
                    tool_call_id: call.id,
                };

                current_history.push(tool_msg.clone());
                new_messages.push(tool_msg);
                // Emit history excluding system prompts (indices 0 and 1)
                app.emit("chat:update", &current_history[2..])
                    .map_err(|e| e.to_string())?;
            }
        } else {
            // Final text response
            let assistant_msg = Message {
                role: Role::Assistant,
                content: response.content.unwrap_or_default(),
                tool_calls: None,
                tool_call_id: None,
            };

            new_messages.push(assistant_msg.clone());
            current_history.push(assistant_msg);
            // Emit history excluding system prompts (indices 0 and 1)
            app.emit("chat:update", &current_history[2..])
                .map_err(|e| e.to_string())?;
            break;
        }
    }

    Ok(new_messages)
}

async fn execute_tool(call: &ToolCall, state: &State<'_, SessionState>) -> String {
    use crate::commands::{fs, search, shell};

    let name = call.function.name.as_str();
    // Parse arguments. They come as a JSON string from the LLM abstraction.
    let args: serde_json::Value = match parse_tool_arguments(&call.function.arguments) {
        Ok(v) => v,
        Err(e) => return e,
    };

    match name {
        "read_file" => {
            let path = args["path"].as_str().unwrap_or("").to_string();
            match fs::read_file(path, state.clone()).await {
                Ok(content) => content,
                Err(e) => format!("Error: {e}"),
            }
        }
        "write_file" => {
            let path = args["path"].as_str().unwrap_or("").to_string();
            let content = args["content"].as_str().unwrap_or("").to_string();
            match fs::write_file(path, content, state.clone()).await {
                Ok(()) => "File written successfully.".to_string(),
                Err(e) => format!("Error: {e}"),
            }
        }
        "list_directory" => {
            let path = args["path"].as_str().unwrap_or("").to_string();
            match fs::list_directory(path, state.clone()).await {
                Ok(entries) => serde_json::to_string(&entries).unwrap_or_default(),
                Err(e) => format!("Error: {e}"),
            }
        }
        "search_files" => {
            let query = args["query"].as_str().unwrap_or("").to_string();
            match search::search_files(query, state.clone()).await {
                Ok(results) => serde_json::to_string(&results).unwrap_or_default(),
                Err(e) => format!("Error: {e}"),
            }
        }
        "exec_shell" => {
            let command = args["command"].as_str().unwrap_or("").to_string();
            let args_vec: Vec<String> = args["args"]
                .as_array()
                .map(|arr| {
                    arr.iter()
                        .map(|v| v.as_str().unwrap_or("").to_string())
                        .collect()
                })
                .unwrap_or_default();

            match shell::exec_shell(command, args_vec, state.clone()).await {
                Ok(output) => serde_json::to_string(&output).unwrap_or_default(),
                Err(e) => format!("Error: {e}"),
            }
        }
        _ => format!("Unknown tool: {name}"),
    }
}

#[tauri::command]
pub async fn cancel_chat(state: State<'_, SessionState>) -> Result<(), String> {
    state.cancel_tx.send(true).map_err(|e| e.to_string())?;
    Ok(())
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use crate::test_utils::MockStore;
    use std::collections::HashMap;

    // Tests for get_anthropic_api_key_exists_impl
    mod get_anthropic_api_key_exists_tests {
        use super::*;

        #[test]
        fn test_key_exists_and_not_empty() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!("sk-ant-test123"));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_exists_impl(&store);

            assert!(result);
        }

        #[test]
        fn test_key_exists_but_empty() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!(""));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_exists_impl(&store);

            assert!(!result);
        }

        #[test]
        fn test_key_not_exists() {
            let store = MockStore::new();

            let result = get_anthropic_api_key_exists_impl(&store);

            assert!(!result);
        }

        #[test]
        fn test_key_exists_but_not_string() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!(123));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_exists_impl(&store);

            assert!(!result);
        }
    }

    // Tests for set_anthropic_api_key_impl
    mod set_anthropic_api_key_tests {
        use super::*;

        #[test]
        fn test_set_new_key() {
            let store = MockStore::new();
            let api_key = "sk-ant-new-key".to_string();

            let result = set_anthropic_api_key_impl(&store, &api_key);

            assert!(result.is_ok());
            assert_eq!(
                store.get(KEY_ANTHROPIC_API_KEY),
                Some(json!("sk-ant-new-key"))
            );
        }

        #[test]
        fn test_set_overwrites_existing_key() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!("old-key"));
            let store = MockStore::with_data(data);
            let api_key = "sk-ant-new-key".to_string();

            let result = set_anthropic_api_key_impl(&store, &api_key);

            assert!(result.is_ok());
            assert_eq!(
                store.get(KEY_ANTHROPIC_API_KEY),
                Some(json!("sk-ant-new-key"))
            );
        }

        #[test]
        fn test_set_empty_string() {
            let store = MockStore::new();
            let api_key = "".to_string();

            let result = set_anthropic_api_key_impl(&store, &api_key);

            assert!(result.is_ok());
            assert_eq!(store.get(KEY_ANTHROPIC_API_KEY), Some(json!("")));
        }
    }

    // Tests for get_anthropic_api_key_impl
    mod get_anthropic_api_key_tests {
        use super::*;

        #[test]
        fn test_get_existing_key() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!("sk-ant-test-key"));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_impl(&store);

            assert_eq!(result, Ok("sk-ant-test-key".to_string()));
        }

        #[test]
        fn test_get_key_not_found() {
            let store = MockStore::new();

            let result = get_anthropic_api_key_impl(&store);

            assert_eq!(
                result,
                Err("Anthropic API key not found. Please set your API key.".to_string())
            );
        }

        #[test]
        fn test_get_empty_key() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!(""));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_impl(&store);

            assert_eq!(
                result,
                Err("Anthropic API key is empty. Please set your API key.".to_string())
            );
        }

        #[test]
        fn test_get_key_not_string() {
            let mut data = HashMap::new();
            data.insert(KEY_ANTHROPIC_API_KEY.to_string(), json!(12345));
            let store = MockStore::with_data(data);

            let result = get_anthropic_api_key_impl(&store);

            assert_eq!(result, Err("Stored API key is not a string".to_string()));
        }
    }

    // Tests for parse_tool_arguments
    mod parse_tool_arguments_tests {
        use super::*;

        #[test]
        fn test_parse_valid_json() {
            let args_str = r#"{"path": "test.txt"}"#;

            let result = parse_tool_arguments(args_str);

            assert!(result.is_ok());
            let value = result.unwrap();
            assert_eq!(value["path"], "test.txt");
        }

        #[test]
        fn test_parse_complex_json() {
            let args_str = r#"{"command": "git", "args": ["status", "--short"]}"#;

            let result = parse_tool_arguments(args_str);

            assert!(result.is_ok());
            let value = result.unwrap();
            assert_eq!(value["command"], "git");
            assert_eq!(value["args"][0], "status");
            assert_eq!(value["args"][1], "--short");
        }

        #[test]
        fn test_parse_invalid_json() {
            let args_str = r#"{"path": "test.txt"#; // Missing closing brace

            let result = parse_tool_arguments(args_str);

            assert!(result.is_err());
            assert!(result.unwrap_err().contains("Error parsing arguments"));
        }

        #[test]
        fn test_parse_empty_json() {
            let args_str = "{}";

            let result = parse_tool_arguments(args_str);

            assert!(result.is_ok());
            let value = result.unwrap();
            assert_eq!(value, json!({}));
        }

        #[test]
        fn test_parse_empty_string() {
            let args_str = "";

            let result = parse_tool_arguments(args_str);

            assert!(result.is_err());
        }
    }

    // Tests for get_tool_definitions
    mod get_tool_definitions_tests {
        use super::*;

        #[test]
        fn test_returns_all_tools() {
            let tools = get_tool_definitions();

            assert_eq!(tools.len(), 5);
        }

        #[test]
        fn test_read_file_tool_exists() {
            let tools = get_tool_definitions();

            let read_file = tools
                .iter()
                .find(|t| t.function.name == "read_file")
                .expect("read_file tool should exist");

            assert_eq!(read_file.kind, "function");
            assert!(
                read_file
                    .function
                    .description
                    .contains("Reads the complete content")
            );
        }

        #[test]
        fn test_write_file_tool_exists() {
            let tools = get_tool_definitions();

            let write_file = tools
                .iter()
                .find(|t| t.function.name == "write_file")
                .expect("write_file tool should exist");

            assert_eq!(write_file.kind, "function");
            assert!(
                write_file
                    .function
                    .description
                    .contains("Creates or completely overwrites")
            );
        }

        #[test]
        fn test_list_directory_tool_exists() {
            let tools = get_tool_definitions();

            let list_directory = tools
                .iter()
                .find(|t| t.function.name == "list_directory")
                .expect("list_directory tool should exist");

            assert_eq!(list_directory.kind, "function");
            assert!(
                list_directory
                    .function
                    .description
                    .contains("Lists all files and directories")
            );
        }

        #[test]
        fn test_search_files_tool_exists() {
            let tools = get_tool_definitions();

            let search_files = tools
                .iter()
                .find(|t| t.function.name == "search_files")
                .expect("search_files tool should exist");

            assert_eq!(search_files.kind, "function");
            assert!(
                search_files
                    .function
                    .description
                    .contains("Searches for text patterns")
            );
        }

        #[test]
        fn test_exec_shell_tool_exists() {
            let tools = get_tool_definitions();

            let exec_shell = tools
                .iter()
                .find(|t| t.function.name == "exec_shell")
                .expect("exec_shell tool should exist");

            assert_eq!(exec_shell.kind, "function");
            assert!(
                exec_shell
                    .function
                    .description
                    .contains("Executes a shell command")
            );
        }

        #[test]
        fn test_all_tools_have_function_kind() {
            let tools = get_tool_definitions();

            for tool in tools {
                assert_eq!(tool.kind, "function");
            }
        }

        #[test]
        fn test_all_tools_have_non_empty_descriptions() {
            let tools = get_tool_definitions();

            for tool in tools {
                assert!(!tool.function.description.is_empty());
                assert!(!tool.function.name.is_empty());
            }
        }

        #[test]
        fn test_read_file_parameters() {
            let tools = get_tool_definitions();
            let read_file = tools
                .iter()
                .find(|t| t.function.name == "read_file")
                .unwrap();

            let params = &read_file.function.parameters;
            assert_eq!(params["type"], "object");
            assert!(params["properties"]["path"].is_object());
            assert_eq!(params["required"][0], "path");
        }

        #[test]
        fn test_write_file_parameters() {
            let tools = get_tool_definitions();
            let write_file = tools
                .iter()
                .find(|t| t.function.name == "write_file")
                .unwrap();

            let params = &write_file.function.parameters;
            assert_eq!(params["type"], "object");
            assert!(params["properties"]["path"].is_object());
            assert!(params["properties"]["content"].is_object());
            assert_eq!(params["required"][0], "path");
            assert_eq!(params["required"][1], "content");
        }

        #[test]
        fn test_exec_shell_parameters() {
            let tools = get_tool_definitions();
            let exec_shell = tools
                .iter()
                .find(|t| t.function.name == "exec_shell")
                .unwrap();

            let params = &exec_shell.function.parameters;
            assert_eq!(params["type"], "object");
            assert!(params["properties"]["command"].is_object());
            assert!(params["properties"]["args"].is_object());
            assert_eq!(params["required"][0], "command");
            assert_eq!(params["required"][1], "args");
        }
    }

    // Tests for get_project_root helper
    mod get_project_root_tests {
        use super::*;
        use std::sync::Mutex;
        use tempfile::TempDir;

        #[test]
        fn test_get_project_root_no_project() {
            let state = SessionState {
                project_root: Mutex::new(None),
                cancel_tx: tokio::sync::watch::channel(false).0,
                cancel_rx: tokio::sync::watch::channel(false).1,
            };

            let result = state.get_project_root();

            assert!(result.is_err());
            assert_eq!(result.unwrap_err(), "No project is currently open.");
        }

        #[test]
        fn test_get_project_root_success() {
            let temp_dir = TempDir::new().unwrap();
            let path = temp_dir.path().to_path_buf();
            let state = SessionState {
                project_root: Mutex::new(Some(path.clone())),
                cancel_tx: tokio::sync::watch::channel(false).0,
                cancel_rx: tokio::sync::watch::channel(false).1,
            };

            let result = state.get_project_root();

            assert!(result.is_ok());
            assert_eq!(result.unwrap(), path);
        }
    }
}

@@ -1,852 +0,0 @@
use crate::state::SessionState;
#[cfg(test)]
use crate::test_utils::MockStore;
use serde::Serialize;
use serde_json::json;
use std::fs;
use std::path::PathBuf;
use tauri::{AppHandle, State};
use tauri_plugin_store::StoreExt;

const STORE_PATH: &str = "store.json";
const KEY_LAST_PROJECT: &str = "last_project_path";
const KEY_SELECTED_MODEL: &str = "selected_model";

// -----------------------------------------------------------------------------
// Store Abstraction
// -----------------------------------------------------------------------------

/// Trait to abstract store operations for testing
pub trait StoreOps: Send + Sync {
    fn get(&self, key: &str) -> Option<serde_json::Value>;
    fn set(&self, key: &str, value: serde_json::Value);
    fn delete(&self, key: &str);
    fn save(&self) -> Result<(), String>;
}

// -----------------------------------------------------------------------------
// Store Wrapper for Production
// -----------------------------------------------------------------------------

/// Wrapper for Tauri Store that implements StoreOps
pub struct TauriStoreWrapper<'a> {
    pub store: &'a tauri_plugin_store::Store<tauri::Wry>,
}

impl<'a> StoreOps for TauriStoreWrapper<'a> {
    fn get(&self, key: &str) -> Option<serde_json::Value> {
        self.store.get(key)
    }

    fn set(&self, key: &str, value: serde_json::Value) {
        self.store.set(key, value);
    }

    fn delete(&self, key: &str) {
        self.store.delete(key);
    }

    fn save(&self) -> Result<(), String> {
        self.store
            .save()
            .map_err(|e| format!("Failed to save store: {}", e))
    }
}

// -----------------------------------------------------------------------------
// Helper Functions
// -----------------------------------------------------------------------------

/// Resolves a relative path against the active project root (pure function for testing).
/// Returns error if path attempts traversal (..).
fn resolve_path_impl(root: PathBuf, relative_path: &str) -> Result<PathBuf, String> {
    // specific check for traversal
    if relative_path.contains("..") {
        return Err("Security Violation: Directory traversal ('..') is not allowed.".to_string());
    }

    // Join path
    let full_path = root.join(relative_path);

    Ok(full_path)
}

/// Resolves a relative path against the active project root.
/// Returns error if no project is open or if path attempts traversal (..).
fn resolve_path(state: &State<'_, SessionState>, relative_path: &str) -> Result<PathBuf, String> {
    let root = state.inner().get_project_root()?;
    resolve_path_impl(root, relative_path)
}

// -----------------------------------------------------------------------------
// Commands
// -----------------------------------------------------------------------------

/// Validate that a path exists and is a directory (pure function for testing)
async fn validate_project_path(path: PathBuf) -> Result<(), String> {
    tauri::async_runtime::spawn_blocking(move || {
        if !path.exists() {
            return Err(format!("Path does not exist: {}", path.display()));
        }
        if !path.is_dir() {
            return Err(format!("Path is not a directory: {}", path.display()));
        }
        Ok(())
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

/// Open project implementation (testable with store abstraction)
async fn open_project_impl(
    path: String,
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<String, String> {
    let p = PathBuf::from(&path);

    // Validate path
    validate_project_path(p.clone()).await?;

    // Update session state
    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = Some(p.clone());
    }

    // Persist to store
    store.set(KEY_LAST_PROJECT, json!(path));
    store.save()?;

    Ok(path)
}

#[tauri::command]
pub async fn open_project(
    app: AppHandle,
    path: String,
    state: State<'_, SessionState>,
) -> Result<String, String> {
    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {}", e))?;
    let wrapper = TauriStoreWrapper { store: &store };
    open_project_impl(path, state.inner(), &wrapper).await
}

/// Close project implementation (testable with store abstraction)
fn close_project_impl(state: &SessionState, store: &dyn StoreOps) -> Result<(), String> {
    // Clear session state
    {
        let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
        *root = None;
    }

    // Clear from store
    store.delete(KEY_LAST_PROJECT);
    store.save()?;

    Ok(())
}

#[tauri::command]
pub async fn close_project(app: AppHandle, state: State<'_, SessionState>) -> Result<(), String> {
    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {}", e))?;
    let wrapper = TauriStoreWrapper { store: &store };
    close_project_impl(state.inner(), &wrapper)
}

/// Get current project implementation (testable with store abstraction)
fn get_current_project_impl(
    state: &SessionState,
    store: &dyn StoreOps,
) -> Result<Option<String>, String> {
    // 1. Check in-memory state
    {
        let root = state.project_root.lock().map_err(|e| e.to_string())?;
        if let Some(path) = &*root {
            return Ok(Some(path.to_string_lossy().to_string()));
        }
    }

    // 2. Check store
    if let Some(path_str) = store
        .get(KEY_LAST_PROJECT)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        let p = PathBuf::from(path_str);
        if p.exists() && p.is_dir() {
            // Update session state
            let mut root = state.project_root.lock().map_err(|e| e.to_string())?;
            *root = Some(p.clone());
            return Ok(Some(path_str.to_string()));
        }
    }

    Ok(None)
}

#[tauri::command]
pub async fn get_current_project(
    app: AppHandle,
    state: State<'_, SessionState>,
) -> Result<Option<String>, String> {
    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {}", e))?;
    let wrapper = TauriStoreWrapper { store: &store };
    get_current_project_impl(state.inner(), &wrapper)
}

/// Get model preference implementation (testable with store abstraction)
fn get_model_preference_impl(store: &dyn StoreOps) -> Result<Option<String>, String> {
    if let Some(model) = store
        .get(KEY_SELECTED_MODEL)
        .as_ref()
        .and_then(|val| val.as_str())
    {
        return Ok(Some(model.to_string()));
    }
    Ok(None)
}

#[tauri::command]
pub async fn get_model_preference(app: AppHandle) -> Result<Option<String>, String> {
    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {}", e))?;
    let wrapper = TauriStoreWrapper { store: &store };
    get_model_preference_impl(&wrapper)
}

/// Set model preference implementation (testable with store abstraction)
fn set_model_preference_impl(model: String, store: &dyn StoreOps) -> Result<(), String> {
    store.set(KEY_SELECTED_MODEL, json!(model));
    store.save()?;
    Ok(())
}

#[tauri::command]
pub async fn set_model_preference(app: AppHandle, model: String) -> Result<(), String> {
    let store = app
        .store(STORE_PATH)
        .map_err(|e| format!("Failed to access store: {}", e))?;
    let wrapper = TauriStoreWrapper { store: &store };
    set_model_preference_impl(model, &wrapper)
}

/// Read file implementation (pure function for testing)
async fn read_file_impl(full_path: PathBuf) -> Result<String, String> {
    tauri::async_runtime::spawn_blocking(move || {
        fs::read_to_string(&full_path).map_err(|e| format!("Failed to read file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

#[tauri::command]
pub async fn read_file(path: String, state: State<'_, SessionState>) -> Result<String, String> {
    let full_path = resolve_path(&state, &path)?;
    read_file_impl(full_path).await
}

/// Write file implementation (pure function for testing)
async fn write_file_impl(full_path: PathBuf, content: String) -> Result<(), String> {
    tauri::async_runtime::spawn_blocking(move || {
        // Ensure parent directory exists
        if let Some(parent) = full_path.parent() {
            fs::create_dir_all(parent)
                .map_err(|e| format!("Failed to create directories: {}", e))?;
        }

        fs::write(&full_path, content).map_err(|e| format!("Failed to write file: {}", e))
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

#[tauri::command]
pub async fn write_file(
    path: String,
    content: String,
    state: State<'_, SessionState>,
) -> Result<(), String> {
    let full_path = resolve_path(&state, &path)?;
    write_file_impl(full_path, content).await
}

#[derive(Serialize, Debug)]
pub struct FileEntry {
    name: String,
    kind: String, // "file" | "dir"
}

/// List directory implementation (pure function for testing)
async fn list_directory_impl(full_path: PathBuf) -> Result<Vec<FileEntry>, String> {
    tauri::async_runtime::spawn_blocking(move || {
        let entries = fs::read_dir(&full_path).map_err(|e| format!("Failed to read dir: {}", e))?;

        let mut result = Vec::new();
        for entry in entries {
            let entry = entry.map_err(|e| e.to_string())?;
            let ft = entry.file_type().map_err(|e| e.to_string())?;
            let name = entry.file_name().to_string_lossy().to_string();

            result.push(FileEntry {
                name,
                kind: if ft.is_dir() {
                    "dir".to_string()
                } else {
                    "file".to_string()
                },
            });
        }

        // Sort: directories first, then files
        result.sort_by(|a, b| match (a.kind.as_str(), b.kind.as_str()) {
            ("dir", "file") => std::cmp::Ordering::Less,
            ("file", "dir") => std::cmp::Ordering::Greater,
            _ => a.name.cmp(&b.name),
        });

        Ok(result)
    })
    .await
    .map_err(|e| format!("Task failed: {}", e))?
}

#[tauri::command]
pub async fn list_directory(
    path: String,
    state: State<'_, SessionState>,
) -> Result<Vec<FileEntry>, String> {
    let full_path = resolve_path(&state, &path)?;
    list_directory_impl(full_path).await
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use std::fs;
    use std::sync::Mutex;
    use tempfile::TempDir;

    /// Helper to create a test SessionState with a given root path
    fn create_test_state(root: Option<PathBuf>) -> SessionState {
        let (cancel_tx, cancel_rx) = tokio::sync::watch::channel(false);
        SessionState {
            project_root: Mutex::new(root),
            cancel_tx,
            cancel_rx,
        }
    }

    // Tests for validate_project_path function
    mod validate_project_path_tests {
        use super::*;

        #[tokio::test]
        async fn test_validate_project_path_valid_directory() {
            let temp_dir = TempDir::new().unwrap();
            let path = temp_dir.path().to_path_buf();

            let result = validate_project_path(path).await;

            assert!(result.is_ok());
        }

        #[tokio::test]
        async fn test_validate_project_path_not_exists() {
            let path = PathBuf::from("/nonexistent/path/xyz");
|
||||
|
||||
let result = validate_project_path(path).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Path does not exist"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_validate_project_path_is_file() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("test.txt");
|
||||
fs::write(&file_path, "content").unwrap();
|
||||
|
||||
let result = validate_project_path(file_path).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("not a directory"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_validate_project_path_nested_directory() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let nested = temp_dir.path().join("nested/dir");
|
||||
fs::create_dir_all(&nested).unwrap();
|
||||
|
||||
let result = validate_project_path(nested).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_validate_project_path_empty_directory() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let empty_dir = temp_dir.path().join("empty");
|
||||
fs::create_dir(&empty_dir).unwrap();
|
||||
|
||||
let result = validate_project_path(empty_dir).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for open_project_impl
|
||||
mod open_project_tests {
|
||||
use super::*;
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_open_project_impl_success() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let path = temp_dir.path().to_string_lossy().to_string();
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = open_project_impl(path.clone(), &state, &store).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), path);
|
||||
|
||||
// Verify state was updated
|
||||
let root = state.project_root.lock().unwrap();
|
||||
assert!(root.is_some());
|
||||
|
||||
// Verify store was updated
|
||||
assert!(store.get(KEY_LAST_PROJECT).is_some());
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_open_project_impl_invalid_path() {
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = open_project_impl("/nonexistent/path".to_string(), &state, &store).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("does not exist"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_open_project_impl_file_not_directory() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("file.txt");
|
||||
fs::write(&file_path, "content").unwrap();
|
||||
let path = file_path.to_string_lossy().to_string();
|
||||
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = open_project_impl(path, &state, &store).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("not a directory"));
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for close_project_impl
|
||||
mod close_project_tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_close_project_impl() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let state = create_test_state(Some(temp_dir.path().to_path_buf()));
|
||||
let store = MockStore::new();
|
||||
store.set(KEY_LAST_PROJECT, json!("/some/path"));
|
||||
|
||||
let result = close_project_impl(&state, &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
|
||||
// Verify state was cleared
|
||||
let root = state.project_root.lock().unwrap();
|
||||
assert!(root.is_none());
|
||||
|
||||
// Verify store was cleared
|
||||
assert!(store.get(KEY_LAST_PROJECT).is_none());
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for get_current_project_impl
|
||||
mod get_current_project_tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_get_current_project_impl_from_memory() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let path = temp_dir.path().to_path_buf();
|
||||
let state = create_test_state(Some(path.clone()));
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = get_current_project_impl(&state, &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), Some(path.to_string_lossy().to_string()));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_get_current_project_impl_from_store() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let path = temp_dir.path().to_string_lossy().to_string();
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
store.set(KEY_LAST_PROJECT, json!(path.clone()));
|
||||
|
||||
let result = get_current_project_impl(&state, &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), Some(path));
|
||||
|
||||
// Verify state was updated
|
||||
let root = state.project_root.lock().unwrap();
|
||||
assert!(root.is_some());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_get_current_project_impl_no_project() {
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = get_current_project_impl(&state, &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), None);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_get_current_project_impl_store_path_invalid() {
|
||||
let state = create_test_state(None);
|
||||
let store = MockStore::new();
|
||||
store.set(KEY_LAST_PROJECT, json!("/nonexistent/path"));
|
||||
|
||||
let result = get_current_project_impl(&state, &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), None);
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for model preference functions
|
||||
mod model_preference_tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_get_model_preference_impl_exists() {
|
||||
let store = MockStore::new();
|
||||
store.set(KEY_SELECTED_MODEL, json!("gpt-4"));
|
||||
|
||||
let result = get_model_preference_impl(&store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), Some("gpt-4".to_string()));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_get_model_preference_impl_not_exists() {
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = get_model_preference_impl(&store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), None);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_set_model_preference_impl() {
|
||||
let store = MockStore::new();
|
||||
|
||||
let result = set_model_preference_impl("claude-3".to_string(), &store);
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(store.get(KEY_SELECTED_MODEL), Some(json!("claude-3")));
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for resolve_path helper function
|
||||
mod resolve_path_tests {
|
||||
use super::*;
|
||||
|
||||
#[test]
|
||||
fn test_resolve_path_no_project_open() {
|
||||
let state = create_test_state(None);
|
||||
|
||||
let result = state.get_project_root();
|
||||
|
||||
assert!(result.is_err());
|
||||
assert_eq!(result.unwrap_err(), "No project is currently open.");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_resolve_path_valid() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root.clone(), "test.txt");
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), root.join("test.txt"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_resolve_path_blocks_traversal() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root, "../etc/passwd");
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Directory traversal"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_resolve_path_nested() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root.clone(), "src/main.rs");
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), root.join("src/main.rs"));
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for read_file command
|
||||
mod read_file_tests {
|
||||
use super::*;
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_read_file_success() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("test.txt");
|
||||
fs::write(&file_path, "Hello, World!").unwrap();
|
||||
|
||||
let result = read_file_impl(file_path).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), "Hello, World!");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_read_file_not_found() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("nonexistent.txt");
|
||||
|
||||
let result = read_file_impl(file_path).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Failed to read file"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_read_file_no_project() {
|
||||
let state = create_test_state(None);
|
||||
|
||||
let result = state.get_project_root();
|
||||
|
||||
assert!(result.is_err());
|
||||
assert_eq!(result.unwrap_err(), "No project is currently open.");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_read_file_blocks_traversal() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root, "../etc/passwd");
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Directory traversal"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_read_file_nested_path() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let nested_dir = temp_dir.path().join("src");
|
||||
fs::create_dir(&nested_dir).unwrap();
|
||||
let file_path = nested_dir.join("lib.rs");
|
||||
fs::write(&file_path, "pub fn main() {}").unwrap();
|
||||
|
||||
let result = read_file_impl(file_path).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap(), "pub fn main() {}");
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for write_file command
|
||||
mod write_file_tests {
|
||||
use super::*;
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_write_file_success() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("test.txt");
|
||||
|
||||
let result = write_file_impl(file_path.clone(), "Hello, World!".to_string()).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
let content = fs::read_to_string(file_path).unwrap();
|
||||
assert_eq!(content, "Hello, World!");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_write_file_creates_parent_dirs() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("src/nested/test.txt");
|
||||
|
||||
let result = write_file_impl(file_path.clone(), "content".to_string()).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert!(file_path.exists());
|
||||
let content = fs::read_to_string(file_path).unwrap();
|
||||
assert_eq!(content, "content");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_write_file_overwrites_existing() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let file_path = temp_dir.path().join("test.txt");
|
||||
fs::write(&file_path, "old content").unwrap();
|
||||
|
||||
let result = write_file_impl(file_path.clone(), "new content".to_string()).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
let content = fs::read_to_string(file_path).unwrap();
|
||||
assert_eq!(content, "new content");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_write_file_no_project() {
|
||||
let state = create_test_state(None);
|
||||
|
||||
let result = state.get_project_root();
|
||||
|
||||
assert!(result.is_err());
|
||||
assert_eq!(result.unwrap_err(), "No project is currently open.");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_write_file_blocks_traversal() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root, "../etc/passwd");
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Directory traversal"));
|
||||
}
|
||||
}
|
||||
|
||||
// Tests for list_directory command
|
||||
mod list_directory_tests {
|
||||
use super::*;
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_success() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
fs::write(temp_dir.path().join("file1.txt"), "").unwrap();
|
||||
fs::write(temp_dir.path().join("file2.txt"), "").unwrap();
|
||||
fs::create_dir(temp_dir.path().join("dir1")).unwrap();
|
||||
|
||||
let result = list_directory_impl(temp_dir.path().to_path_buf()).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
let entries = result.unwrap();
|
||||
assert_eq!(entries.len(), 3);
|
||||
|
||||
// Check that directories come first
|
||||
assert_eq!(entries[0].kind, "dir");
|
||||
assert_eq!(entries[0].name, "dir1");
|
||||
|
||||
// Files should be sorted alphabetically after directories
|
||||
assert_eq!(entries[1].kind, "file");
|
||||
assert_eq!(entries[2].kind, "file");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_empty() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let empty_dir = temp_dir.path().join("empty");
|
||||
fs::create_dir(&empty_dir).unwrap();
|
||||
|
||||
let result = list_directory_impl(empty_dir).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
assert_eq!(result.unwrap().len(), 0);
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_not_found() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let nonexistent = temp_dir.path().join("nonexistent");
|
||||
|
||||
let result = list_directory_impl(nonexistent).await;
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Failed to read dir"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_no_project() {
|
||||
let state = create_test_state(None);
|
||||
|
||||
let result = state.get_project_root();
|
||||
|
||||
assert!(result.is_err());
|
||||
assert_eq!(result.unwrap_err(), "No project is currently open.");
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_blocks_traversal() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
let root = temp_dir.path().to_path_buf();
|
||||
|
||||
let result = resolve_path_impl(root, "../etc");
|
||||
|
||||
assert!(result.is_err());
|
||||
assert!(result.unwrap_err().contains("Directory traversal"));
|
||||
}
|
||||
|
||||
#[tokio::test]
|
||||
async fn test_list_directory_sorting() {
|
||||
let temp_dir = TempDir::new().unwrap();
|
||||
fs::write(temp_dir.path().join("zebra.txt"), "").unwrap();
|
||||
fs::write(temp_dir.path().join("apple.txt"), "").unwrap();
|
||||
fs::create_dir(temp_dir.path().join("zoo")).unwrap();
|
||||
fs::create_dir(temp_dir.path().join("animal")).unwrap();
|
||||
|
||||
let result = list_directory_impl(temp_dir.path().to_path_buf()).await;
|
||||
|
||||
assert!(result.is_ok());
|
||||
let entries = result.unwrap();
|
||||
|
||||
// Directories first (alphabetically)
|
||||
assert_eq!(entries[0].name, "animal");
|
||||
assert_eq!(entries[0].kind, "dir");
|
||||
assert_eq!(entries[1].name, "zoo");
|
||||
assert_eq!(entries[1].kind, "dir");
|
||||
|
||||
// Files next (alphabetically)
|
||||
assert_eq!(entries[2].name, "apple.txt");
|
||||
assert_eq!(entries[2].kind, "file");
|
||||
assert_eq!(entries[3].name, "zebra.txt");
|
||||
assert_eq!(entries[3].kind, "file");
|
||||
}
|
||||
}
|
||||
}
|
||||
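The `resolve_path_impl` guard exercised by the traversal tests above is defined elsewhere in this file. As a rough illustration only (a hypothetical sketch, not the project's actual implementation), a guard of this shape rejects any relative path containing a `..` component before joining it onto the project root:

```rust
use std::path::{Component, Path, PathBuf};

// Hypothetical sketch of a traversal guard like the one the
// `resolve_path` tests above exercise; not the deleted module's code.
fn resolve_path_sketch(root: &Path, rel: &str) -> Result<PathBuf, String> {
    let rel_path = Path::new(rel);
    // Reject any `..` component so the result cannot escape `root`.
    if rel_path
        .components()
        .any(|c| matches!(c, Component::ParentDir))
    {
        return Err("Directory traversal is not allowed".to_string());
    }
    Ok(root.join(rel_path))
}
```

Checking components rather than the raw string avoids false positives on file names that merely contain dots.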
@@ -1,373 +0,0 @@
use crate::state::SessionState;
use ignore::WalkBuilder;
use serde::Serialize;
use std::fs;
use std::path::PathBuf;
use tauri::State;

// -----------------------------------------------------------------------------
// Helper Functions
// -----------------------------------------------------------------------------

/// Helper to get the root path (cloned) without joining
fn get_project_root(state: &State<'_, SessionState>) -> Result<PathBuf, String> {
    state.inner().get_project_root()
}

// -----------------------------------------------------------------------------
// Commands
// -----------------------------------------------------------------------------

#[derive(Serialize, Debug)]
pub struct SearchResult {
    path: String, // Relative path
    matches: usize,
}

/// Search files implementation (pure function for testing)
pub async fn search_files_impl(query: String, root: PathBuf) -> Result<Vec<SearchResult>, String> {
    let root_clone = root.clone();

    // Run computationally expensive search on a blocking thread
    let results = tauri::async_runtime::spawn_blocking(move || {
        let mut matches = Vec::new();
        // Default to respecting .gitignore
        let walker = WalkBuilder::new(&root_clone).git_ignore(true).build();

        for result in walker {
            match result {
                Ok(entry) => {
                    if !entry.file_type().map(|ft| ft.is_file()).unwrap_or(false) {
                        continue;
                    }

                    let path = entry.path();
                    // Try to read the file.
                    // Note: This is a naive implementation reading whole files into memory.
                    // For production, we should stream/buffer reads or use grep-searcher.
                    if let Ok(content) = fs::read_to_string(path) {
                        // Simple substring search (case-sensitive)
                        if content.contains(&query) {
                            // Compute relative path for display
                            let relative = path
                                .strip_prefix(&root_clone)
                                .unwrap_or(path)
                                .to_string_lossy()
                                .to_string();

                            matches.push(SearchResult {
                                path: relative,
                                matches: 1, // Simplified count for now
                            });
                        }
                    }
                }
                Err(err) => eprintln!("Error walking dir: {}", err),
            }
        }
        matches
    })
    .await
    .map_err(|e| format!("Search task failed: {}", e))?;

    Ok(results)
}

/// Searches for files containing the specified query string within the current project.
///
/// This command performs a case-sensitive substring search across all files in the project,
/// respecting `.gitignore` rules by default. The search is executed on a blocking thread
/// to avoid blocking the async runtime.
///
/// # Arguments
///
/// * `query` - The search string to look for in file contents
/// * `state` - The session state containing the project root path
///
/// # Returns
///
/// Returns a `Vec<SearchResult>` containing:
/// - `path`: The relative path of each matching file
/// - `matches`: The number of matches (currently simplified to 1 per file)
///
/// # Errors
///
/// Returns an error if:
/// - No project is currently open
/// - The project root lock cannot be acquired
/// - The search task fails to execute
///
/// # Note
///
/// This is a naive implementation that reads entire files into memory.
/// For production use, consider using streaming/buffered reads or the `grep-searcher` crate.
#[tauri::command]
pub async fn search_files(
    query: String,
    state: State<'_, SessionState>,
) -> Result<Vec<SearchResult>, String> {
    let root = get_project_root(&state)?;
    search_files_impl(query, root).await
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use std::fs;
    use std::sync::Mutex;
    use tempfile::TempDir;

    /// Helper to create a test SessionState with a given root path
    fn create_test_state(root: Option<PathBuf>) -> SessionState {
        let (cancel_tx, cancel_rx) = tokio::sync::watch::channel(false);
        SessionState {
            project_root: Mutex::new(root),
            cancel_tx,
            cancel_rx,
        }
    }

    #[tokio::test]
    async fn test_search_files_no_project_open() {
        let state = create_test_state(None);

        let result = state.get_project_root();

        assert!(result.is_err());
        assert_eq!(result.unwrap_err(), "No project is currently open.");
    }

    #[tokio::test]
    async fn test_search_files_finds_matching_file() {
        let temp_dir = TempDir::new().unwrap();
        let test_file = temp_dir.path().join("test.txt");
        fs::write(&test_file, "This is a test file with some content").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("test".to_string(), root).await.unwrap();

        assert_eq!(results.len(), 1);
        assert_eq!(results[0].path, "test.txt");
        assert_eq!(results[0].matches, 1);
    }

    #[tokio::test]
    async fn test_search_files_multiple_matches() {
        let temp_dir = TempDir::new().unwrap();

        // Create multiple files with matching content
        fs::write(temp_dir.path().join("file1.txt"), "hello world").unwrap();
        fs::write(temp_dir.path().join("file2.txt"), "hello again").unwrap();
        fs::write(temp_dir.path().join("file3.txt"), "goodbye").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("hello".to_string(), root).await.unwrap();

        assert_eq!(results.len(), 2);
        let paths: Vec<&str> = results.iter().map(|r| r.path.as_str()).collect();
        assert!(paths.contains(&"file1.txt"));
        assert!(paths.contains(&"file2.txt"));
    }

    #[tokio::test]
    async fn test_search_files_no_matches() {
        let temp_dir = TempDir::new().unwrap();
        fs::write(temp_dir.path().join("test.txt"), "This is some content").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("nonexistent".to_string(), root)
            .await
            .unwrap();

        assert_eq!(results.len(), 0);
    }

    #[tokio::test]
    async fn test_search_files_case_sensitive() {
        let temp_dir = TempDir::new().unwrap();
        fs::write(temp_dir.path().join("test.txt"), "Hello World").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        // Search for lowercase - should not match
        let root = state.get_project_root().unwrap();
        let results = search_files_impl("hello".to_string(), root.clone())
            .await
            .unwrap();
        assert_eq!(results.len(), 0);

        // Search for correct case - should match
        let results = search_files_impl("Hello".to_string(), root).await.unwrap();
        assert_eq!(results.len(), 1);
    }

    #[tokio::test]
    async fn test_search_files_nested_directories() {
        let temp_dir = TempDir::new().unwrap();

        // Create nested directory structure
        let nested_dir = temp_dir.path().join("subdir");
        fs::create_dir(&nested_dir).unwrap();

        fs::write(temp_dir.path().join("root.txt"), "match").unwrap();
        fs::write(nested_dir.join("nested.txt"), "match").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("match".to_string(), root).await.unwrap();

        assert_eq!(results.len(), 2);
        let paths: Vec<&str> = results.iter().map(|r| r.path.as_str()).collect();
        assert!(paths.contains(&"root.txt"));
        assert!(paths.contains(&"subdir/nested.txt") || paths.contains(&"subdir\\nested.txt"));
    }

    #[tokio::test]
    async fn test_search_files_respects_gitignore() {
        let temp_dir = TempDir::new().unwrap();

        // Initialize git repo (required for ignore crate to respect .gitignore)
        std::process::Command::new("git")
            .args(["init"])
            .current_dir(temp_dir.path())
            .output()
            .unwrap();

        // Create .gitignore
        fs::write(temp_dir.path().join(".gitignore"), "ignored.txt\n").unwrap();

        // Create files
        fs::write(temp_dir.path().join("included.txt"), "searchterm").unwrap();
        fs::write(temp_dir.path().join("ignored.txt"), "searchterm").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("searchterm".to_string(), root)
            .await
            .unwrap();

        // Should find the non-ignored file, but not the ignored one
        // The gitignore file itself might be included
        let has_included = results.iter().any(|r| r.path == "included.txt");
        let has_ignored = results.iter().any(|r| r.path == "ignored.txt");

        assert!(has_included, "included.txt should be found");
        assert!(
            !has_ignored,
            "ignored.txt should NOT be found (it's in .gitignore)"
        );
    }

    #[tokio::test]
    async fn test_search_files_skips_binary_files() {
        let temp_dir = TempDir::new().unwrap();

        // Create a text file
        fs::write(temp_dir.path().join("text.txt"), "searchable").unwrap();

        // Create a binary file (will fail to read as UTF-8)
        fs::write(temp_dir.path().join("binary.bin"), [0xFF, 0xFE, 0xFD]).unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("searchable".to_string(), root)
            .await
            .unwrap();

        // Should only find the text file
        assert_eq!(results.len(), 1);
        assert_eq!(results[0].path, "text.txt");
    }

    #[tokio::test]
    async fn test_search_files_empty_query() {
        let temp_dir = TempDir::new().unwrap();
        fs::write(temp_dir.path().join("test.txt"), "content").unwrap();

        let state = create_test_state(Some(temp_dir.path().to_path_buf()));

        let root = state.get_project_root().unwrap();
        let results = search_files_impl("".to_string(), root).await.unwrap();

        // Empty string is contained in all strings, so should match
        assert_eq!(results.len(), 1);
    }

    #[cfg(unix)]
    #[tokio::test]
    async fn test_search_files_handles_permission_errors() {
        use std::os::unix::fs::PermissionsExt;

        let temp_dir = TempDir::new().unwrap();

        // Create a subdirectory with a file
        let restricted_dir = temp_dir.path().join("restricted");
        fs::create_dir(&restricted_dir).unwrap();
        fs::write(restricted_dir.join("secret.txt"), "searchterm").unwrap();

        // Remove read permissions from the directory
        let mut perms = fs::metadata(&restricted_dir).unwrap().permissions();
        perms.set_mode(0o000);
        fs::set_permissions(&restricted_dir, perms).unwrap();

        // Create an accessible file
        fs::write(temp_dir.path().join("accessible.txt"), "searchterm").unwrap();

        let root = temp_dir.path().to_path_buf();
        let results = search_files_impl("searchterm".to_string(), root)
            .await
            .unwrap();

        // Should find the accessible file, walker error for restricted dir is logged but not fatal
        assert_eq!(results.len(), 1);
        assert_eq!(results[0].path, "accessible.txt");

        // Restore permissions for cleanup
        let mut perms = fs::metadata(&restricted_dir).unwrap().permissions();
        perms.set_mode(0o755);
        fs::set_permissions(&restricted_dir, perms).unwrap();
    }

    #[tokio::test]
    async fn test_search_files_handles_broken_symlink() {
        let temp_dir = TempDir::new().unwrap();

        // Create a broken symlink (points to non-existent file)
        #[cfg(unix)]
        {
            use std::os::unix::fs as unix_fs;
            unix_fs::symlink("/nonexistent/path", temp_dir.path().join("broken_link")).unwrap();
        }
        #[cfg(windows)]
        {
            use std::os::windows::fs as windows_fs;
            windows_fs::symlink_file("C:\\nonexistent\\path", temp_dir.path().join("broken_link"))
                .unwrap();
        }

        // Create a normal file
        fs::write(temp_dir.path().join("normal.txt"), "searchterm").unwrap();

        let root = temp_dir.path().to_path_buf();
        let results = search_files_impl("searchterm".to_string(), root)
            .await
            .unwrap();

        // Should find the normal file, broken symlink error is logged but not fatal
        assert_eq!(results.len(), 1);
        assert_eq!(results[0].path, "normal.txt");
    }
}
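The doc comment above flags `matches: 1` as a simplification. If an exact per-file count were ever wanted, `str::matches` already yields non-overlapping occurrences; a minimal sketch (hypothetical helper, not part of the deleted module):

```rust
// Hypothetical helper that could replace the simplified `matches: 1`
// count: non-overlapping occurrences of `query` within `content`.
fn count_matches(content: &str, query: &str) -> usize {
    if query.is_empty() {
        // Guard the empty query, which would otherwise match at every position.
        return 0;
    }
    content.matches(query).count()
}
```

Note this changes the empty-query behavior documented in `test_search_files_empty_query`, so the file-level `contains` check would still decide whether a file matches at all.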
@@ -1,280 +0,0 @@
use crate::state::SessionState;
use serde::Serialize;
use std::path::PathBuf;
use std::process::Command;
use tauri::State;

// -----------------------------------------------------------------------------
// Helper Functions
// -----------------------------------------------------------------------------

/// Helper to get the root path (cloned) without joining
fn get_project_root(state: &State<'_, SessionState>) -> Result<PathBuf, String> {
    state.inner().get_project_root()
}

// -----------------------------------------------------------------------------
// Commands
// -----------------------------------------------------------------------------

#[derive(Serialize, Debug)]
pub struct CommandOutput {
    stdout: String,
    stderr: String,
    exit_code: i32,
}

/// Execute shell command logic (pure function for testing)
async fn exec_shell_impl(
    command: String,
    args: Vec<String>,
    root: PathBuf,
) -> Result<CommandOutput, String> {
    // Security Allowlist
    let allowed_commands = [
        "git", "cargo", "npm", "yarn", "pnpm", "node", "bun", "ls", "find", "grep", "mkdir", "rm",
        "mv", "cp", "touch", "rustc", "rustfmt",
    ];

    if !allowed_commands.contains(&command.as_str()) {
        return Err(format!("Command '{}' is not in the allowlist.", command));
    }

    let output = tauri::async_runtime::spawn_blocking(move || {
        Command::new(&command)
            .args(&args)
            .current_dir(root)
            .output()
    })
    .await
    .map_err(|e| format!("Task join error: {}", e))?
    .map_err(|e| format!("Failed to execute command: {}", e))?;

    Ok(CommandOutput {
        stdout: String::from_utf8_lossy(&output.stdout).to_string(),
        stderr: String::from_utf8_lossy(&output.stderr).to_string(),
        exit_code: output.status.code().unwrap_or(-1),
    })
}

#[tauri::command]
pub async fn exec_shell(
    command: String,
    args: Vec<String>,
    state: State<'_, SessionState>,
) -> Result<CommandOutput, String> {
    let root = get_project_root(&state)?;
    exec_shell_impl(command, args, root).await
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use std::fs;
    use std::sync::Mutex;
    use tempfile::TempDir;

    /// Helper to create a test SessionState with a given root path
    fn create_test_state(root: Option<PathBuf>) -> SessionState {
        let (cancel_tx, cancel_rx) = tokio::sync::watch::channel(false);
        SessionState {
            project_root: Mutex::new(root),
            cancel_tx,
            cancel_rx,
        }
    }

    // Tests for get_project_root helper function
    mod get_project_root_tests {
        use super::*;

        #[test]
        fn test_get_project_root_no_project() {
            let state = create_test_state(None);
            let result = state.get_project_root();

            assert!(result.is_err());
            assert_eq!(result.unwrap_err(), "No project is currently open.");
        }

        #[test]
        fn test_get_project_root_success() {
            let temp_dir = TempDir::new().unwrap();
            let path = temp_dir.path().to_path_buf();
            let state = create_test_state(Some(path.clone()));
            let result = state.get_project_root();

            assert!(result.is_ok());
            assert_eq!(result.unwrap(), path);
        }
    }

    // Tests for exec_shell command
    mod exec_shell_tests {
        use super::*;

        #[tokio::test]
        async fn test_exec_shell_success() {
            let temp_dir = TempDir::new().unwrap();
            fs::write(temp_dir.path().join("test.txt"), "hello").unwrap();

            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl("ls".to_string(), vec![], root).await;

            assert!(result.is_ok());
            let output = result.unwrap();
            assert!(output.stdout.contains("test.txt"));
            assert_eq!(output.exit_code, 0);
        }

        #[tokio::test]
        async fn test_exec_shell_with_args() {
            let temp_dir = TempDir::new().unwrap();
            fs::write(temp_dir.path().join("test.txt"), "hello world").unwrap();

            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            // Use grep to search in file
            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl(
                "grep".to_string(),
                vec!["hello".to_string(), "test.txt".to_string()],
                root,
            )
            .await;

            assert!(result.is_ok());
            let output = result.unwrap();
            assert!(output.stdout.contains("hello world"));
            assert_eq!(output.exit_code, 0);
        }

        #[tokio::test]
        async fn test_exec_shell_command_not_in_allowlist() {
            let temp_dir = TempDir::new().unwrap();
            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl("curl".to_string(), vec![], root).await;

            assert!(result.is_err());
            assert!(result.unwrap_err().contains("not in the allowlist"));
        }

        #[tokio::test]
        async fn test_exec_shell_no_project_open() {
            let state = create_test_state(None);

            let result = state.get_project_root();

            assert!(result.is_err());
            assert_eq!(result.unwrap_err(), "No project is currently open.");
        }

        #[tokio::test]
        async fn test_exec_shell_command_failure() {
            let temp_dir = TempDir::new().unwrap();
            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            // Try to list a nonexistent file
            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl(
                "ls".to_string(),
                vec!["nonexistent_file_xyz.txt".to_string()],
                root,
            )
            .await;

            assert!(result.is_ok());
            let output = result.unwrap();
            assert!(!output.stderr.is_empty());
            assert_ne!(output.exit_code, 0);
        }

        #[tokio::test]
        async fn test_exec_shell_git_command() {
            let temp_dir = TempDir::new().unwrap();

            // Initialize git repo
            std::process::Command::new("git")
                .args(["init"])
                .current_dir(temp_dir.path())
                .output()
                .unwrap();

            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl("git".to_string(), vec!["status".to_string()], root).await;

            assert!(result.is_ok());
            let output = result.unwrap();
            assert_eq!(output.exit_code, 0);
            assert!(!output.stdout.is_empty());
        }

        #[tokio::test]
        async fn test_exec_shell_mkdir_command() {
            let temp_dir = TempDir::new().unwrap();
            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let root = state.get_project_root().unwrap();
            let result =
                exec_shell_impl("mkdir".to_string(), vec!["test_dir".to_string()], root).await;

            assert!(result.is_ok());
            let output = result.unwrap();
            assert_eq!(output.exit_code, 0);
            assert!(temp_dir.path().join("test_dir").exists());
        }

        #[tokio::test]
        async fn test_exec_shell_all_allowed_commands() {
            let temp_dir = TempDir::new().unwrap();
            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let allowed_commands = [
                "git", "cargo", "npm", "yarn", "pnpm", "node", "bun", "ls", "find", "grep",
                "mkdir", "rm", "mv", "cp", "touch", "rustc", "rustfmt",
            ];

            let root = state.get_project_root().unwrap();
            for cmd in allowed_commands {
                // Just verify the command is allowed, not necessarily successful
                let result = exec_shell_impl(cmd.to_string(), vec![], root.clone()).await;

                // Should not fail with "not in allowlist" error
                if result.is_err() {
                    assert!(!result.unwrap_err().contains("not in the allowlist"));
                }
            }
        }

        #[tokio::test]
        async fn test_exec_shell_output_encoding() {
            let temp_dir = TempDir::new().unwrap();
            fs::write(temp_dir.path().join("test.txt"), "Hello 世界").unwrap();

            let state = create_test_state(Some(temp_dir.path().to_path_buf()));

            let root = state.get_project_root().unwrap();
            let result = exec_shell_impl(
                "grep".to_string(),
                vec!["Hello".to_string(), "test.txt".to_string()],
                root,
            )
            .await;

            assert!(result.is_ok());
            let output = result.unwrap();
            // Should handle UTF-8 content
            assert!(output.stdout.contains("Hello"));
        }
    }
}
@@ -1,36 +0,0 @@
mod commands;
mod llm;
mod state;

#[cfg(test)]
pub mod test_utils;

use state::SessionState;

#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    tauri::Builder::default()
        .plugin(tauri_plugin_opener::init())
        .plugin(tauri_plugin_dialog::init())
        .plugin(tauri_plugin_store::Builder::default().build())
        .manage(SessionState::default())
        .invoke_handler(tauri::generate_handler![
            commands::fs::open_project,
            commands::fs::close_project,
            commands::fs::get_current_project,
            commands::fs::get_model_preference,
            commands::fs::set_model_preference,
            commands::fs::read_file,
            commands::fs::write_file,
            commands::fs::list_directory,
            commands::search::search_files,
            commands::shell::exec_shell,
            commands::chat::chat,
            commands::chat::get_ollama_models,
            commands::chat::cancel_chat,
            commands::chat::get_anthropic_api_key_exists,
            commands::chat::set_anthropic_api_key
        ])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
@@ -1,353 +0,0 @@
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::fmt::Debug;

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum Role {
    System,
    User,
    Assistant,
    Tool,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Message {
    pub role: Role,
    pub content: String,

    // For assistant messages that request tool execution
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_calls: Option<Vec<ToolCall>>,

    // For tool output messages, we need to link back to the call ID
    // Note: OpenAI uses 'tool_call_id', Ollama sometimes just relies on sequence.
    // We will include it for compatibility.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_call_id: Option<String>,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolCall {
    // ID is required by OpenAI, optional/generated for Ollama depending on version
    pub id: Option<String>,
    pub function: FunctionCall,
    #[serde(rename = "type")]
    pub kind: String, // usually "function"
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct FunctionCall {
    pub name: String,
    pub arguments: String, // JSON string of arguments
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolDefinition {
    #[serde(rename = "type")]
    pub kind: String, // "function"
    pub function: ToolFunctionDefinition,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct ToolFunctionDefinition {
    pub name: String,
    pub description: String,
    pub parameters: serde_json::Value, // JSON Schema object
}

#[derive(Debug, Serialize, Deserialize)]
pub struct CompletionResponse {
    pub content: Option<String>,
    pub tool_calls: Option<Vec<ToolCall>>,
}

/// The abstraction for different LLM providers (Ollama, Anthropic, etc.)
#[async_trait]
#[allow(dead_code)]
pub trait ModelProvider: Send + Sync {
    async fn chat(
        &self,
        model: &str,
        messages: &[Message],
        tools: &[ToolDefinition],
    ) -> Result<CompletionResponse, String>;
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_role_serialization() {
        let system = Role::System;
        let user = Role::User;
        let assistant = Role::Assistant;
        let tool = Role::Tool;

        assert_eq!(serde_json::to_string(&system).unwrap(), r#""system""#);
        assert_eq!(serde_json::to_string(&user).unwrap(), r#""user""#);
        assert_eq!(serde_json::to_string(&assistant).unwrap(), r#""assistant""#);
        assert_eq!(serde_json::to_string(&tool).unwrap(), r#""tool""#);
    }

    #[test]
    fn test_role_deserialization() {
        let system: Role = serde_json::from_str(r#""system""#).unwrap();
        let user: Role = serde_json::from_str(r#""user""#).unwrap();
        let assistant: Role = serde_json::from_str(r#""assistant""#).unwrap();
        let tool: Role = serde_json::from_str(r#""tool""#).unwrap();

        assert_eq!(system, Role::System);
        assert_eq!(user, Role::User);
        assert_eq!(assistant, Role::Assistant);
        assert_eq!(tool, Role::Tool);
    }

    #[test]
    fn test_message_serialization_simple() {
        let msg = Message {
            role: Role::User,
            content: "Hello".to_string(),
            tool_calls: None,
            tool_call_id: None,
        };

        let json = serde_json::to_string(&msg).unwrap();
        assert!(json.contains(r#""role":"user""#));
        assert!(json.contains(r#""content":"Hello""#));
        assert!(!json.contains("tool_calls"));
        assert!(!json.contains("tool_call_id"));
    }

    #[test]
    fn test_message_serialization_with_tool_calls() {
        let msg = Message {
            role: Role::Assistant,
            content: "I'll help you with that".to_string(),
            tool_calls: Some(vec![ToolCall {
                id: Some("call_123".to_string()),
                function: FunctionCall {
                    name: "read_file".to_string(),
                    arguments: r#"{"path":"test.txt"}"#.to_string(),
                },
                kind: "function".to_string(),
            }]),
            tool_call_id: None,
        };

        let json = serde_json::to_string(&msg).unwrap();
        assert!(json.contains(r#""role":"assistant""#));
        assert!(json.contains("tool_calls"));
        assert!(json.contains("read_file"));
    }

    #[test]
    fn test_message_deserialization() {
        let json = r#"{
            "role": "user",
            "content": "Hello world"
        }"#;

        let msg: Message = serde_json::from_str(json).unwrap();
        assert_eq!(msg.role, Role::User);
        assert_eq!(msg.content, "Hello world");
        assert!(msg.tool_calls.is_none());
        assert!(msg.tool_call_id.is_none());
    }

    #[test]
    fn test_tool_call_serialization() {
        let tool_call = ToolCall {
            id: Some("call_abc".to_string()),
            function: FunctionCall {
                name: "write_file".to_string(),
                arguments: r#"{"path":"out.txt","content":"data"}"#.to_string(),
            },
            kind: "function".to_string(),
        };

        let json = serde_json::to_string(&tool_call).unwrap();
        assert!(json.contains(r#""id":"call_abc""#));
        assert!(json.contains(r#""name":"write_file""#));
        assert!(json.contains(r#""type":"function""#));
    }

    #[test]
    fn test_tool_call_deserialization() {
        let json = r#"{
            "id": "call_xyz",
            "function": {
                "name": "list_directory",
                "arguments": "{\"path\":\".\"}"
            },
            "type": "function"
        }"#;

        let tool_call: ToolCall = serde_json::from_str(json).unwrap();
        assert_eq!(tool_call.id, Some("call_xyz".to_string()));
        assert_eq!(tool_call.function.name, "list_directory");
        assert_eq!(tool_call.kind, "function");
    }

    #[test]
    fn test_function_call_with_complex_arguments() {
        let func_call = FunctionCall {
            name: "exec_shell".to_string(),
            arguments: r#"{"command":"git","args":["status","--short"]}"#.to_string(),
        };

        let json = serde_json::to_string(&func_call).unwrap();
        assert!(json.contains("exec_shell"));
        assert!(json.contains("git"));
        assert!(json.contains("status"));
    }

    #[test]
    fn test_tool_definition_serialization() {
        let tool_def = ToolDefinition {
            kind: "function".to_string(),
            function: ToolFunctionDefinition {
                name: "read_file".to_string(),
                description: "Reads a file from disk".to_string(),
                parameters: serde_json::json!({
                    "type": "object",
                    "properties": {
                        "path": {
                            "type": "string",
                            "description": "File path"
                        }
                    },
                    "required": ["path"]
                }),
            },
        };

        let json = serde_json::to_string(&tool_def).unwrap();
        assert!(json.contains(r#""type":"function""#));
        assert!(json.contains("read_file"));
        assert!(json.contains("Reads a file from disk"));
    }

    #[test]
    fn test_tool_definition_deserialization() {
        let json = r#"{
            "type": "function",
            "function": {
                "name": "search_files",
                "description": "Search for files",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string"
                        }
                    },
                    "required": ["query"]
                }
            }
        }"#;

        let tool_def: ToolDefinition = serde_json::from_str(json).unwrap();
        assert_eq!(tool_def.kind, "function");
        assert_eq!(tool_def.function.name, "search_files");
        assert_eq!(tool_def.function.description, "Search for files");
    }

    #[test]
    fn test_completion_response_with_content() {
        let response = CompletionResponse {
            content: Some("Here is the answer".to_string()),
            tool_calls: None,
        };

        assert!(response.content.is_some());
        assert!(response.tool_calls.is_none());
    }

    #[test]
    fn test_completion_response_with_tool_calls() {
        let response = CompletionResponse {
            content: None,
            tool_calls: Some(vec![ToolCall {
                id: Some("call_1".to_string()),
                function: FunctionCall {
                    name: "test_func".to_string(),
                    arguments: "{}".to_string(),
                },
                kind: "function".to_string(),
            }]),
        };

        assert!(response.content.is_none());
        assert!(response.tool_calls.is_some());
        assert_eq!(response.tool_calls.unwrap().len(), 1);
    }

    #[test]
    fn test_message_with_tool_call_id() {
        let msg = Message {
            role: Role::Tool,
            content: "File content here".to_string(),
            tool_calls: None,
            tool_call_id: Some("call_123".to_string()),
        };

        let json = serde_json::to_string(&msg).unwrap();
        assert!(json.contains(r#""role":"tool""#));
        assert!(json.contains(r#""tool_call_id":"call_123""#));
    }

    #[test]
    fn test_tool_call_without_id() {
        let tool_call = ToolCall {
            id: None,
            function: FunctionCall {
                name: "test".to_string(),
                arguments: "{}".to_string(),
            },
            kind: "function".to_string(),
        };

        let json = serde_json::to_string(&tool_call).unwrap();
        // When id is None, it serializes as null, not omitted
        assert!(json.contains(r#""id":null"#));
    }

    #[test]
    fn test_message_clone() {
        let msg = Message {
            role: Role::User,
            content: "test".to_string(),
            tool_calls: None,
            tool_call_id: None,
        };

        let cloned = msg.clone();
        assert_eq!(msg.role, cloned.role);
        assert_eq!(msg.content, cloned.content);
    }

    #[test]
    fn test_role_equality() {
        assert_eq!(Role::User, Role::User);
        assert_ne!(Role::User, Role::Assistant);
        assert_ne!(Role::System, Role::Tool);
    }

    #[test]
    fn test_tool_function_definition_with_no_parameters() {
        let func_def = ToolFunctionDefinition {
            name: "simple_tool".to_string(),
            description: "A simple tool".to_string(),
            parameters: serde_json::json!({}),
        };

        assert_eq!(func_def.name, "simple_tool");
        assert_eq!(func_def.parameters, serde_json::json!({}));
    }
}
@@ -1,6 +0,0 @@
// Prevents additional console window on Windows in release, DO NOT REMOVE!!
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]

fn main() {
    living_spec_standalone_lib::run()
}
@@ -1,215 +0,0 @@
use std::path::PathBuf;
use std::sync::Mutex;
use tokio::sync::watch;

pub struct SessionState {
    pub project_root: Mutex<Option<PathBuf>>,
    pub cancel_tx: watch::Sender<bool>,
    pub cancel_rx: watch::Receiver<bool>,
}

impl Default for SessionState {
    fn default() -> Self {
        let (cancel_tx, cancel_rx) = watch::channel(false);
        Self {
            project_root: Mutex::new(None),
            cancel_tx,
            cancel_rx,
        }
    }
}

impl SessionState {
    /// Get the project root path from the session state
    /// Returns an error if no project is currently open
    pub fn get_project_root(&self) -> Result<PathBuf, String> {
        let root_guard = self.project_root.lock().map_err(|e| e.to_string())?;
        let root = root_guard
            .as_ref()
            .ok_or_else(|| "No project is currently open.".to_string())?;
        Ok(root.clone())
    }
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

#[cfg(test)]
mod tests {
    use super::*;
    use std::path::PathBuf;

    #[test]
    fn test_session_state_default() {
        let state = SessionState::default();

        // Check that project_root is None
        let root = state.project_root.lock().unwrap();
        assert!(root.is_none());
    }

    #[test]
    fn test_session_state_cancel_channel() {
        let state = SessionState::default();

        // Initial value should be false
        assert!(!(*state.cancel_rx.borrow()));

        // Send a cancel signal
        state.cancel_tx.send(true).unwrap();

        // Receiver should now see true
        assert!(*state.cancel_rx.borrow());
    }

    #[test]
    fn test_session_state_set_project_root() {
        let state = SessionState::default();

        // Set a project root
        let test_path = PathBuf::from("/test/path");
        {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(test_path.clone());
        }

        // Verify it was set
        let root = state.project_root.lock().unwrap();
        assert_eq!(root.as_ref().unwrap(), &test_path);
    }

    #[test]
    fn test_session_state_clear_project_root() {
        let state = SessionState::default();

        // Set a project root
        {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(PathBuf::from("/test/path"));
        }

        // Clear it
        {
            let mut root = state.project_root.lock().unwrap();
            *root = None;
        }

        // Verify it's cleared
        let root = state.project_root.lock().unwrap();
        assert!(root.is_none());
    }

    #[test]
    fn test_session_state_multiple_cancel_signals() {
        let state = SessionState::default();

        // Send multiple signals
        state.cancel_tx.send(true).unwrap();
        assert!(*state.cancel_rx.borrow());

        state.cancel_tx.send(false).unwrap();
        assert!(!(*state.cancel_rx.borrow()));

        state.cancel_tx.send(true).unwrap();
        assert!(*state.cancel_rx.borrow());
    }

    #[test]
    fn test_session_state_cancel_rx_clone() {
        let state = SessionState::default();

        // Clone the receiver
        let rx_clone = state.cancel_rx.clone();

        // Send a signal
        state.cancel_tx.send(true).unwrap();

        // Both receivers should see the new value
        assert!(*state.cancel_rx.borrow());
        assert!(*rx_clone.borrow());
    }

    #[test]
    fn test_session_state_mutex_not_poisoned() {
        let state = SessionState::default();

        // Lock and unlock multiple times
        for i in 0..5 {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(PathBuf::from(format!("/path/{}", i)));
        }

        // Should still be able to lock
        let root = state.project_root.lock().unwrap();
        assert!(root.is_some());
    }

    #[test]
    fn test_session_state_project_root_with_different_paths() {
        let state = SessionState::default();

        let paths = vec![
            PathBuf::from("/absolute/path"),
            PathBuf::from("relative/path"),
            PathBuf::from("./current/dir"),
            PathBuf::from("../parent/dir"),
        ];

        for path in paths {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(path.clone());
            drop(root);

            let root = state.project_root.lock().unwrap();
            assert_eq!(root.as_ref().unwrap(), &path);
        }
    }

    #[test]
    fn test_session_state_cancel_channel_independent_of_root() {
        let state = SessionState::default();

        // Set project root
        {
            let mut root = state.project_root.lock().unwrap();
            *root = Some(PathBuf::from("/test"));
        }

        // Cancel channel should still work independently
        state.cancel_tx.send(true).unwrap();
        assert!(*state.cancel_rx.borrow());

        // Clear project root
        {
            let mut root = state.project_root.lock().unwrap();
            *root = None;
        }

        // Cancel channel should still work
        state.cancel_tx.send(false).unwrap();
        assert!(!(*state.cancel_rx.borrow()));
    }

    #[test]
    fn test_session_state_multiple_instances() {
        let state1 = SessionState::default();
        let state2 = SessionState::default();

        // Set different values
        {
            let mut root1 = state1.project_root.lock().unwrap();
            *root1 = Some(PathBuf::from("/path1"));
        }
        {
            let mut root2 = state2.project_root.lock().unwrap();
            *root2 = Some(PathBuf::from("/path2"));
        }

        // Verify they're independent
        let root1 = state1.project_root.lock().unwrap();
        let root2 = state2.project_root.lock().unwrap();
        assert_eq!(root1.as_ref().unwrap(), &PathBuf::from("/path1"));
        assert_eq!(root2.as_ref().unwrap(), &PathBuf::from("/path2"));
    }
}
@@ -1,107 +0,0 @@
use crate::commands::fs::StoreOps;
use serde_json::json;
use std::collections::HashMap;
use std::sync::Mutex;

/// Mock store for testing - stores data in memory
pub struct MockStore {
    data: Mutex<HashMap<String, serde_json::Value>>,
}

impl MockStore {
    pub fn new() -> Self {
        Self {
            data: Mutex::new(HashMap::new()),
        }
    }

    /// Create a MockStore with initial data
    pub fn with_data(initial: HashMap<String, serde_json::Value>) -> Self {
        Self {
            data: Mutex::new(initial),
        }
    }
}

impl Default for MockStore {
    fn default() -> Self {
        Self::new()
    }
}

impl StoreOps for MockStore {
    fn get(&self, key: &str) -> Option<serde_json::Value> {
        self.data.lock().unwrap().get(key).cloned()
    }

    fn set(&self, key: &str, value: serde_json::Value) {
        self.data.lock().unwrap().insert(key.to_string(), value);
    }

    fn delete(&self, key: &str) {
        self.data.lock().unwrap().remove(key);
    }

    fn save(&self) -> Result<(), String> {
        // Mock implementation - always succeeds
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_mock_store_new() {
        let store = MockStore::new();
        assert!(store.get("key").is_none());
    }

    #[test]
    fn test_mock_store_set_and_get() {
        let store = MockStore::new();
        store.set("key", json!("value"));
        assert_eq!(store.get("key"), Some(json!("value")));
    }

    #[test]
    fn test_mock_store_delete() {
        let store = MockStore::new();
        store.set("key", json!("value"));
        store.delete("key");
        assert!(store.get("key").is_none());
    }

    #[test]
    fn test_mock_store_save() {
        let store = MockStore::new();
        assert!(store.save().is_ok());
    }

    #[test]
    fn test_mock_store_overwrite() {
        let store = MockStore::new();
        store.set("key", json!("old"));
        store.set("key", json!("new"));
        assert_eq!(store.get("key"), Some(json!("new")));
    }

    #[test]
    fn test_mock_store_multiple_keys() {
        let store = MockStore::new();
        store.set("key1", json!("value1"));
        store.set("key2", json!("value2"));
        assert_eq!(store.get("key1"), Some(json!("value1")));
        assert_eq!(store.get("key2"), Some(json!("value2")));
    }

    #[test]
    fn test_mock_store_with_data() {
        let mut initial = HashMap::new();
        initial.insert("existing".to_string(), json!("data"));

        let store = MockStore::with_data(initial);
        assert_eq!(store.get("existing"), Some(json!("data")));
    }
}
@@ -1,35 +0,0 @@
{
  "$schema": "https://schema.tauri.app/config/2",
  "productName": "living-spec-standalone",
  "version": "0.1.0",
  "identifier": "io.crashlabs.living-spec-standalone",
  "build": {
    "beforeDevCommand": "pnpm dev",
    "devUrl": "http://localhost:1420",
    "beforeBuildCommand": "pnpm build",
    "frontendDist": "../dist"
  },
  "app": {
    "windows": [
      {
        "title": "living-spec-standalone",
        "width": 800,
        "height": 600
      }
    ],
    "security": {
      "csp": null
    }
  },
  "bundle": {
    "active": true,
    "targets": "all",
    "icon": [
      "icons/32x32.png",
      "icons/128x128.png",
      "icons/128x128@2x.png",
      "icons/icon.icns",
      "icons/icon.ico"
    ]
  }
}
90
src/App.tsx
@@ -1,90 +0,0 @@
import { invoke } from "@tauri-apps/api/core";
import { open } from "@tauri-apps/plugin-dialog";
import { useEffect, useState } from "react";
import { Chat } from "./components/Chat";
import "./App.css";

function App() {
  const [projectPath, setProjectPath] = useState<string | null>(null);
  const [errorMsg, setErrorMsg] = useState<string | null>(null);

  useEffect(() => {
    invoke<string | null>("get_current_project")
      .then((path) => {
        if (path) setProjectPath(path);
      })
      .catch((e) => console.error(e));
  }, []);

  async function closeProject() {
    try {
      await invoke("close_project");
      setProjectPath(null);
    } catch (e) {
      console.error(e);
    }
  }

  async function selectProject() {
    try {
      setErrorMsg(null);
      // Open native folder picker
      const selected = await open({
        directory: true,
        multiple: false,
      });

      if (selected === null) {
        // User cancelled selection
        return;
      }

      // Invoke backend command to verify and set state
      // Note: invoke argument names must match Rust function args
      const confirmedPath = await invoke<string>("open_project", {
        path: selected,
      });
      setProjectPath(confirmedPath);
    } catch (e) {
      console.error(e);
      setErrorMsg(
        typeof e === "string" ? e : "An error occurred opening the project.",
      );
    }
  }

  return (
    <main
      className="container"
      style={{ height: "100vh", padding: 0, maxWidth: "100%" }}
    >
      {!projectPath ? (
        <div
          className="selection-screen"
          style={{ padding: "2rem", maxWidth: "800px", margin: "0 auto" }}
        >
          <h1>AI Code Assistant</h1>
          <p>
            Please select a project folder to start the Story-Driven Spec
            Workflow.
          </p>
          <button type="button" onClick={selectProject}>
            Open Project Directory
          </button>
        </div>
      ) : (
        <div className="workspace" style={{ height: "100%" }}>
          <Chat projectPath={projectPath} onCloseProject={closeProject} />
        </div>
      )}

      {errorMsg && (
        <div className="error-message" style={{ marginTop: "20px" }}>
          <p style={{ color: "red" }}>Error: {errorMsg}</p>
        </div>
      )}
    </main>
  );
}

export default App;
@@ -1,933 +0,0 @@
import { invoke } from "@tauri-apps/api/core";
import { listen } from "@tauri-apps/api/event";
import { ask } from "@tauri-apps/plugin-dialog";
import { useEffect, useRef, useState } from "react";
import Markdown from "react-markdown";
import { Prism as SyntaxHighlighter } from "react-syntax-highlighter";
import { oneDark } from "react-syntax-highlighter/dist/esm/styles/prism";
import type { Message, ProviderConfig } from "../types";

interface ChatProps {
  projectPath: string;
  onCloseProject: () => void;
}

export function Chat({ projectPath, onCloseProject }: ChatProps) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const [loading, setLoading] = useState(false);
  const [model, setModel] = useState("llama3.1"); // Default local model
  const [enableTools, setEnableTools] = useState(true);
  const [availableModels, setAvailableModels] = useState<string[]>([]);
  const [claudeModels] = useState<string[]>([
    "claude-3-5-sonnet-20241022",
    "claude-3-5-haiku-20241022",
  ]);
  const [streamingContent, setStreamingContent] = useState("");
  const [showApiKeyDialog, setShowApiKeyDialog] = useState(false);
  const [apiKeyInput, setApiKeyInput] = useState("");
  const messagesEndRef = useRef<HTMLDivElement>(null);
  const inputRef = useRef<HTMLInputElement>(null);
  const scrollContainerRef = useRef<HTMLDivElement>(null);
  const shouldAutoScrollRef = useRef(true);
  const lastScrollTopRef = useRef(0);
  const userScrolledUpRef = useRef(false);
  const pendingMessageRef = useRef<string>("");

  // Token estimation and context window tracking
  const estimateTokens = (text: string): number => {
    return Math.ceil(text.length / 4);
  };

  const getContextWindowSize = (modelName: string): number => {
    if (modelName.startsWith("claude-")) return 200000;
    if (modelName.includes("llama3")) return 8192;
    if (modelName.includes("qwen2.5")) return 32768;
    if (modelName.includes("deepseek")) return 16384;
    return 8192; // Default
  };

  const calculateContextUsage = (): {
    used: number;
    total: number;
    percentage: number;
  } => {
    let totalTokens = 0;

    // System prompts (approximate)
    totalTokens += 200;

    // All messages
    for (const msg of messages) {
      totalTokens += estimateTokens(msg.content);
      if (msg.tool_calls) {
        totalTokens += estimateTokens(JSON.stringify(msg.tool_calls));
      }
    }

    // Streaming content
    if (streamingContent) {
      totalTokens += estimateTokens(streamingContent);
    }

    const contextWindow = getContextWindowSize(model);
    const percentage = Math.round((totalTokens / contextWindow) * 100);

    return {
      used: totalTokens,
      total: contextWindow,
      percentage,
    };
  };

  const contextUsage = calculateContextUsage();

  const getContextEmoji = (percentage: number): string => {
    if (percentage >= 90) return "🔴";
    if (percentage >= 75) return "🟡";
    return "🟢";
  };

  useEffect(() => {
    invoke<string[]>("get_ollama_models")
      .then(async (models) => {
        if (models.length > 0) {
          // Sort models alphabetically (case-insensitive)
          const sortedModels = models.sort((a, b) =>
            a.toLowerCase().localeCompare(b.toLowerCase()),
          );
          setAvailableModels(sortedModels);

          // Check backend store for saved model
          try {
            const savedModel = await invoke<string | null>(
              "get_model_preference",
            );
            if (savedModel) {
              setModel(savedModel);
            } else if (models.length > 0) {
              setModel(models[0]);
            }
          } catch (e) {
            console.error(e);
          }
        }
      })
      .catch((err) => console.error(err));
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  useEffect(() => {
    const unlistenUpdatePromise = listen<Message[]>("chat:update", (event) => {
      setMessages(event.payload);
      setStreamingContent(""); // Clear streaming content when final update arrives
    });

    const unlistenTokenPromise = listen<string>("chat:token", (event) => {
      setStreamingContent((prev) => prev + event.payload);
    });

    return () => {
      unlistenUpdatePromise.then((unlisten) => unlisten());
      unlistenTokenPromise.then((unlisten) => unlisten());
    };
  }, []);

  const scrollToBottom = () => {
    const element = scrollContainerRef.current;
    if (element) {
      element.scrollTop = element.scrollHeight;
      lastScrollTopRef.current = element.scrollHeight;
    }
  };

  const handleScroll = () => {
    const element = scrollContainerRef.current;
    if (!element) return;

    const currentScrollTop = element.scrollTop;
    const isAtBottom =
      element.scrollHeight - element.scrollTop - element.clientHeight < 5;

    // Detect if user scrolled UP
    if (currentScrollTop < lastScrollTopRef.current) {
      userScrolledUpRef.current = true;
      shouldAutoScrollRef.current = false;
    }

    // If user scrolled back to bottom, re-enable auto-scroll
    if (isAtBottom) {
      userScrolledUpRef.current = false;
      shouldAutoScrollRef.current = true;
    }

    lastScrollTopRef.current = currentScrollTop;
  };

  // Smart auto-scroll: only scroll if user hasn't scrolled up
  // biome-ignore lint/correctness/useExhaustiveDependencies: We intentionally trigger on messages/streamingContent changes
  useEffect(() => {
    if (shouldAutoScrollRef.current && !userScrolledUpRef.current) {
      scrollToBottom();
    }
  }, [messages, streamingContent]);

  useEffect(() => {
    inputRef.current?.focus();
  }, []);

  const cancelGeneration = async () => {
    try {
      await invoke("cancel_chat");

      // Preserve any partial streaming content as a message
      if (streamingContent) {
        setMessages((prev) => [
          ...prev,
          { role: "assistant", content: streamingContent },
        ]);
        setStreamingContent("");
      }

      setLoading(false);
    } catch (e) {
      console.error("Failed to cancel chat:", e);
    }
  };

  const sendMessage = async (messageOverride?: string) => {
    const messageToSend = messageOverride ?? input;
    if (!messageToSend.trim() || loading) return;

    // Check if using Claude and API key is required
    if (model.startsWith("claude-")) {
      const hasKey = await invoke<boolean>("get_anthropic_api_key_exists");
      if (!hasKey) {
        // Store the pending message before showing the dialog
        pendingMessageRef.current = messageToSend;
        setShowApiKeyDialog(true);
        return;
      }
    }

    const userMsg: Message = { role: "user", content: messageToSend };
    const newHistory = [...messages, userMsg];

    setMessages(newHistory);
    // Clear input field (works for both direct input and override scenarios)
    if (!messageOverride || messageOverride === input) {
      setInput("");
    }
    setLoading(true);
    setStreamingContent(""); // Clear any previous streaming content

    try {
      const config: ProviderConfig = {
        provider: model.startsWith("claude-") ? "anthropic" : "ollama",
        model: model,
        base_url: "http://localhost:11434",
        enable_tools: enableTools,
      };

      // Invoke backend chat command
      // We rely on 'chat:update' events to update the state in real-time
      await invoke("chat", {
        messages: newHistory,
        config: config,
      });
    } catch (e) {
      console.error("Chat error:", e);
      // Don't show error message if user cancelled
      const errorMessage = String(e);
      if (!errorMessage.includes("Chat cancelled by user")) {
        setMessages((prev) => [
          ...prev,
          { role: "assistant", content: `**Error:** ${e}` },
        ]);
      }
    } finally {
      setLoading(false);
    }
  };

  const handleSaveApiKey = async () => {
    if (!apiKeyInput.trim()) return;

    try {
      await invoke("set_anthropic_api_key", { apiKey: apiKeyInput });
      setShowApiKeyDialog(false);
      setApiKeyInput("");

      // Restore the pending message and retry
      const pendingMessage = pendingMessageRef.current;
      pendingMessageRef.current = "";

      if (pendingMessage.trim()) {
        // Pass the message directly to avoid state timing issues
        sendMessage(pendingMessage);
      }
    } catch (e) {
      console.error("Failed to save API key:", e);
      alert(`Failed to save API key: ${e}`);
    }
  };

  const clearSession = async () => {
    const confirmed = await ask(
      "Are you sure? This will clear all messages and reset the conversation context.",
      {
        title: "New Session",
        kind: "warning",
      },
    );

    if (confirmed) {
      // Cancel any in-flight backend requests first
      try {
        await invoke("cancel_chat");
      } catch (e) {
        console.error("Failed to cancel chat:", e);
      }

      // Then clear frontend state
      setMessages([]);
      setStreamingContent("");
      setLoading(false);
    }
  };

  return (
    <div
      className="chat-container"
      style={{
        display: "flex",
        flexDirection: "column",
        height: "100%",
        backgroundColor: "#171717",
        color: "#ececec",
      }}
    >
      {/* Sticky Header */}
      <div
        style={{
          padding: "12px 24px",
          borderBottom: "1px solid #333",
          display: "flex",
          alignItems: "center",
          justifyContent: "space-between",
          background: "#171717",
          flexShrink: 0,
          fontSize: "0.9rem",
          color: "#ececec",
        }}
      >
        {/* Project Info */}
        <div
          style={{
            display: "flex",
            alignItems: "center",
            gap: "12px",
            overflow: "hidden",
            flex: 1,
            marginRight: "20px",
          }}
        >
          <div
            title={projectPath}
            style={{
              whiteSpace: "nowrap",
              overflow: "hidden",
              textOverflow: "ellipsis",
              fontWeight: "500",
              color: "#aaa",
              direction: "rtl",
              textAlign: "left",
              fontFamily: "monospace",
              fontSize: "0.85em",
            }}
          >
            {projectPath}
          </div>
          <button
            type="button"
            onClick={onCloseProject}
            style={{
              background: "transparent",
              border: "none",
              cursor: "pointer",
              color: "#999",
              fontSize: "0.8em",
              padding: "4px 8px",
              borderRadius: "4px",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.background = "#333";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.background = "transparent";
            }}
            onFocus={(e) => {
              e.currentTarget.style.background = "#333";
            }}
            onBlur={(e) => {
              e.currentTarget.style.background = "transparent";
            }}
          >
            ✕
          </button>
        </div>

        {/* Model Controls */}
        <div style={{ display: "flex", alignItems: "center", gap: "16px" }}>
          {/* Context Usage Indicator */}
          <div
            style={{
              fontSize: "0.9em",
              color: "#ccc",
              whiteSpace: "nowrap",
            }}
            title={`Context: ${contextUsage.used.toLocaleString()} / ${contextUsage.total.toLocaleString()} tokens (${contextUsage.percentage}%)`}
          >
            {getContextEmoji(contextUsage.percentage)} {contextUsage.percentage}
            %
          </div>

          <button
            type="button"
            onClick={clearSession}
            style={{
              padding: "6px 12px",
              borderRadius: "99px",
              border: "none",
              fontSize: "0.85em",
              backgroundColor: "#2f2f2f",
              color: "#888",
              cursor: "pointer",
              outline: "none",
              transition: "all 0.2s",
            }}
            onMouseOver={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onMouseOut={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
            onFocus={(e) => {
              e.currentTarget.style.backgroundColor = "#3f3f3f";
              e.currentTarget.style.color = "#ccc";
            }}
            onBlur={(e) => {
              e.currentTarget.style.backgroundColor = "#2f2f2f";
              e.currentTarget.style.color = "#888";
            }}
          >
            🔄 New Session
          </button>
          {availableModels.length > 0 || claudeModels.length > 0 ? (
            <select
              value={model}
              onChange={(e) => {
                const newModel = e.target.value;
                setModel(newModel);
                invoke("set_model_preference", { model: newModel }).catch(
                  console.error,
                );
              }}
              style={{
                padding: "6px 32px 6px 16px",
                borderRadius: "99px",
                border: "none",
                fontSize: "0.9em",
                backgroundColor: "#2f2f2f",
                color: "#ececec",
                cursor: "pointer",
                outline: "none",
                appearance: "none",
                WebkitAppearance: "none",
                backgroundImage: `url("data:image/svg+xml;charset=US-ASCII,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%22292.4%22%20height%3D%22292.4%22%3E%3Cpath%20fill%3D%22%23ececec%22%20d%3D%22M287%2069.4a17.6%2017.6%200%200%200-13-5.4H18.4c-5%200-9.3%201.8-12.9%205.4A17.6%2017.6%200%200%200%200%2082.2c0%205%201.8%209.3%205.4%2012.9l128%20127.9c3.6%203.6%207.8%205.4%2012.8%205.4s9.2-1.8%2012.8-5.4L287%2095c3.5-3.5%205.4-7.8%205.4-12.8%200-5-1.9-9.2-5.5-12.8z%22%2F%3E%3C%2Fsvg%3E")`,
                backgroundRepeat: "no-repeat",
                backgroundPosition: "right 12px center",
                backgroundSize: "10px",
              }}
            >
              {claudeModels.length > 0 && (
                <optgroup label="Anthropic">
                  {claudeModels.map((m) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </optgroup>
              )}
              {availableModels.length > 0 && (
                <optgroup label="Ollama">
                  {availableModels.map((m) => (
                    <option key={m} value={m}>
                      {m}
                    </option>
                  ))}
                </optgroup>
              )}
            </select>
          ) : (
            <input
              value={model}
              onChange={(e) => {
                const newModel = e.target.value;
                setModel(newModel);
                invoke("set_model_preference", { model: newModel }).catch(
                  console.error,
                );
              }}
              placeholder="Model"
              style={{
                padding: "6px 12px",
                borderRadius: "99px",
                border: "none",
                fontSize: "0.9em",
                background: "#2f2f2f",
                color: "#ececec",
                outline: "none",
              }}
            />
          )}
          <label
            style={{
              display: "flex",
              alignItems: "center",
              gap: "6px",
              cursor: "pointer",
              fontSize: "0.9em",
              color: "#aaa",
            }}
            title="Allow the Agent to read/write files"
          >
            <input
              type="checkbox"
              checked={enableTools}
              onChange={(e) => setEnableTools(e.target.checked)}
              style={{ accentColor: "#000" }}
            />
            <span>Tools</span>
          </label>
        </div>
      </div>

      {/* Messages Area */}
      <div
        ref={scrollContainerRef}
        onScroll={handleScroll}
        style={{
          flex: 1,
          overflowY: "auto",
          padding: "20px 0",
          display: "flex",
          flexDirection: "column",
          gap: "24px",
        }}
      >
        <div
          style={{
            maxWidth: "768px",
            margin: "0 auto",
            width: "100%",
            padding: "0 24px",
            display: "flex",
            flexDirection: "column",
            gap: "24px",
          }}
        >
          {messages.map((msg, idx) => (
            <div
              key={`msg-${idx}-${msg.role}-${msg.content.substring(0, 20)}`}
              style={{
                display: "flex",
                flexDirection: "column",
                alignItems: msg.role === "user" ? "flex-end" : "flex-start",
              }}
            >
              <div
                style={{
                  maxWidth: "100%",
                  padding: msg.role === "user" ? "10px 16px" : "0",
                  borderRadius: msg.role === "user" ? "20px" : "0",
                  background:
                    msg.role === "user"
                      ? "#2f2f2f"
                      : msg.role === "tool"
                        ? "#222"
                        : "transparent",
                  color: "#ececec",
                  border: msg.role === "tool" ? "1px solid #333" : "none",
                  fontFamily: msg.role === "tool" ? "monospace" : "inherit",
                  fontSize: msg.role === "tool" ? "0.85em" : "1em",
                  fontWeight: "500",
                  whiteSpace: msg.role === "tool" ? "pre-wrap" : "normal",
                  lineHeight: "1.6",
                }}
              >
                {msg.role === "user" ? (
                  msg.content
                ) : msg.role === "tool" ? (
                  <details style={{ cursor: "pointer" }}>
                    <summary
                      style={{
                        color: "#aaa",
                        fontSize: "0.9em",
                        marginBottom: "8px",
                        listStyle: "none",
                        display: "flex",
                        alignItems: "center",
                        gap: "6px",
                      }}
                    >
                      <span style={{ fontSize: "0.8em" }}>▶</span>
                      <span>
                        Tool Output
                        {msg.tool_call_id && ` (${msg.tool_call_id})`}
                      </span>
                    </summary>
                    <pre
                      style={{
                        maxHeight: "300px",
                        overflow: "auto",
                        margin: 0,
                        padding: "8px",
                        background: "#1a1a1a",
                        borderRadius: "4px",
                        fontSize: "0.85em",
                        whiteSpace: "pre-wrap",
                        wordBreak: "break-word",
                      }}
                    >
                      {msg.content}
                    </pre>
                  </details>
                ) : (
                  <div className="markdown-body">
                    <Markdown
                      components={{
                        // react-markdown types are incompatible with strict typing
                        // eslint-disable-next-line @typescript-eslint/no-explicit-any
                        // biome-ignore lint/suspicious/noExplicitAny: react-markdown requires any for component props
                        code: ({ className, children, ...props }: any) => {
                          const match = /language-(\w+)/.exec(className || "");
                          const isInline = !className;
                          return !isInline && match ? (
                            <SyntaxHighlighter
                              // biome-ignore lint/suspicious/noExplicitAny: oneDark style types are incompatible
                              style={oneDark as any}
                              language={match[1]}
                              PreTag="div"
                            >
                              {String(children).replace(/\n$/, "")}
                            </SyntaxHighlighter>
                          ) : (
                            <code className={className} {...props}>
                              {children}
                            </code>
                          );
                        },
                      }}
                    >
                      {msg.content}
                    </Markdown>
                  </div>
                )}

                {/* Show Tool Calls if present */}
                {msg.tool_calls && (
                  <div
                    style={{
                      marginTop: "12px",
                      fontSize: "0.85em",
                      color: "#aaa",
                      display: "flex",
                      flexDirection: "column",
                      gap: "8px",
                    }}
                  >
                    {msg.tool_calls.map((tc, i) => {
                      // Parse arguments to extract key info
                      let argsSummary = "";
                      try {
                        const args = JSON.parse(tc.function.arguments);
                        const firstKey = Object.keys(args)[0];
                        if (firstKey && args[firstKey]) {
                          argsSummary = String(args[firstKey]);
                          // Truncate if too long
                          if (argsSummary.length > 50) {
                            argsSummary = `${argsSummary.substring(0, 47)}...`;
                          }
                        }
                      } catch (_e) {
                        // If parsing fails, just show empty
                      }

                      return (
                        <div
                          key={`tool-${i}-${tc.function.name}`}
                          style={{
                            display: "flex",
                            alignItems: "center",
                            gap: "8px",
                            fontFamily: "monospace",
                          }}
                        >
                          <span style={{ color: "#888" }}>▶</span>
                          <span
                            style={{
                              background: "#333",
                              padding: "2px 6px",
                              borderRadius: "4px",
                            }}
                          >
                            {tc.function.name}
                            {argsSummary && `(${argsSummary})`}
                          </span>
                        </div>
                      );
                    })}
                  </div>
                )}
              </div>
            </div>
          ))}
          {loading && streamingContent && (
            <div
              style={{
                display: "flex",
                flexDirection: "column",
                alignItems: "flex-start",
              }}
            >
              <div
                style={{
                  maxWidth: "85%",
                  padding: "16px 20px",
                  borderRadius: "12px",
                  background: "#262626",
                  color: "#fff",
                  border: "1px solid #404040",
                  fontFamily: "system-ui, -apple-system, sans-serif",
                  fontSize: "0.95rem",
                  fontWeight: 400,
                  whiteSpace: "pre-wrap",
                  lineHeight: 1.6,
                }}
              >
                <Markdown
                  components={{
                    // react-markdown types are incompatible with strict typing
                    // eslint-disable-next-line @typescript-eslint/no-explicit-any
                    // biome-ignore lint/suspicious/noExplicitAny: react-markdown requires any for component props
                    code: ({ className, children, ...props }: any) => {
                      const match = /language-(\w+)/.exec(className || "");
                      const isInline = !className;
                      return !isInline && match ? (
                        <SyntaxHighlighter
                          // biome-ignore lint/suspicious/noExplicitAny: oneDark style types are incompatible
                          style={oneDark as any}
                          language={match[1]}
                          PreTag="div"
                        >
                          {String(children).replace(/\n$/, "")}
                        </SyntaxHighlighter>
                      ) : (
                        <code className={className} {...props}>
                          {children}
                        </code>
                      );
                    },
                  }}
                >
                  {streamingContent}
                </Markdown>
              </div>
            </div>
          )}
          {loading && !streamingContent && (
            <div
              style={{
                alignSelf: "flex-start",
                color: "#888",
                fontSize: "0.9em",
                marginTop: "10px",
              }}
            >
              <span className="pulse">Thinking...</span>
            </div>
          )}
          <div ref={messagesEndRef} />
        </div>
      </div>

      {/* Input Area */}
      <div
        style={{
          padding: "24px",
          background: "#171717",
          display: "flex",
          justifyContent: "center",
        }}
      >
        <div
          style={{
            maxWidth: "768px",
            width: "100%",
            display: "flex",
            gap: "8px",
            alignItems: "center",
          }}
        >
          <input
            ref={inputRef}
            value={input}
            onChange={(e) => setInput(e.target.value)}
            onKeyDown={(e) => {
              if (e.key === "Enter") {
                sendMessage();
              }
            }}
            placeholder="Send a message..."
            style={{
              flex: 1,
              padding: "14px 20px",
              borderRadius: "24px",
              border: "1px solid #333",
              outline: "none",
              fontSize: "1rem",
              fontWeight: "500",
              background: "#2f2f2f",
              color: "#ececec",
              boxShadow: "0 2px 6px rgba(0,0,0,0.02)",
            }}
          />
          <button
            type="button"
            onClick={loading ? cancelGeneration : () => sendMessage()}
            disabled={!loading && !input.trim()}
            style={{
              background: "#ececec",
              color: "black",
              border: "none",
              borderRadius: "50%",
              width: "32px",
              height: "32px",
              display: "flex",
              alignItems: "center",
              justifyContent: "center",
              cursor: "pointer",
              opacity: !loading && !input.trim() ? 0.5 : 1,
              flexShrink: 0,
            }}
          >
            {loading ? "■" : "↑"}
          </button>
        </div>
      </div>

      {/* API Key Dialog */}
      {showApiKeyDialog && (
        <div
          style={{
            position: "fixed",
            top: 0,
            left: 0,
            right: 0,
            bottom: 0,
            backgroundColor: "rgba(0, 0, 0, 0.7)",
            display: "flex",
            alignItems: "center",
            justifyContent: "center",
            zIndex: 1000,
          }}
        >
          <div
            style={{
              backgroundColor: "#2f2f2f",
              padding: "32px",
              borderRadius: "12px",
              maxWidth: "500px",
              width: "90%",
              border: "1px solid #444",
            }}
          >
            <h2 style={{ marginTop: 0, color: "#ececec" }}>
              Enter Anthropic API Key
            </h2>
            <p
              style={{ color: "#aaa", fontSize: "0.9em", marginBottom: "20px" }}
            >
              To use Claude models, please enter your Anthropic API key. Your
              key will be stored securely in your system keychain.
            </p>
            <input
              type="password"
              value={apiKeyInput}
              onChange={(e) => setApiKeyInput(e.target.value)}
              onKeyDown={(e) => e.key === "Enter" && handleSaveApiKey()}
              placeholder="sk-ant-..."
              style={{
                width: "100%",
                padding: "12px",
                borderRadius: "8px",
                border: "1px solid #555",
                backgroundColor: "#1a1a1a",
                color: "#ececec",
                fontSize: "1em",
                marginBottom: "20px",
                outline: "none",
              }}
            />
            <div
              style={{
                display: "flex",
                gap: "12px",
                justifyContent: "flex-end",
              }}
            >
              <button
                type="button"
                onClick={() => {
                  setShowApiKeyDialog(false);
                  setApiKeyInput("");
                  pendingMessageRef.current = ""; // Clear pending message on cancel
                }}
                style={{
                  padding: "10px 20px",
                  borderRadius: "8px",
                  border: "1px solid #555",
                  backgroundColor: "transparent",
                  color: "#aaa",
                  cursor: "pointer",
                  fontSize: "0.9em",
                }}
              >
                Cancel
              </button>
              <button
                type="button"
                onClick={handleSaveApiKey}
                disabled={!apiKeyInput.trim()}
                style={{
                  padding: "10px 20px",
                  borderRadius: "8px",
                  border: "none",
                  backgroundColor: apiKeyInput.trim() ? "#ececec" : "#555",
                  color: apiKeyInput.trim() ? "#000" : "#888",
                  cursor: apiKeyInput.trim() ? "pointer" : "not-allowed",
                  fontSize: "0.9em",
                }}
              >
                Save Key
              </button>
            </div>
          </div>
        </div>
      )}
    </div>
  );
}
1
src/vite-env.d.ts
vendored
@@ -1 +0,0 @@
/// <reference types="vite/client" />
4
store.json
Normal file
@@ -0,0 +1,4 @@
{
  "last_project_path": "/Users/dave/workspace/projects/crashlabs/labs/materialist.tech/so-101",
  "selected_model": "claude-3-5-sonnet-20241022"
}
@@ -1,32 +0,0 @@
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

// @ts-expect-error process is a nodejs global
const host = process.env.TAURI_DEV_HOST;

// https://vite.dev/config/
export default defineConfig(async () => ({
  plugins: [react()],

  // Vite options tailored for Tauri development and only applied in `tauri dev` or `tauri build`
  //
  // 1. prevent Vite from obscuring rust errors
  clearScreen: false,
  // 2. tauri expects a fixed port, fail if that port is not available
  server: {
    port: 1420,
    strictPort: true,
    host: host || false,
    hmr: host
      ? {
          protocol: "ws",
          host,
          port: 1421,
        }
      : undefined,
    watch: {
      // 3. tell Vite to ignore watching `src-tauri`
      ignored: ["**/src-tauri/**"],
    },
  },
}));