storkit/.living_spec/specs/tech/STACK.md
Dave e71dcd8226 Story 12: Update story and specs for Claude integration (2025-12-27 19:37:01 +00:00)

Tech Stack & Constraints

Overview

This project is a desktop application built with Tauri. It functions as an Agentic Code Assistant capable of safely executing tools on the host system.

Core Stack

  • Backend: Rust (Tauri Core)
    • MSRV: latest stable Rust (not pinned to a specific version)
    • Framework: Tauri v2
  • Frontend: TypeScript + React
    • Build Tool: Vite
    • Styling: CSS Modules or Tailwind (TBD; defaulting to CSS Modules)
    • State Management: React Context / Hooks
    • Chat UI: Rendered Markdown with syntax highlighting.

Agent Architecture

The application follows a Tool-Use (Function Calling) architecture:

  1. Frontend: Collects user input and sends it to the LLM.
  2. LLM: Decides to generate text OR request a Tool Call (e.g., execute_shell, read_file).
  3. Tauri Backend (The "Hand"):
    • Intercepts Tool Calls.
    • Validates the request against the Safety Policy.
    • Executes the native code (File I/O, Shell Process, Search).
    • Returns the output (stdout/stderr/file content) to the LLM.
    • Event Loop: The backend emits real-time events (chat:update) to the frontend to ensure UI responsiveness during long-running Agent tasks.
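The loop above can be sketched as follows. This is a minimal illustration, not the project's real API: the LLM client is stubbed, and names like `LlmReply`, `run_tool`, and `agent_turn` are invented for the example.

```rust
// Minimal sketch of the tool-use loop. A real provider call would hit
// Ollama or the Anthropic API and stream events back to the frontend.
#[derive(Debug, Clone)]
enum LlmReply {
    Text(String),
    ToolCall { name: String, arg: String },
}

// Stand-in for the LLM: asks for one tool call, then finishes.
fn fake_llm(history: &[String]) -> LlmReply {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmReply::Text("done".into())
    } else {
        LlmReply::ToolCall { name: "read_file".into(), arg: "README.md".into() }
    }
}

// The backend "hand": validate the request, execute, return the output.
fn run_tool(name: &str, arg: &str) -> String {
    match name {
        "read_file" => format!("contents of {arg}"), // stubbed File I/O
        other => format!("error: tool {other} not allowed"),
    }
}

fn agent_turn(user_input: &str) -> String {
    let mut history = vec![format!("user: {user_input}")];
    loop {
        match fake_llm(&history) {
            LlmReply::Text(t) => return t,
            LlmReply::ToolCall { name, arg } => {
                // Tool output is fed back into the conversation and the
                // LLM is asked again -- step 3 above.
                let out = run_tool(&name, &arg);
                history.push(format!("tool:{name} -> {out}"));
            }
        }
    }
}

fn main() {
    assert_eq!(agent_turn("summarize the readme"), "done");
}
```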

LLM Provider Abstraction

To support both Remote and Local models, the system implements a ModelProvider abstraction layer.

  • Strategy:
    • Abstract the differences between API formats (OpenAI-compatible vs Anthropic vs Gemini).
    • Normalize "Tool Use" definitions, as each provider handles function calling schemas differently.
  • Supported Providers:
    • Ollama: Local inference (e.g., Llama 3, DeepSeek Coder) for privacy and offline usage.
    • Anthropic: Claude 3.5 models (Sonnet, Haiku) via API for coding tasks (Story 12).
  • Provider Selection:
    • Automatic detection based on model name prefix:
      • claude- → Anthropic API
      • Otherwise → Ollama
    • Single unified model dropdown with section headers ("Anthropic", "Ollama")
  • API Key Management:
    • Anthropic API key stored in OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service)
    • Uses keyring crate for cross-platform secure storage
    • On first use of Claude model, user prompted to enter API key
    • Key persists across sessions (no re-entry needed)
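Provider detection and the first-use key flow can be sketched together. The prefix check mirrors the rule above; `SecretStore` is an illustrative stand-in for the keyring crate (the real app backs this with the OS keychain, the in-memory map exists only so the example is self-contained).

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Provider { Anthropic, Ollama }

// claude-* -> Anthropic API; everything else -> local Ollama.
fn detect_provider(model: &str) -> Provider {
    if model.starts_with("claude-") { Provider::Anthropic } else { Provider::Ollama }
}

// Stand-in for keychain-backed storage via the keyring crate.
trait SecretStore {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

struct MemoryStore(HashMap<String, String>);

impl SecretStore for MemoryStore {
    fn get(&self, key: &str) -> Option<String> { self.0.get(key).cloned() }
    fn set(&mut self, key: &str, value: String) { self.0.insert(key.into(), value); }
}

// Returns the stored key, or prompts once and persists the result,
// so the user is only asked on first use of a Claude model.
fn anthropic_key(store: &mut impl SecretStore, prompt: impl Fn() -> String) -> String {
    if let Some(k) = store.get("anthropic_api_key") { return k; }
    let k = prompt();
    store.set("anthropic_api_key", k.clone());
    k
}

fn main() {
    assert_eq!(detect_provider("claude-3-5-sonnet-latest"), Provider::Anthropic);
    assert_eq!(detect_provider("llama3"), Provider::Ollama);
    let mut store = MemoryStore(HashMap::new());
    let k1 = anthropic_key(&mut store, || "sk-test".into());
    let k2 = anthropic_key(&mut store, || panic!("should not prompt again"));
    assert_eq!(k1, k2);
}
```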

Tooling Capabilities

1. Filesystem (Native)

  • Scope: Strictly limited to the user-selected project_root.
  • Operations: Read, Write, List, Delete.
  • Constraint: Modifications to .git/ are strictly forbidden via file APIs (use Git tools instead).
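The scope constraint can be sketched as a path-resolution gate. This is a lexical check only, and the function name is illustrative; real enforcement should also canonicalize paths to defeat symlink escapes.

```rust
use std::path::{Component, Path, PathBuf};

// Resolve a user-supplied relative path inside project_root, rejecting
// anything that could escape it or touch .git/.
fn resolve_in_root(root: &Path, relative: &str) -> Result<PathBuf, String> {
    let mut out = root.to_path_buf();
    for comp in Path::new(relative).components() {
        match comp {
            Component::Normal(c) => {
                // Modifications to .git/ are forbidden via the file APIs.
                if c == ".git" {
                    return Err("access to .git/ is forbidden".into());
                }
                out.push(c);
            }
            Component::CurDir => {}
            // `..`, absolute prefixes, etc. could escape project_root.
            _ => return Err("path escapes project_root".into()),
        }
    }
    Ok(out)
}

fn main() {
    let root = Path::new("/proj");
    assert!(resolve_in_root(root, "src/main.rs").is_ok());
    assert!(resolve_in_root(root, "../etc/passwd").is_err());
    assert!(resolve_in_root(root, ".git/config").is_err());
}
```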

2. Shell Execution

  • Library: tokio::process for async execution.
  • Constraint: We do not run an interactive shell (REPL); the agent runs discrete, stateless commands.
  • Allowlist: The agent may only execute specific binaries:
    • git
    • cargo, rustc, rustfmt, clippy
    • npm, node, yarn, pnpm, bun
    • ls, find, grep (if not using internal search)
    • mkdir, rm, touch, mv, cp
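The allowlist gate can be sketched as below. The real backend uses `tokio::process` for async execution; `std::process` is used here only so the example stands alone, and `run_allowed` is an illustrative name.

```rust
use std::collections::HashSet;
use std::process::Command;

fn allowed_binaries() -> HashSet<&'static str> {
    ["git", "cargo", "rustc", "rustfmt", "clippy",
     "npm", "node", "yarn", "pnpm", "bun",
     "ls", "find", "grep", "mkdir", "rm", "touch", "mv", "cp"]
        .into_iter().collect()
}

// Discrete, stateless invocation: no shell wrapper, no pipes,
// argv passed directly so the allowlist cannot be bypassed.
fn run_allowed(bin: &str, args: &[&str]) -> Result<String, String> {
    if !allowed_binaries().contains(bin) {
        return Err(format!("binary not allowlisted: {bin}"));
    }
    let out = Command::new(bin).args(args).output()
        .map_err(|e| e.to_string())?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() {
    assert!(run_allowed("bash", &["-c", "true"]).is_err());
    // `ls` is on the allowlist; its output depends on the working directory.
    let _ = run_allowed("ls", &["."]);
}
```

Passing argv directly (rather than through `sh -c`) also means shell metacharacters in arguments are inert.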

3. Search & Navigation

  • Library: ignore (by BurntSushi) + grep logic.
  • Behavior:
    • Must respect .gitignore files automatically.
    • Must be performant (parallel traversal).
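A greatly simplified sketch of an ignore-aware walk follows. The `ignore` crate implements full .gitignore semantics (globs, negation, nested files) plus a parallel walker; here a `.gitignore` is treated as literal names only, so this is an illustration of the behavior, not a substitute.

```rust
use std::collections::HashSet;
use std::fs;
use std::path::Path;

// Recursive walk that skips .git and any entry named in a .gitignore
// (literal names only -- no glob support in this sketch).
fn walk(dir: &Path, ignored: &HashSet<String>, hits: &mut Vec<String>) {
    let Ok(entries) = fs::read_dir(dir) else { return };
    let mut ignored = ignored.clone();
    // Merge this directory's .gitignore into the inherited set.
    if let Ok(body) = fs::read_to_string(dir.join(".gitignore")) {
        ignored.extend(body.lines().map(|l| l.trim().to_string()));
    }
    for entry in entries.flatten() {
        let name = entry.file_name().to_string_lossy().into_owned();
        if name == ".git" || ignored.contains(&name) { continue; }
        let path = entry.path();
        if path.is_dir() { walk(&path, &ignored, hits); }
        else { hits.push(name); }
    }
}

fn main() {
    let root = std::env::temp_dir().join("grwalk_demo");
    fs::create_dir_all(root.join("target")).unwrap();
    fs::write(root.join(".gitignore"), "target\n").unwrap();
    fs::write(root.join("a.rs"), "fn main() {}").unwrap();
    fs::write(root.join("target/junk.txt"), "x").unwrap();
    let mut hits = Vec::new();
    walk(&root, &HashSet::new(), &mut hits);
    assert!(hits.contains(&"a.rs".to_string()));
    assert!(!hits.contains(&"junk.txt".to_string()));
}
```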

Coding Standards

Rust

  • Style: rustfmt standard.
  • Linter: clippy - Must pass with 0 warnings before merging.
  • Error Handling: Custom AppError type deriving thiserror. All Commands return Result<T, AppError>.
  • Concurrency: Heavy tools (Search, Shell) must run on tokio threads to avoid blocking the UI.
  • Quality Gates:
    • cargo clippy --all-targets --all-features must show 0 errors, 0 warnings
    • cargo check must succeed
    • cargo test must pass all tests
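The `AppError` pattern can be sketched as below. In the real codebase this boilerplate is derived with `thiserror`; it is written out by hand here so the example compiles without external crates, and the variant names are illustrative, not the project's actual ones.

```rust
use std::fmt;

#[derive(Debug)]
enum AppError {
    Io(std::io::Error),
    OutsideProjectRoot(String),
    ToolNotAllowed(String),
}

// thiserror's #[error("...")] attributes derive this Display impl.
impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::Io(e) => write!(f, "io error: {e}"),
            AppError::OutsideProjectRoot(p) => write!(f, "path escapes project_root: {p}"),
            AppError::ToolNotAllowed(t) => write!(f, "tool not allowlisted: {t}"),
        }
    }
}

impl std::error::Error for AppError {}

// thiserror's #[from] derives this, so `?` converts io::Error automatically.
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self { AppError::Io(e) }
}

// Every Tauri command returns Result<T, AppError>.
fn read_project_file(path: &str) -> Result<String, AppError> {
    if path.starts_with("..") {
        return Err(AppError::OutsideProjectRoot(path.into()));
    }
    Ok(std::fs::read_to_string(path)?)
}

fn main() {
    let err = read_project_file("../secret").unwrap_err();
    assert_eq!(err.to_string(), "path escapes project_root: ../secret");
}
```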

TypeScript / React

  • Style: Biome formatter (replaces Prettier/ESLint).
  • Linter: Biome - Must pass with 0 errors, 0 warnings before merging.
  • Types: Shared types with Rust (via tauri-specta or manual interface matching) are preferred to ensure type safety across the bridge.
  • Quality Gates:
    • npx @biomejs/biome check src/ must show 0 errors, 0 warnings
    • npm run build must succeed
    • No any types allowed (use proper types or unknown)
    • React keys must use stable IDs, not array indices
    • All buttons must have explicit type attribute

Libraries (Approved)

  • Rust:
    • serde, serde_json: Serialization.
    • ignore: Fast recursive directory iteration respecting gitignore.
    • walkdir: Simple directory traversal.
    • tokio: Async runtime.
    • reqwest: For LLM API calls (Anthropic, Ollama).
    • eventsource-stream: For Server-Sent Events (Anthropic streaming).
    • keyring: Secure API key storage in OS keychain.
    • uuid: For unique message IDs.
    • chrono: For timestamps.
    • tauri-plugin-dialog: Native system dialogs.
    • tauri-plugin-store: Persistent key-value storage.
  • JavaScript:
    • @tauri-apps/api: Tauri Bridge.
    • @tauri-apps/plugin-dialog: Dialog API.
    • @tauri-apps/plugin-store: Store API.
    • react-markdown: For rendering chat responses.

Safety & Sandbox

  1. Project Scope: The application must strictly enforce that it does not read/write outside the project_root selected by the user.
  2. Human in the Loop:
    • Shell commands that modify state (non-readonly) should ideally require a UI confirmation (configurable).
    • File writes must be confirmed or revertible.
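The human-in-the-loop gate can be sketched as a read-only vs. state-modifying classification. The rules below are illustrative, not an exhaustive policy; in particular, which `git` subcommands count as read-only is an assumption for the example.

```rust
#[derive(Debug, PartialEq)]
enum Gate { RunDirectly, NeedsConfirmation }

// Read-only commands run directly; anything that may modify state
// requires a UI confirmation (configurable in the real app).
fn gate_for(bin: &str, args: &[&str]) -> Gate {
    match bin {
        // Pure readers never change state.
        "ls" | "find" | "grep" => Gate::RunDirectly,
        // git is read-only for some subcommands only.
        "git" => match args.first().copied() {
            Some("status") | Some("log") | Some("diff") | Some("show") => Gate::RunDirectly,
            _ => Gate::NeedsConfirmation,
        },
        // Everything else is assumed to modify state.
        _ => Gate::NeedsConfirmation,
    }
}

fn main() {
    assert_eq!(gate_for("git", &["status"]), Gate::RunDirectly);
    assert_eq!(gate_for("git", &["push"]), Gate::NeedsConfirmation);
    assert_eq!(gate_for("rm", &["-rf", "x"]), Gate::NeedsConfirmation);
}
```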