diff --git a/.storkit/work/5_done/345_story_gemini_agent_backend_via_google_ai_api.md b/.storkit/work/5_done/345_story_gemini_agent_backend_via_google_ai_api.md
new file mode 100644
index 0000000..8c1d2fd
--- /dev/null
+++ b/.storkit/work/5_done/345_story_gemini_agent_backend_via_google_ai_api.md
@@ -0,0 +1,25 @@
+---
+name: "Gemini agent backend via Google AI API"
+---
+
+# Story 345: Gemini agent backend via Google AI API
+
+## User Story
+
+As a project owner, I want to run agents using Gemini (2.5 Pro, etc.) via the Google AI API, so that I can use Google models for coding tasks alongside Claude and ChatGPT.
+
+## Acceptance Criteria
+
+- [ ] Implement GeminiRuntime using the AgentRuntime trait from refactor 343
+- [ ] Supports Gemini 2.5 Pro and other Gemini models via the Google AI generativeai API
+- [ ] Manages a conversation loop: send prompt + tool definitions, execute tool calls, continue until done
+- [ ] Agents connect to storkit's MCP server for all tool operations — no custom file/bash tools needed
+- [ ] MCP tool definitions are converted to Gemini function calling format
+- [ ] Configurable in project.toml: runtime = 'gemini', model = 'gemini-2.5-pro'
+- [ ] GOOGLE_AI_API_KEY passed via environment variable
+- [ ] Token usage tracked and logged to token_usage.jsonl
+- [ ] Agent output streams to the same event system (web UI, bot notifications)
+
+## Out of Scope
+
+- TBD
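
The project.toml criterion might look like the fragment below; the keys and values are taken directly from the criterion, while the enclosing `[agent]` table name is an assumption not stated in the story:

```toml
[agent]                   # hypothetical table name; the story only specifies the keys
runtime = 'gemini'
model = 'gemini-2.5-pro'
```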
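
The "MCP tool definitions are converted to Gemini function calling format" criterion can be sketched as a small mapping. This is an illustrative Python sketch, not the story's implementation (which would live in the Rust GeminiRuntime); the function names are hypothetical. It assumes MCP tools expose `name`, `description`, and a JSON-Schema `inputSchema`, while Gemini function declarations take `name`, `description`, and a `parameters` schema:

```python
def mcp_tool_to_gemini(tool: dict) -> dict:
    """Map one MCP tool definition to a Gemini function declaration.

    MCP tools carry a JSON Schema under `inputSchema`; Gemini expects a
    JSON-Schema-like `parameters` object. Keys Gemini's schema subset does
    not accept (e.g. `$schema`) are dropped here.
    """
    schema = dict(tool.get("inputSchema", {"type": "object", "properties": {}}))
    for unsupported in ("$schema", "additionalProperties"):
        schema.pop(unsupported, None)
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": schema,
    }


def mcp_tools_to_gemini(tools: list[dict]) -> list[dict]:
    """Wrap converted declarations in the `tools` payload shape used by Gemini."""
    return [{"function_declarations": [mcp_tool_to_gemini(t) for t in tools]}]
```

The per-field cleanup matters because Gemini rejects schema keywords outside its supported subset, so the converter should strip rather than forward the raw MCP schema verbatim.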
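
The conversation-loop criterion (send prompt + tool definitions, execute tool calls, continue until done) can be sketched as a driver that stops once the model replies with plain text instead of function calls. A hedged Python sketch under stated assumptions: `send_message` and `call_mcp_tool` are stand-ins for the real Gemini client and storkit's MCP client, and the message/response shapes are illustrative, not the actual API:

```python
from typing import Callable

def run_agent_turn(
    send_message: Callable[[list[dict]], dict],   # stand-in for the Gemini API call
    call_mcp_tool: Callable[[str, dict], str],    # stand-in for storkit's MCP client
    prompt: str,
    max_steps: int = 20,
) -> str:
    """Drive one agent conversation: prompt -> tool calls -> ... -> final text."""
    history = [{"role": "user", "text": prompt}]
    for _ in range(max_steps):
        reply = send_message(history)
        calls = reply.get("function_calls", [])
        if not calls:
            # No tool calls requested: the model's text is the final answer.
            return reply.get("text", "")
        # Record the model's tool requests, execute each via MCP, and feed
        # the results back so the next send_message call sees them.
        history.append({"role": "model", "function_calls": calls})
        for call in calls:
            result = call_mcp_tool(call["name"], call.get("args", {}))
            history.append({"role": "tool", "name": call["name"], "response": result})
    raise RuntimeError("agent did not finish within max_steps")
```

A `max_steps` bound (a choice of this sketch, not the story) keeps a misbehaving model from looping on tool calls forever.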