storkit: accept 357_story_bot_assign_command_to_pre_assign_a_model_to_a_story
---
name: "Gemini agent backend via Google AI API"
---
# Story 345: Gemini agent backend via Google AI API
## User Story
As a project owner, I want to run agents using Gemini (2.5 Pro, etc.) via the Google AI API, so that I can use Google models for coding tasks alongside Claude and ChatGPT.
## Acceptance Criteria
- [ ] Implement GeminiRuntime using the AgentRuntime trait from refactor 343
- [ ] Supports Gemini 2.5 Pro and other Gemini models via the Google AI API (generativeai)
- [ ] Manages a conversation loop: send prompt + tool definitions, execute tool calls, continue until done
- [ ] Agents connect to storkit's MCP server for all tool operations — no custom file/bash tools needed
- [ ] MCP tool definitions are converted to Gemini function calling format
- [ ] Configurable in project.toml: runtime = 'gemini', model = 'gemini-2.5-pro'
- [ ] GOOGLE_AI_API_KEY passed via environment variable
- [ ] Token usage tracked and logged to token_usage.jsonl
- [ ] Agent output streams to the same event system (web UI, bot notifications)
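Per the configuration criterion, the project.toml entry might look like the fragment below; only the runtime and model keys come from the story, and the [agent] table name is an assumption about storkit's config schema:

```toml
# Hypothetical project.toml fragment — the [agent] table name is assumed.
[agent]
runtime = 'gemini'
model = 'gemini-2.5-pro'
```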
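The MCP-to-Gemini conversion criterion above is essentially a data-shape translation: MCP tool definitions carry a JSON Schema under inputSchema, while Gemini function declarations expect name, description, and parameters. A minimal sketch follows; the exact set of schema keys Gemini rejects is an assumption here, so treat the drop list as illustrative rather than exhaustive:

```python
def mcp_tool_to_gemini(tool: dict) -> dict:
    """Map one MCP tool definition to a Gemini function declaration.

    MCP tools expose {"name", "description", "inputSchema"}; Gemini
    function calling expects {"name", "description", "parameters"}.
    """
    schema = dict(tool.get("inputSchema") or {"type": "object"})
    # Gemini accepts only a subset of JSON Schema; drop keys assumed
    # here to be unsupported.
    for key in ("$schema", "additionalProperties"):
        schema.pop(key, None)
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": schema,
    }
```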
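The conversation-loop criterion can be sketched independently of the concrete runtime. In the sketch below, send_to_model and call_mcp_tool are hypothetical stand-ins for the Gemini API client and storkit's MCP client, not real storkit functions:

```python
def run_agent(prompt, tools, send_to_model, call_mcp_tool, max_turns=20):
    """Drive the loop: send prompt + tool definitions, execute any
    requested tool calls, feed results back, stop on a final answer."""
    messages = [{"role": "user", "text": prompt}]
    for _ in range(max_turns):
        reply = send_to_model(messages, tools)
        calls = reply.get("tool_calls", [])
        if not calls:
            return reply["text"]  # no tool calls: the model is done
        messages.append({"role": "model", "tool_calls": calls})
        for call in calls:
            # All tool operations go through the MCP server.
            result = call_mcp_tool(call["name"], call.get("args", {}))
            messages.append({"role": "tool", "name": call["name"],
                             "result": result})
    raise RuntimeError("agent did not finish within max_turns")
```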
## Out of Scope
- TBD