
[Feature] Ollama / local model LLMAdapter

Opened: Apr 1, 2026
Status: open
## Summary

Add an `LLMAdapter` implementation for Ollama, enabling local model support (Qwen, etc.).

## Motivation

Many users (especially from r/LocalLLaMA) want to run multi-agent workflows without depending on cloud APIs. The `LLMAdapter` interface only requires two methods (`chat()` and `stream()`), so the implementation cost should be low.

## Proposed Approach

- Implement `OllamaAdapter` that calls Ollama's `/api/chat` endpoint
- Support tool calling via Ollama's function-calling format
- Handle streaming responses (note: Ollama's `/api/chat` streams newline-delimited JSON rather than SSE)
- Allow configuring the base URL (default `http://localhost:11434`)

## Acceptance Criteria

- [ ] `OllamaAdapter` implements the `LLMAdapter` interface
- [ ] Works with `chat()` and `stream()` methods
- [ ] Tool calling support
- [ ] Example in README or docs
- [ ] Unit tests

Community contributions welcome! This is a great first issue if you want to get involved.
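To make the proposed approach concrete, here is a minimal sketch of what `OllamaAdapter` could look like. The exact `LLMAdapter` method signatures are an assumption (the interface definition isn't quoted in this issue), and the `qwen2.5` model name in the usage example is just an illustration; adjust both to match the project.

```python
import json
import urllib.request
from typing import Any, Dict, Iterator, List, Optional


class OllamaAdapter:
    """Sketch of an LLMAdapter for Ollama's /api/chat endpoint.

    Assumes the LLMAdapter interface wants chat() returning a single
    message dict and stream() yielding content deltas; adapt as needed.
    """

    def __init__(self, model: str, base_url: str = "http://localhost:11434"):
        self.model = model
        self.base_url = base_url.rstrip("/")

    def _payload(
        self,
        messages: List[Dict[str, Any]],
        tools: Optional[List[Dict[str, Any]]] = None,
        stream: bool = False,
    ) -> Dict[str, Any]:
        # Request body per Ollama's /api/chat schema.
        body: Dict[str, Any] = {
            "model": self.model,
            "messages": messages,
            "stream": stream,
        }
        if tools:
            body["tools"] = tools  # Ollama's function-calling format
        return body

    def _request(self, body: Dict[str, Any]) -> urllib.request.Request:
        return urllib.request.Request(
            f"{self.base_url}/api/chat",
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )

    def chat(
        self,
        messages: List[Dict[str, Any]],
        tools: Optional[List[Dict[str, Any]]] = None,
    ) -> Dict[str, Any]:
        req = self._request(self._payload(messages, tools))
        with urllib.request.urlopen(req) as resp:
            # Non-streaming responses arrive as a single JSON object
            # whose "message" field holds the assistant reply.
            return json.loads(resp.read())["message"]

    def stream(
        self,
        messages: List[Dict[str, Any]],
        tools: Optional[List[Dict[str, Any]]] = None,
    ) -> Iterator[str]:
        req = self._request(self._payload(messages, tools, stream=True))
        with urllib.request.urlopen(req) as resp:
            # Streaming responses are newline-delimited JSON chunks.
            for line in resp:
                chunk = json.loads(line)
                if not chunk.get("done"):
                    yield chunk["message"].get("content", "")


# Usage (requires a local Ollama server; model name is illustrative):
# adapter = OllamaAdapter("qwen2.5")
# reply = adapter.chat([{"role": "user", "content": "Hello"}])
```

Using only the standard library keeps the adapter dependency-free, though swapping in `httpx` or `requests` would be reasonable if the project already depends on one of them.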