Ollama integration (local/remote) #21

Closed
opened 2026-01-28 19:05:02 +00:00 by jason.woltje · 0 comments
Owner

Implement configurable Ollama integration.

Configuration:

  • OLLAMA_MODE: local | remote
  • OLLAMA_ENDPOINT: URL
  • OLLAMA_MODEL: model name
  • OLLAMA_EMBEDDING_MODEL: embedding model
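The four variables above could be loaded with a small helper like the following sketch. The function name and the fallback defaults (local port 11434, the model names) are assumptions for illustration, not part of this issue:

```python
import os

# Hypothetical config loader for the OLLAMA_* variables listed above.
# Default values are assumptions (11434 is Ollama's standard local port).
def load_ollama_config() -> dict:
    mode = os.environ.get("OLLAMA_MODE", "local")
    if mode not in ("local", "remote"):
        raise ValueError(f"OLLAMA_MODE must be 'local' or 'remote', got {mode!r}")
    return {
        "mode": mode,
        "endpoint": os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434"),
        "model": os.environ.get("OLLAMA_MODEL", "llama3"),
        "embedding_model": os.environ.get("OLLAMA_EMBEDDING_MODEL", "nomic-embed-text"),
    }
```

Validating OLLAMA_MODE at load time surfaces a misconfigured deployment immediately instead of at the first AI request.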

Features:

  • Chat completions
  • Embeddings generation
  • Model switching
  • Fallback handling
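One way to read "fallback handling" is trying each configured endpoint in order and returning the first success. The sketch below assumes that design; `call` stands in for the actual HTTP request:

```python
# Assumed fallback strategy: iterate over candidate endpoints (e.g. local
# first, then remote) and return the first successful result.
def with_fallback(endpoints, call):
    last_error = None
    for endpoint in endpoints:
        try:
            return call(endpoint)
        except ConnectionError as exc:
            last_error = exc  # remember the failure, try the next endpoint
    raise RuntimeError(f"all Ollama endpoints failed: {last_error}")
```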

API:

  • POST /api/ai/chat - Chat with AI
  • POST /api/ai/embed - Generate embeddings

Service layer:

  • OllamaService with configurable endpoint
  • Health check on startup
  • Timeout handling
  • Rate limiting
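A minimal sketch of the service layer, covering the configurable endpoint, startup health check, and timeout handling (rate limiting omitted for brevity). This is an assumed design using only the standard library; Ollama's `GET /api/tags` and `POST /api/chat` routes are real, but everything else is illustrative:

```python
import json
import urllib.request

class OllamaService:
    """Sketch of the service layer described above (assumed design)."""

    def __init__(self, endpoint: str, model: str, timeout: float = 30.0):
        self.endpoint = endpoint.rstrip("/")  # tolerate trailing slashes
        self.model = model
        self.timeout = timeout  # applied to every outgoing request

    def health_check(self) -> bool:
        # Ollama answers GET /api/tags with its installed models; any
        # 200 response is treated as healthy. Call this once on startup.
        try:
            with urllib.request.urlopen(f"{self.endpoint}/api/tags",
                                        timeout=self.timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def chat(self, messages: list[dict]) -> dict:
        payload = json.dumps({"model": self.model,
                              "messages": messages,
                              "stream": False}).encode()
        req = urllib.request.Request(
            f"{self.endpoint}/api/chat", data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=self.timeout) as resp:
            return json.load(resp)
```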

Requirements:

  • Works with local Ollama (Docker)
  • Works with remote Ollama (homelab)
  • Works with OpenAI-compatible APIs (together.ai)
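Supporting both native Ollama and OpenAI-compatible backends mostly comes down to path selection: native Ollama serves `POST /api/chat`, while OpenAI-compatible servers (together.ai, or Ollama's own compatibility layer) serve `POST /v1/chat/completions`. A sketch, where the `style` parameter is an assumption for illustration:

```python
# Assumed provider-agnostic URL selection: pick the chat path based on
# whether the backend speaks native Ollama or the OpenAI wire format.
def chat_url(endpoint: str, style: str = "ollama") -> str:
    base = endpoint.rstrip("/")
    if style == "openai":
        return f"{base}/v1/chat/completions"  # together.ai and compatible servers
    return f"{base}/api/chat"                 # native Ollama
```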
jason.woltje added this to the M3-Features (0.0.3) milestone 2026-01-28 19:05:02 +00:00
jason.woltje added the p1, ai, api labels 2026-01-28 19:05:02 +00:00
Reference: mosaic/stack#21