Commit Graph

4 Commits

eca2c46e9d merge: resolve conflicts with develop (telemetry + lockfile)
Some checks failed:
  ci/woodpecker/push/infra         success
  ci/woodpecker/push/api           failed
  ci/woodpecker/push/web           failed
  ci/woodpecker/push/orchestrator  failed
  ci/woodpecker/push/coordinator   success
Keep both the Mosaic Telemetry section (from develop) and the Matrix Dev
Environment section (from the feature branch) in .env.example.
Regenerate pnpm-lock.yaml with both dependency trees merged.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 12:12:43 -06:00
fcecf3654b feat(#371): track LLM task completions via Mosaic Telemetry
- Create LlmTelemetryTrackerService for non-blocking event emission
- Normalize token usage across Anthropic, OpenAI, Ollama providers
- Add cost table with per-token pricing in microdollars
- Instrument chat, chatStream, and embed methods
- Infer task type from calling context
- Aggregate streaming tokens after stream ends with fallback estimation
- Add 69 unit tests for tracker service, cost table, and LLM service

Refs #371

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-15 02:10:22 -06:00
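
As a gloss on the commit above: a minimal TypeScript sketch of the two
mechanisms it names, normalizing provider token counts and pricing them from
a microdollar cost table. Every name, price, and field shape below is an
assumption for illustration; the repository's actual LlmTelemetryTrackerService
is not reproduced here.

    // Hypothetical sketch: names, shapes, and prices are illustrative.
    interface TokenUsage {
      inputTokens: number;
      outputTokens: number;
      estimated: boolean; // true when counts came from fallback estimation
    }

    // Per-token prices in microdollars (1 USD = 1,000,000 microdollars),
    // so $3 per million input tokens is stored as 3.
    const COST_TABLE: Record<string, { input: number; output: number }> = {
      'example-hosted-model': { input: 3, output: 15 }, // made-up prices
      'example-local-model': { input: 0, output: 0 },   // self-hosted
    };

    // Providers report usage under different field names; map each onto
    // one shape so the tracker sees a single format.
    function normalizeUsage(provider: string, raw: any): TokenUsage {
      switch (provider) {
        case 'anthropic': // usage.input_tokens / usage.output_tokens
          return { inputTokens: raw.usage.input_tokens,
                   outputTokens: raw.usage.output_tokens, estimated: false };
        case 'openai': // usage.prompt_tokens / usage.completion_tokens
          return { inputTokens: raw.usage.prompt_tokens,
                   outputTokens: raw.usage.completion_tokens, estimated: false };
        case 'ollama': // prompt_eval_count / eval_count
          return { inputTokens: raw.prompt_eval_count ?? 0,
                   outputTokens: raw.eval_count ?? 0, estimated: false };
        default:
          // Fallback estimation, e.g. when a stream ends without usage
          // data: a rough chars/4 heuristic stands in for a tokenizer.
          return {
            inputTokens: Math.ceil((raw.promptText?.length ?? 0) / 4),
            outputTokens: Math.ceil((raw.responseText?.length ?? 0) / 4),
            estimated: true,
          };
      }
    }

    function costMicrodollars(model: string, usage: TokenUsage): number {
      const price = COST_TABLE[model] ?? { input: 0, output: 0 };
      return usage.inputTokens * price.input + usage.outputTokens * price.output;
    }

The non-blocking part would then wrap the computed cost in an event and emit
it fire-and-forget, so a telemetry outage never delays a chat response.
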
1f97e6de40 feat(#127): refactor LlmService to use provider pattern
Refactor LlmService to delegate to LlmManagerService instead of using
Ollama directly. This enables support for multiple providers and
user-specific provider configuration.

Changes:
- Remove direct Ollama client from LlmService
- Delegate all LLM operations to provider via LlmManagerService
- Update health status to use provider-agnostic interface
- Add PrismaModule to LlmModule for manager service
- Maintain backward compatibility with existing API
- Achieve 89.74% test coverage

Fixes #127

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 12:33:56 -06:00
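
The delegation described in the commit above might look like the following
NestJS-flavored sketch: LlmService keeps its public surface but forwards every
call to whichever provider LlmManagerService resolves. The interface and method
names here are guesses at the shape, not the repository's actual code, and the
provider map is built by hand rather than through Nest's DI to keep the sketch
self-contained.

    // Hypothetical provider-pattern sketch; all names are assumptions.
    import { Injectable } from '@nestjs/common';

    interface ChatMessage {
      role: 'system' | 'user' | 'assistant';
      content: string;
    }

    // Provider-agnostic contract every backend implements.
    interface LlmProvider {
      chat(messages: ChatMessage[]): Promise<string>;
      embed(text: string): Promise<number[]>;
      healthy(): Promise<boolean>;
    }

    @Injectable()
    export class LlmManagerService {
      // In the real service this would come from Nest DI plus Prisma-backed
      // per-user configuration; a plain map keeps the sketch runnable.
      constructor(private readonly providers: Map<string, LlmProvider>) {}

      resolve(userId: string): LlmProvider {
        // A user-specific lookup would go here; falling back to a default
        // preserves the old single-provider behavior.
        return this.providers.get(userId) ?? this.providers.get('default')!;
      }
    }

    @Injectable()
    export class LlmService {
      constructor(private readonly manager: LlmManagerService) {}

      // Same public methods as before the refactor, so existing callers
      // and the HTTP layer are untouched (backward compatible).
      chat(userId: string, messages: ChatMessage[]): Promise<string> {
        return this.manager.resolve(userId).chat(messages);
      }

      embed(userId: string, text: string): Promise<number[]> {
        return this.manager.resolve(userId).embed(text);
      }

      async healthStatus(userId: string): Promise<{ healthy: boolean }> {
        // Provider-agnostic: no Ollama-specific fields leak out.
        return { healthy: await this.manager.resolve(userId).healthy() };
      }
    }
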
f706b3b982 feat(#21): implement ollama integration
Author: Jason Woltje
- Add Ollama client library (ollama npm package)
- Create LlmService for chat completion and embeddings
- Support streaming responses via Server-Sent Events
- Add configuration via env vars (OLLAMA_HOST, OLLAMA_TIMEOUT)
- Create endpoints: GET /llm/health, GET /llm/models, POST /llm/chat, POST /llm/embed
- Replace old OllamaModule with new LlmModule
- Add comprehensive tests with >85% coverage

Closes #21
2026-01-29 19:28:31 -06:00
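
A compressed sketch of what that first integration might have looked like,
using the real ollama npm package the commit names; everything beyond the
client calls (NestJS module wiring, DTOs, the SSE transport itself,
OLLAMA_TIMEOUT handling) is omitted or assumed.

    import { Ollama } from 'ollama';

    // OLLAMA_HOST is one of the env vars the commit mentions; timeout
    // wiring is left out of this sketch.
    const client = new Ollama({
      host: process.env.OLLAMA_HOST ?? 'http://localhost:11434',
    });

    // Backs POST /llm/chat: one request, one full completion.
    export async function chat(
      model: string,
      messages: { role: string; content: string }[],
    ): Promise<string> {
      const res = await client.chat({ model, messages });
      return res.message.content;
    }

    // Streaming variant: with stream: true the ollama package yields an
    // async iterable of chunks, which maps naturally onto Server-Sent
    // Events (each chunk becomes one `data:` frame on the wire).
    export async function* chatStream(
      model: string,
      messages: { role: string; content: string }[],
    ): AsyncGenerator<string> {
      const stream = await client.chat({ model, messages, stream: true });
      for await (const part of stream) {
        yield part.message.content;
      }
    }

    // Backs POST /llm/embed.
    export async function embed(model: string, prompt: string): Promise<number[]> {
      const res = await client.embeddings({ model, prompt });
      return res.embedding;
    }

    // Backs GET /llm/models.
    export async function listModels(): Promise<string[]> {
      const res = await client.list();
      return res.models.map((m) => m.name);
    }

    // Backs GET /llm/health: reachable if the model list call succeeds.
    export async function health(): Promise<boolean> {
      try {
        await client.list();
        return true;
      } catch {
        return false;
      }
    }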