Implemented abstract LLM provider interface to enable multi-provider support.

Key components:
- LlmProviderInterface: Abstract contract for all LLM providers
- LlmProviderConfig: Base configuration interface
- LlmProviderHealthStatus: Standardized health check response
- LlmProviderType: Type discriminator for runtime checks

Methods defined:
- initialize(): Async provider setup
- checkHealth(): Health status verification
- listModels(): Available model enumeration
- chat(): Synchronous completion
- chatStream(): Streaming completion (async generator)
- embed(): Embedding generation
- getConfig(): Configuration access

All methods fully documented with JSDoc. 13 tests written and passing. Type checking verified.

Fixes #122

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
# Issue #122: Create LLM Provider Interface

## Objective

Define the abstract contract that all LLM providers (Ollama, Claude, OpenAI) must implement to enable multi-provider support.

## Approach

### Current State

- `LlmService` is hardcoded to Ollama
- Direct coupling to the `ollama` npm package
- Methods: `chat()`, `chatStream()`, `embed()`, `listModels()`, `checkHealth()`

### Target Architecture

```
LlmProviderInterface (abstract)
├── OllamaProvider (implements)
├── ClaudeProvider (implements)
└── OpenAIProvider (implements)

LlmManagerService
├── manages provider instances
└── routes requests to appropriate provider
```
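
A minimal sketch of how `LlmManagerService` might register and route to providers. The `register` helper and the method shapes are assumptions for illustration, not the actual implementation (the real service may wire providers via dependency injection):

```typescript
// Hypothetical sketch: the manager keeps a registry keyed by provider type
// and forwards each request to the matching provider instance.
type LlmProviderType = 'ollama' | 'claude' | 'openai';

interface LlmProviderInterface {
  readonly type: LlmProviderType;
  chat(request: { prompt: string }): Promise<{ content: string }>;
}

class LlmManagerService {
  private readonly providers = new Map<LlmProviderType, LlmProviderInterface>();

  // Hypothetical registration helper.
  register(provider: LlmProviderInterface): void {
    this.providers.set(provider.type, provider);
  }

  // Routes a chat request to the provider selected by `type`.
  chat(type: LlmProviderType, request: { prompt: string }): Promise<{ content: string }> {
    const provider = this.providers.get(type);
    if (!provider) {
      throw new Error(`No provider registered for type: ${type}`);
    }
    return provider.chat(request);
  }
}
```

Because every provider satisfies the same interface, the manager never needs provider-specific branching; adding a new backend is just another `register` call.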

### Interface Methods (from current LlmService)

1. `chat(request)` - Synchronous chat completion
2. `chatStream(request)` - Streaming chat completion (async generator)
3. `embed(request)` - Generate embeddings
4. `listModels()` - List available models
5. `checkHealth()` - Health check
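
A sketch of what this contract could look like in TypeScript. The DTO names (`ChatRequest`, `ChatResponse`, etc.) are placeholders, not the project's actual DTOs:

```typescript
// Placeholder DTOs; the real interface reuses existing LlmService DTOs.
interface ChatRequest { model: string; messages: { role: string; content: string }[]; }
interface ChatResponse { content: string; }
interface EmbedRequest { model: string; input: string[]; }
interface EmbedResponse { embeddings: number[][]; }
interface LlmProviderHealthStatus { healthy: boolean; error?: string; }

interface LlmProviderInterface {
  /** Single-shot chat completion. */
  chat(request: ChatRequest): Promise<ChatResponse>;
  /** Streaming chat completion, yielding partial chunks. */
  chatStream(request: ChatRequest): AsyncGenerator<string>;
  /** Embedding generation. */
  embed(request: EmbedRequest): Promise<EmbedResponse>;
  /** Enumerate models the provider can serve. */
  listModels(): Promise<string[]>;
  /** Standardized health status. */
  checkHealth(): Promise<LlmProviderHealthStatus>;
}

// A trivial stub showing the contract is implementable.
const stub: LlmProviderInterface = {
  async chat() { return { content: 'ok' }; },
  async *chatStream() { yield 'ok'; },
  async embed(req) { return { embeddings: req.input.map(() => [0]) }; },
  async listModels() { return ['stub-model']; },
  async checkHealth() { return { healthy: true }; },
};
```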

### Provider Configuration

Each provider needs a different config:

- Ollama: `{ host, timeout }`
- Claude: `{ apiKey, baseUrl?, timeout? }`
- OpenAI: `{ apiKey, baseUrl?, organization?, timeout? }`

A generic config interface that providers can extend is needed.
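
One way to model this: a shared base interface with provider-specific extensions. Field names follow the configs listed above; the interface names and the default host value are illustrative assumptions:

```typescript
// Base config shared by all providers; timeout is optional here and
// tightened to required where a provider demands it.
interface LlmProviderConfig {
  timeout?: number;
}

interface OllamaProviderConfig extends LlmProviderConfig {
  host: string;
  timeout: number; // required for Ollama per the list above
}

interface ClaudeProviderConfig extends LlmProviderConfig {
  apiKey: string;
  baseUrl?: string;
}

interface OpenAIProviderConfig extends LlmProviderConfig {
  apiKey: string;
  baseUrl?: string;
  organization?: string;
}

// Example values (placeholders).
const ollamaConfig: OllamaProviderConfig = { host: 'http://localhost:11434', timeout: 30_000 };
const claudeConfig: ClaudeProviderConfig = { apiKey: 'placeholder-key' };
```

An extending interface may redeclare an optional property as required, which lets the base stay permissive while Ollama's config enforces `timeout`.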

## Progress

- [x] Write interface tests (TDD - RED)
- [x] Create base types and DTOs
- [x] Implement LlmProviderInterface
- [x] Implement LlmProviderConfig interface
- [x] Add JSDoc documentation
- [x] Run tests (TDD - GREEN) - All 13 tests passed
- [x] Type checking passed
- [x] Refactor if needed (TDD - REFACTOR) - No refactoring needed

## Testing

Created a mock provider implementation to test the interface contract.
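
A rough idea of what such a mock might look like. The real spec uses vitest; this standalone sketch uses `node:assert` instead, and all method shapes are assumptions:

```typescript
import { ok, strictEqual } from 'node:assert';

// Hypothetical mock provider used to exercise the contract end to end.
class MockLlmProvider {
  private initialized = false;

  async initialize(): Promise<void> {
    this.initialized = true;
  }

  async checkHealth(): Promise<{ healthy: boolean }> {
    return { healthy: this.initialized };
  }

  async listModels(): Promise<string[]> {
    return ['mock-model'];
  }

  async chat(request: { messages: { content: string }[] }): Promise<{ content: string }> {
    const last = request.messages[request.messages.length - 1];
    return { content: `echo: ${last.content}` };
  }

  // Streams the reply word by word, mimicking a real streaming provider.
  async *chatStream(request: { messages: { content: string }[] }): AsyncGenerator<string> {
    const last = request.messages[request.messages.length - 1];
    for (const word of `echo: ${last.content}`.split(' ')) yield word;
  }

  async embed(request: { input: string[] }): Promise<{ embeddings: number[][] }> {
    return { embeddings: request.input.map(() => [0, 0, 0]) };
  }
}

// Exercise the streaming contract: collect chunks from the async generator.
(async () => {
  const provider = new MockLlmProvider();
  await provider.initialize();
  ok((await provider.checkHealth()).healthy);

  const chunks: string[] = [];
  for await (const chunk of provider.chatStream({ messages: [{ content: 'hi there' }] })) {
    chunks.push(chunk);
  }
  strictEqual(chunks.join(' '), 'echo: hi there');
})();
```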
**Test Results:**

```
✓ src/llm/providers/llm-provider.interface.spec.ts (13 tests) 7ms
  - initialization (2 tests)
  - checkHealth (2 tests)
  - listModels (2 tests)
  - chat (2 tests)
  - chatStream (1 test)
  - embed (2 tests)
  - getConfig (2 tests)
```
**Type Check:** ✅ Passed

## Notes

- Interface should be provider-agnostic
- Reuse existing DTOs from the current LlmService
- Consider async initialization for providers
- Health check should return a standardized status
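
For the last point, one possible standardized status shape. The field names here are assumptions, not the committed `LlmProviderHealthStatus` definition:

```typescript
// Hypothetical standardized health status returned by every provider.
interface LlmProviderHealthStatus {
  healthy: boolean;
  provider: 'ollama' | 'claude' | 'openai';
  latencyMs?: number; // probe round-trip time, when measured
  error?: string;     // populated only when healthy is false
}

const degraded: LlmProviderHealthStatus = {
  healthy: false,
  provider: 'ollama',
  error: 'connection refused',
};

const healthyStatus: LlmProviderHealthStatus = {
  healthy: true,
  provider: 'claude',
  latencyMs: 42,
};
```

A single shape like this lets `LlmManagerService` aggregate health across providers without caring which backend produced each status.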