# Jarvis r1 Backend Migration Plan

## Overview
Migrate production-ready backend features from ~/src/jarvis (r1) to ~/src/mosaic-stack (r2) to accelerate development and leverage proven patterns.
Source: ~/src/jarvis/backend/src/
Target: ~/src/mosaic-stack/apps/api/src/
Related Issue: #30 (Migration scripts from jarvis-brain)
## Stack Compatibility
| Aspect | Jarvis r1 | Mosaic Stack | Compatible | Notes |
|---|---|---|---|---|
| Framework | NestJS 10 | NestJS 11 | ✅ | Minor version bump |
| ORM | Prisma | Prisma 6.19.2 | ✅ | Same tool |
| Database | PostgreSQL | PostgreSQL 17 | ✅ | Version upgrade |
| Cache | Redis | Valkey | ✅ | Redis-compatible |
| Tracing | OpenTelemetry | None | ⚠️ | Need to add |
| LLM | Multi-provider | Ollama only | ⚠️ | Need abstraction |
| Config | Database-backed | YAML files | ⚠️ | Architecture difference |
## Key Features to Migrate

### 1. Multi-Provider LLM Abstraction
Source: ~/src/jarvis/backend/src/llm/providers/
Value: Supports Ollama, Claude (Anthropic), OpenAI, and Z.ai behind a unified interface.
Core Components:
- `base-llm-provider.interface.ts` - Abstract provider interface
- `ollama.provider.ts` - Local LLM provider with retry/backoff
- `claude.provider.ts` - Anthropic API provider
- `openai.provider.ts` - OpenAI API provider
- `llm.manager.ts` - Provider instance management
- `llm.service.ts` - High-level service orchestration
Target Location: apps/api/src/llm/providers/
Milestone: M3-Features (#21 Ollama integration) + M4-MoltBot (agent support)
Adapter Needed:
- Refactor existing `LlmService` to use the provider pattern
- Move Ollama-specific code into `OllamaProvider`
- Add provider registration/discovery
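The refactor centers on a shared provider interface. A minimal sketch of what that abstraction could look like — the interface and manager names here are illustrative, not the actual jarvis r1 code:

```typescript
// Sketch of a provider abstraction; names are illustrative.

export interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export interface ChatResult {
  content: string;
  model: string;
  promptTokens?: number;
  completionTokens?: number;
}

// Every provider implements the same surface, so the manager can swap them.
export interface LlmProvider {
  readonly type: "ollama" | "claude" | "openai" | "zai";
  chat(messages: ChatMessage[], options?: { model?: string }): Promise<ChatResult>;
}

// Keeps one instance per registered provider and tracks a default.
export class LlmManager {
  private providers = new Map<string, LlmProvider>();
  private defaultType?: string;

  register(provider: LlmProvider, isDefault = false): void {
    this.providers.set(provider.type, provider);
    if (isDefault || !this.defaultType) this.defaultType = provider.type;
  }

  resolve(type?: string): LlmProvider {
    const key = type ?? this.defaultType;
    const provider = key ? this.providers.get(key) : undefined;
    if (!provider) throw new Error(`No LLM provider registered for '${key}'`);
    return provider;
  }
}
```

Under this shape, the existing `LlmService` would call `manager.resolve()` instead of talking to Ollama directly.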
### 2. Database-Backed Configuration Pattern
Source: ~/src/jarvis/backend/src/prisma/schema.prisma (llm_provider_instances table)
Value: Single source of truth for all service instances. No YAML duplication, hot reload without restart.
Schema Pattern:
```prisma
model LlmProviderInstance {
  id            String   @id @default(uuid())
  provider_type String   // 'ollama' | 'claude' | 'openai' | 'zai'
  display_name  String   @db.VarChar(100)
  user_id       String?  @db.Uuid // NULL = system-level
  config        Json     // Provider-specific settings
  is_default    Boolean  @default(false)
  is_enabled    Boolean  @default(true)
  created_at    DateTime @default(now())
  updated_at    DateTime @updatedAt

  user User? @relation(fields: [user_id], references: [id], onDelete: Cascade)

  @@index([user_id])
  @@index([is_default, is_enabled])
}
```
Target Location: apps/api/prisma/schema.prisma
Milestone: M3-Features (#21 Ollama integration)
Extension Opportunity: Generalize to ServiceInstance for database, cache, etc.
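One consequence of the schema above is that provider selection becomes a query rather than a config-file lookup. A sketch of the precedence rule — a user's own default wins over the system-level one (`user_id = NULL`); field names mirror `LlmProviderInstance`, the function itself is illustrative:

```typescript
// Selection sketch for database-backed provider instances.
// In practice this filtering would live in the Prisma query
// (backed by @@index([is_default, is_enabled])); the sketch
// just makes the precedence order explicit.

interface ProviderInstanceRow {
  id: string;
  provider_type: string;
  user_id: string | null;
  is_default: boolean;
  is_enabled: boolean;
}

export function pickProviderInstance(
  rows: ProviderInstanceRow[],
  userId: string,
): ProviderInstanceRow | undefined {
  const enabled = rows.filter((r) => r.is_enabled);
  // User's own default, then any user instance, then the system default.
  return (
    enabled.find((r) => r.user_id === userId && r.is_default) ??
    enabled.find((r) => r.user_id === userId) ??
    enabled.find((r) => r.user_id === null && r.is_default)
  );
}
```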
### 3. OpenTelemetry Tracing Infrastructure
Source: ~/src/jarvis/backend/src/telemetry/
Value: Distributed tracing, performance monitoring, GenAI semantic conventions.
Core Components:
- `telemetry.service.ts` - OTEL SDK initialization
- `telemetry.interceptor.ts` - Automatic span creation for HTTP/GraphQL
- `llm-telemetry.decorator.ts` - GenAI-specific spans
- `span-context.service.ts` - Context propagation
Features:
- Automatic request tracing
- LLM call instrumentation (token counts, latency, costs)
- Error tracking with stack traces
- Jaeger/Zipkin export
- GenAI semantic conventions (`gen_ai.request.model`, `gen_ai.response.finish_reasons`)
Target Location: apps/api/src/telemetry/
Milestone: M3-Features (new infrastructure)
Integration Points:
- Instrument all LLM provider calls
- Add to MoltBot agent execution
- Connect to existing logging
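For the LLM instrumentation, span attributes would follow the GenAI semantic conventions named above. A dependency-free sketch of assembling them — the `gen_ai.usage.*` and `gen_ai.response.model` keys are assumptions beyond the two conventions the plan names, and the real decorator would set these on an OpenTelemetry span:

```typescript
// Sketch: assembling GenAI semantic-convention attributes for an LLM span.
// The span API itself is elided to keep the sketch dependency-free.

interface LlmCallInfo {
  model: string;
  responseModel?: string;
  promptTokens?: number;
  completionTokens?: number;
  finishReasons?: string[];
}

export function genAiSpanAttributes(
  info: LlmCallInfo,
): Record<string, string | number | string[]> {
  const attrs: Record<string, string | number | string[]> = {
    "gen_ai.request.model": info.model,
  };
  if (info.responseModel) attrs["gen_ai.response.model"] = info.responseModel;
  if (info.promptTokens !== undefined) attrs["gen_ai.usage.input_tokens"] = info.promptTokens;
  if (info.completionTokens !== undefined) attrs["gen_ai.usage.output_tokens"] = info.completionTokens;
  if (info.finishReasons) attrs["gen_ai.response.finish_reasons"] = info.finishReasons;
  return attrs;
}
```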
### 4. Personality System Backend
Source: ~/src/jarvis/backend/src/personality/
Value: Dynamic personality/assistant configuration with prompt templates.
Status in Mosaic Stack:
- ✅ UI exists (`apps/web/src/app/admin/personality/page.tsx`)
- ❌ Backend implementation missing
Core Components:
- `personality.entity.ts` - Personality model
- `personality.service.ts` - CRUD + template rendering
- `personality-prompt.builder.ts` - Context injection
Schema:
```prisma
model Personality {
  id              String   @id @default(uuid())
  name            String   @unique @db.VarChar(100)
  display_name    String   @db.VarChar(200)
  system_prompt   String   @db.Text
  temperature     Float    @default(0.7)
  max_tokens      Int      @default(4000)
  llm_provider_id String?  @db.Uuid
  is_active       Boolean  @default(true)
  created_at      DateTime @default(now())
  updated_at      DateTime @updatedAt

  llm_provider  LlmProviderInstance? @relation(fields: [llm_provider_id], references: [id])
  conversations Conversation[]
}
```
Target Location: apps/api/src/personality/
Milestone: M3-Features (#82 Personality Module)
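The context-injection step the prompt builder performs can be sketched minimally as template substitution over the stored `system_prompt`. The `{{placeholder}}` syntax is an assumption; jarvis r1's actual builder may use a different scheme:

```typescript
// Sketch of personality prompt rendering: replaces {{key}} placeholders
// with values from the request context. Unknown placeholders are left
// intact so missing context is visible rather than silently dropped.

export function buildSystemPrompt(
  systemPrompt: string,
  context: Record<string, string>,
): string {
  return systemPrompt.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in context ? context[key] : match,
  );
}
```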
### 5. Workspace-Scoped LLM Configuration
Source: ~/src/jarvis/backend/src/workspaces/workspace-llm.service.ts
Value: Per-workspace LLM provider selection and settings.
Pattern:
- System-level providers (user_id = NULL)
- User-level providers (user_id = UUID)
- Workspace-level overrides (via WorkspaceSettings)
Target Schema Addition:
```prisma
model WorkspaceSettings {
  id                   String  @id @default(uuid())
  workspace_id         String  @unique @db.Uuid
  default_llm_provider String? @db.Uuid
  default_personality  String? @db.Uuid
  settings             Json    // Other workspace preferences

  workspace    Workspace            @relation(fields: [workspace_id], references: [id], onDelete: Cascade)
  llm_provider LlmProviderInstance? @relation(fields: [default_llm_provider], references: [id])
  personality  Personality?         @relation(fields: [default_personality], references: [id])
}
```
Target Location: Extend existing Workspace model
Milestone: M3-Features
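The three scoping levels resolve in a fixed order: workspace override first, then the user's default, then the system default. An illustrative sketch of that chain (not the actual service API):

```typescript
// Sketch of the workspace > user > system override chain.
// Field names are illustrative.

export interface LlmDefaults {
  workspaceProviderId?: string | null;
  userProviderId?: string | null;
  systemProviderId: string;
}

export function resolveProviderId(d: LlmDefaults): string {
  // Nullish coalescing walks the chain from most to least specific.
  return d.workspaceProviderId ?? d.userProviderId ?? d.systemProviderId;
}
```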
### 6. MCP (Model Context Protocol) Infrastructure
Source: ~/src/jarvis/backend/src/mcp/
Status: Phase 1 complete (hub, stdio transport, 26 passing tests)
Value: Agent tool integration framework.
Core Components:
- `mcp-hub.service.ts` - Central registry for MCP servers
- `mcp-server.interface.ts` - Server lifecycle management
- `stdio-transport.ts` - Process-based communication
- `tool-registry.ts` - Available tools catalog
Target Location: apps/api/src/mcp/
Milestone: M4-MoltBot (agent skills)
Integration Points:
- Connect to Brain query API (#22)
- Provide tools for agent sessions
- Enable skill discovery
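The tool registry being ported can be sketched as follows — servers register their tools on startup, and agents discover them by name. The shapes are illustrative, not the jarvis r1 types:

```typescript
// Sketch of an MCP tool registry: a flat catalog keyed by tool name,
// with each entry remembering which server provides it.

export interface McpTool {
  name: string;
  description: string;
  serverId: string;
}

export class ToolRegistry {
  private tools = new Map<string, McpTool>();

  // Called when an MCP server completes its handshake.
  registerServerTools(serverId: string, tools: Omit<McpTool, "serverId">[]): void {
    for (const t of tools) this.tools.set(t.name, { ...t, serverId });
  }

  // Agents enumerate the catalog during skill discovery.
  list(): McpTool[] {
    return [...this.tools.values()];
  }

  find(name: string): McpTool | undefined {
    return this.tools.get(name);
  }
}
```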
## Migration Phases

### Phase 1: LLM Abstraction (5-7 days)
Milestone: M3-Features
| Task | Files | Priority |
|---|---|---|
| Create provider interface | `llm/providers/base-llm-provider.interface.ts` | P0 |
| Port Ollama provider | `llm/providers/ollama.provider.ts` | P0 |
| Add OpenAI provider | `llm/providers/openai.provider.ts` | P1 |
| Add Claude provider | `llm/providers/claude.provider.ts` | P1 |
| Create LLM manager | `llm/llm.manager.ts` | P0 |
| Refactor existing LlmService | `llm/llm.service.ts` | P0 |
| Add Prisma schema | `prisma/schema.prisma` (LlmProviderInstance) | P0 |
| Create admin API endpoints | `llm/llm.controller.ts` | P1 |
| Update tests | `llm/*.spec.ts` | P1 |
Success Criteria:
- ✅ All existing Ollama functionality works
- ✅ Can add/remove/switch providers via API
- ✅ No YAML configuration needed
- ✅ Tests pass
### Phase 2: Personality Backend (2-3 days)
Milestone: M3-Features (#82)
| Task | Files | Priority |
|---|---|---|
| Add Prisma schema | `prisma/schema.prisma` (Personality) | P0 |
| Create personality service | `personality/personality.service.ts` | P0 |
| Create CRUD endpoints | `personality/personality.controller.ts` | P0 |
| Build prompt builder | `personality/personality-prompt.builder.ts` | P1 |
| Connect to existing UI | Wire up API calls | P0 |
| Add tests | `personality/*.spec.ts` | P1 |
Success Criteria:
- ✅ UI can create/edit personalities
- ✅ Personalities persist to database
- ✅ Chat uses selected personality
- ✅ Prompt templates render correctly
### Phase 3: OpenTelemetry (3-4 days)
Milestone: M3-Features (infrastructure)
| Task | Files | Priority |
|---|---|---|
| Add OTEL dependencies | `package.json` | P0 |
| Create telemetry service | `telemetry/telemetry.service.ts` | P0 |
| Add HTTP interceptor | `telemetry/telemetry.interceptor.ts` | P0 |
| Create LLM decorator | `telemetry/llm-telemetry.decorator.ts` | P1 |
| Instrument LLM providers | All provider files | P1 |
| Configure Jaeger export | `main.ts` | P2 |
| Add trace context to logs | Update logging | P2 |
| Documentation | `docs/3-architecture/telemetry.md` | P1 |
Success Criteria:
- ✅ All HTTP requests create spans
- ✅ LLM calls instrumented with token counts
- ✅ Traces viewable in Jaeger
- ✅ No performance degradation
### Phase 4: MCP Integration (2-3 days)
Milestone: M4-MoltBot
| Task | Files | Priority |
|---|---|---|
| Port MCP hub | `mcp/mcp-hub.service.ts` | P0 |
| Port stdio transport | `mcp/stdio-transport.ts` | P0 |
| Create tool registry | `mcp/tool-registry.ts` | P0 |
| Connect to Brain API | Integration with #22 | P0 |
| Add MCP endpoints | `mcp/mcp.controller.ts` | P1 |
| Port tests | `mcp/*.spec.ts` | P1 |
Success Criteria:
- ✅ MCP servers can register
- ✅ Tools discoverable by agents
- ✅ Brain query accessible via MCP
- ✅ Tests pass
### Phase 5: Workspace LLM Config (1-2 days)
Milestone: M3-Features
| Task | Files | Priority |
|---|---|---|
| Extend workspace schema | `prisma/schema.prisma` (WorkspaceSettings) | P0 |
| Add settings service | `workspaces/workspace-settings.service.ts` | P0 |
| Update workspace UI | Frontend settings page | P1 |
| Add provider selection API | `workspaces/workspaces.controller.ts` | P0 |
Success Criteria:
- ✅ Workspaces can select LLM provider
- ✅ Workspaces can set default personality
- ✅ Settings persist correctly
- ✅ Multi-tenant isolation maintained
## Gap Analysis

### What Mosaic Stack Already Has ✅
- Workspace isolation (RLS)
- Teams and RBAC
- Knowledge management with pgvector
- Kanban, Gantt, Dashboard widgets
- Basic Ollama integration
- BetterAuth + Authentik ready
- Socket.io real-time updates
### What jarvis r1 Provides ✅
- Multi-provider LLM (Ollama, Claude, OpenAI)
- Database-backed configuration (no YAML)
- OpenTelemetry tracing
- Personality system backend
- MCP Phase 1 infrastructure
- Hot reload capability
- Multi-instance pattern (system + user-scoped)
### Combined Platform Advantages 🚀
- Enterprise multi-tenancy + flexible LLM
- Semantic search + multi-provider chat
- Agent orchestration + MCP tools
- Observability from day one
- Workspace-scoped AI configuration
## Data Migration
Source: ~/src/jarvis-brain/data/*.json (JSON files)
Target: Mosaic Stack PostgreSQL database
Migration Script Location: apps/api/src/scripts/migrate-jarvis-brain.ts
Mapping:
| jarvis-brain | Mosaic Stack | Notes |
|---|---|---|
| `tasks.json` | `Task` table | Map domain → workspace |
| `events.json` | `Event` table | Calendar integration |
| `projects.json` | `Project` table | Direct mapping |
| `agents.json` | `Agent`/`AgentSession` | Active sessions |
| `tickets.json` | Custom `Ticket` table | GLPI sync |
| `verizon-*.json` | `KnowledgeEntry` | Flatten to knowledge items |
Related Issues:
- #30 Migration scripts from jarvis-brain
- #31 Data validation and integrity checks
- #32 Parallel operation testing
Milestone: M5-Migration (0.1.0 MVP)
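The "Map domain → workspace" step is the only non-mechanical part of the mapping table above. A sketch of how the migration script might handle it, collecting unmapped domains for the validation pass (#31) instead of failing midway — the types and function are illustrative, not the actual script:

```typescript
// Sketch of the domain → workspace mapping step for tasks.json.
// Unmapped domains are reported rather than thrown, so a dry run
// can surface every gap in one pass.

interface JarvisTask {
  id: string;
  title: string;
  domain: string;
}

interface MappedTask {
  id: string;
  title: string;
  workspaceId: string;
}

export function mapTasks(
  tasks: JarvisTask[],
  domainToWorkspace: Record<string, string>,
): { mapped: MappedTask[]; unmappedDomains: string[] } {
  const mapped: MappedTask[] = [];
  const unmapped = new Set<string>();
  for (const t of tasks) {
    const workspaceId = domainToWorkspace[t.domain];
    if (workspaceId) mapped.push({ id: t.id, title: t.title, workspaceId });
    else unmapped.add(t.domain);
  }
  return { mapped, unmappedDomains: [...unmapped] };
}
```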
## Integration with Existing Milestones
| Migration Phase | Milestone | Issues |
|---|---|---|
| LLM Abstraction | M3-Features | #21 Ollama integration |
| Personality Backend | M3-Features | #82 Personality Module |
| OpenTelemetry | M3-Features | (New infrastructure) |
| MCP Integration | M4-MoltBot | #22 Brain API, #23-27 Skills |
| Workspace LLM Config | M3-Features | (Workspace enhancement) |
| Data Migration | M5-Migration | #30, #31, #32 |
## Recommended Execution Strategy

### Option A: Sequential (Conservative)
- Phase 1 (LLM) → Phase 2 (Personality) → Phase 3 (OTEL) → Phase 4 (MCP) → Phase 5 (Workspace)
- Timeline: ~15-18 days
- Advantage: Each phase fully tested before next
- Disadvantage: Slower overall
### Option B: Parallel (Aggressive)
- Agent 1: Phase 1 (LLM Abstraction)
- Agent 2: Phase 2 (Personality Backend)
- Agent 3: Phase 3 (OpenTelemetry)
- Agent 4: Phase 4 (MCP Integration)
- Timeline: ~7-10 days
- Advantage: Fast completion
- Disadvantage: Integration complexity, merge conflicts
### Option C: Hybrid (Recommended)
- Week 1: Phase 1 (LLM) + Phase 2 (Personality) in parallel
- Week 2: Phase 3 (OTEL) + Phase 4 (MCP) in parallel
- Week 3: Phase 5 (Workspace) + Integration testing
- Timeline: ~12-14 days
- Advantage: Balanced speed and stability
## Dependencies

```
┌─────────────────────────────┐
│  Phase 1: LLM Abstraction   │
│        (Foundation)         │
└──────────┬──────────────────┘
           │
     ┌─────┴─────┐
     │           │
     ▼           ▼
┌─────────┐ ┌──────────┐
│ Phase 2 │ │ Phase 3  │
│ Persona-│ │   OTEL   │
│ lity    │ │          │
└─────────┘ └────┬─────┘
                 │
                 ▼
          ┌──────────────┐
          │   Phase 4    │
          │     MCP      │
          └──────┬───────┘
                 │
                 ▼
          ┌──────────────┐
          │   Phase 5    │
          │  Workspace   │
          │   Config     │
          └──────────────┘
```
Key Dependencies:
- Phases 2 and 3 depend on Phase 1 (LLM providers must exist)
- Phase 4 can run in parallel with Phase 3 (independent)
- Phase 5 depends on Phase 1, 2 (needs providers + personalities)
## Testing Strategy

### Unit Tests
- All providers implement interface correctly
- Manager handles provider selection
- Personality templates render
- OTEL spans created/exported
### Integration Tests
- End-to-end LLM calls through manager
- Workspace settings override system defaults
- MCP tools callable from agents
- Telemetry spans link correctly
### Migration Tests
- All jarvis-brain data imports without errors
- Referential integrity maintained
- No data loss
- Domain → Workspace mapping correct
## Risks & Mitigations
| Risk | Impact | Mitigation |
|---|---|---|
| API breaking changes between NestJS 10→11 | High | Review migration guide, test thoroughly |
| Prisma schema conflicts | Medium | Use separate migration for new tables |
| Performance regression from OTEL | Medium | Benchmark before/after, make OTEL optional |
| jarvis-brain data corruption on migration | High | Backup all JSON files, dry-run script first |
| Provider API rate limits during testing | Low | Use mocks in tests, real providers in E2E only |
## Success Criteria

### Phase 1 Complete ✅
- Can switch between Ollama, Claude, OpenAI via UI
- No hardcoded provider configurations
- All providers pass test suite
- Existing chat functionality unbroken
### Phase 2 Complete ✅
- Personality UI functional end-to-end
- Personalities persist to database
- Chat uses selected personality
- Prompt templates support variables
### Phase 3 Complete ✅
- Traces visible in Jaeger
- LLM calls show token counts
- HTTP requests traced automatically
- No >5% performance overhead
### Phase 4 Complete ✅
- MCP servers can register
- Brain query accessible via MCP
- Agents can discover tools
- All jarvis r1 MCP tests pass
### Phase 5 Complete ✅
- Workspaces can configure LLM provider
- Settings isolated per workspace
- Multi-tenant configuration verified
- No cross-workspace data leakage
### Migration Complete ✅
- All jarvis-brain data in PostgreSQL
- Data validation passes (#31)
- Parallel operation verified (#32)
- Ready for 0.1.0 MVP release
## Files to Create

### Backend (apps/api/src/)
```
llm/
├─ providers/
│  ├─ base-llm-provider.interface.ts
│  ├─ ollama.provider.ts
│  ├─ openai.provider.ts
│  └─ claude.provider.ts
├─ llm.manager.ts
├─ llm.service.ts (refactor existing)
└─ llm.controller.ts

personality/
├─ personality.entity.ts
├─ personality.service.ts
├─ personality.controller.ts
├─ personality-prompt.builder.ts
└─ dto/
   ├─ create-personality.dto.ts
   └─ update-personality.dto.ts

telemetry/
├─ telemetry.service.ts
├─ telemetry.interceptor.ts
├─ llm-telemetry.decorator.ts
└─ span-context.service.ts

mcp/
├─ mcp-hub.service.ts
├─ mcp-server.interface.ts
├─ stdio-transport.ts
└─ tool-registry.ts

workspaces/
└─ workspace-settings.service.ts (extend existing)

scripts/
└─ migrate-jarvis-brain.ts
```
### Schema (apps/api/prisma/)
```
migrations/
└─ YYYYMMDDHHMMSS_add_llm_and_personality/
   └─ migration.sql

schema.prisma
└─ Add models: LlmProviderInstance, Personality, WorkspaceSettings
```
### Documentation (docs/)
```
3-architecture/
├─ llm-provider-architecture.md
├─ telemetry.md
└─ mcp-integration.md

2-development/
└─ testing-llm-providers.md
```
## Changelog
| Date | Change |
|---|---|
| 2026-01-30 | Created migration plan from jarvis r1 exploration |