Track LLM task completions via Mosaic Telemetry #371
## Summary
Instrument the LLM service layer to emit `TaskCompletionEvent`s through the Mosaic Telemetry client after each LLM interaction completes. This is the primary data source for token usage tracking, cost analysis, and prediction model training.

## Context
The telemetry system tracks AI coding task completions with rich metadata. The LLM service (`apps/api/src/llm/`) is where all provider calls happen, making it the natural integration point.

Note: this is separate from the existing OpenTelemetry (OTEL) instrumentation, which handles request tracing/spans. Mosaic Telemetry tracks higher-level task completion metrics for cost forecasting and quality analysis.
## Requirements

### Event Construction

After each LLM call completes, build a `TaskCompletionEvent` using `EventBuilder`:
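A minimal sketch of the intended call site. The real `EventBuilder` API in the Mosaic Telemetry client may differ; the builder methods, field names other than `actual_input_tokens`/`actual_output_tokens`, and the model name below are illustrative assumptions.

```typescript
type TaskType = 'implementation' | 'planning' | 'code_review';

interface TaskCompletionEvent {
  taskType: TaskType;
  model: string;
  actual_input_tokens: number;
  actual_output_tokens: number;
}

// Minimal stand-in for the real EventBuilder, to show the shape of the call.
class EventBuilder {
  private event: Partial<TaskCompletionEvent> = {};
  taskType(t: TaskType): this { this.event.taskType = t; return this; }
  model(m: string): this { this.event.model = m; return this; }
  tokens(input: number, output: number): this {
    this.event.actual_input_tokens = input;
    this.event.actual_output_tokens = output;
    return this;
  }
  build(): TaskCompletionEvent { return this.event as TaskCompletionEvent; }
}

// After an LLM call resolves, build and emit the event:
const event = new EventBuilder()
  .taskType('implementation')
  .model('example-model')
  .tokens(1200, 450)
  .build();
```

Emitting the built event should be fire-and-forget so telemetry failures never fail the underlying LLM call.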
### Integration Points

- `LlmService.chat()`: standard chat completions
- `LlmService.chatStream()`: streaming completions (aggregate tokens after the stream ends)
- `LlmService.embed()`: embedding operations
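The streaming case is the subtle one: usage totals are typically only available once the stream is fully consumed, so the event must be emitted after iteration ends, not when the call starts. A sketch under that assumption (the chunk shape and `track` callback are hypothetical, not the real `LlmService` types):

```typescript
// Assumed chunk shape: text deltas, with usage present only on the final chunk.
interface StreamChunk {
  text: string;
  usage?: { inputTokens: number; outputTokens: number };
}

async function* exampleStream(): AsyncGenerator<StreamChunk> {
  yield { text: 'Hello' };
  yield { text: ' world', usage: { inputTokens: 10, outputTokens: 2 } };
}

// Consume the whole stream, then emit telemetry exactly once.
async function consumeAndTrack(
  stream: AsyncGenerator<StreamChunk>,
  track: (inputTokens: number, outputTokens: number) => void,
): Promise<string> {
  let text = '';
  let usage = { inputTokens: 0, outputTokens: 0 };
  for await (const chunk of stream) {
    text += chunk.text;
    if (chunk.usage) usage = chunk.usage; // final chunk carries the totals
  }
  // Fire-and-forget: a telemetry failure must not fail the completed LLM call.
  try { track(usage.inputTokens, usage.outputTokens); } catch { /* swallow */ }
  return text;
}
```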
### Provider-Specific Token Extraction

Each provider returns usage data differently:

- Anthropic: `response.usage.input_tokens`, `response.usage.output_tokens`
- OpenAI: `response.usage.prompt_tokens`, `response.usage.completion_tokens`
- Ollama: `response.eval_count`, `response.prompt_eval_count`

Normalize all of these to the common `actual_input_tokens` / `actual_output_tokens` fields.
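The normalization step could be a single exhaustive switch over a tagged union. The field names match the providers' usage payloads; the union shape and function name are illustrative, not the service's actual types.

```typescript
interface NormalizedUsage {
  actual_input_tokens: number;
  actual_output_tokens: number;
}

// Hypothetical tagged union over the three provider response shapes.
type ProviderUsage =
  | { provider: 'anthropic'; usage: { input_tokens: number; output_tokens: number } }
  | { provider: 'openai'; usage: { prompt_tokens: number; completion_tokens: number } }
  | { provider: 'ollama'; prompt_eval_count: number; eval_count: number };

function normalizeUsage(r: ProviderUsage): NormalizedUsage {
  switch (r.provider) {
    case 'anthropic':
      return {
        actual_input_tokens: r.usage.input_tokens,
        actual_output_tokens: r.usage.output_tokens,
      };
    case 'openai':
      return {
        actual_input_tokens: r.usage.prompt_tokens,
        actual_output_tokens: r.usage.completion_tokens,
      };
    case 'ollama':
      // Ollama reports prompt tokens as prompt_eval_count, output as eval_count.
      return {
        actual_input_tokens: r.prompt_eval_count,
        actual_output_tokens: r.eval_count,
      };
  }
}
```

An exhaustive switch lets the compiler flag any newly added provider that lacks a normalization branch.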
### Cost Calculation
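The acceptance criteria mention `llm-cost-table.ts` with microdollar pricing; one way that could be structured is a per-model rate table applied to the normalized token counts. The model name and rates below are made up, not the real table contents.

```typescript
// Rates in microdollars (1e-6 USD) per token, kept as integers to avoid
// floating-point drift when summing many small charges.
const COST_TABLE: Record<string, { inputPerToken: number; outputPerToken: number }> = {
  'example-model': { inputPerToken: 3, outputPerToken: 15 },
};

function costMicrodollars(model: string, inputTokens: number, outputTokens: number): number {
  const rate = COST_TABLE[model];
  if (!rate) return 0; // unknown model: report zero cost rather than fail the call
  return inputTokens * rate.inputPerToken + outputTokens * rate.outputPerToken;
}
```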
### Task Type Inference

Map the calling context to a `TaskType`:

- `implementation` or `planning` (chosen based on the system prompt)
- `planning`
- `implementation`
- `code_review`
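For the case where the type depends on the system prompt, the inference could be a simple keyword heuristic. The keywords and function name here are placeholders, not the implemented logic:

```typescript
type TaskType = 'implementation' | 'planning' | 'code_review';

// Sketch: pick a TaskType from the system prompt. Order matters — more
// specific signals (review) are checked before broader ones (plan).
function inferTaskType(systemPrompt: string): TaskType {
  const p = systemPrompt.toLowerCase();
  if (p.includes('review')) return 'code_review';
  if (p.includes('plan')) return 'planning';
  return 'implementation'; // default for ordinary coding prompts
}
```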
## Acceptance Criteria

Completed in commit `639881f` on `feature/m10-telemetry`: created `LlmTelemetryTrackerService` with fire-and-forget tracking and `llm-cost-table.ts` with microdollar pricing; instrumented `LlmService` `chat`/`chatStream`/`embed`. 69 unit tests.