
# @mosaicstack/telemetry-client

TypeScript client SDK for Mosaic Stack Telemetry. Reports task-completion metrics from AI coding harnesses and queries crowd-sourced predictions.

Zero runtime dependencies: uses native `fetch`, `crypto.randomUUID()`, and `setInterval`. Requires Node.js 18+.

Targets Mosaic Telemetry API v1 (`/v1/` endpoints, event schema version `1.0`).

## Installation

```sh
npm install @mosaicstack/telemetry-client
```

## Quick Start

```ts
import {
  TelemetryClient,
  TaskType,
  Complexity,
  Harness,
  Provider,
  Outcome,
  QualityGate,
} from '@mosaicstack/telemetry-client';

// 1. Create and start the client
const client = new TelemetryClient({
  serverUrl: 'https://tel-api.mosaicstack.dev',
  apiKey: process.env.TELEMETRY_API_KEY!,
  instanceId: process.env.TELEMETRY_INSTANCE_ID!,
});

client.start(); // begins background batch submission every 5 minutes

// 2. Build and track an event
const event = client.eventBuilder.build({
  task_duration_ms: 45000,
  task_type: TaskType.IMPLEMENTATION,
  complexity: Complexity.MEDIUM,
  harness: Harness.CLAUDE_CODE,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  estimated_input_tokens: 105000,
  estimated_output_tokens: 45000,
  actual_input_tokens: 112340,
  actual_output_tokens: 38760,
  estimated_cost_usd_micros: 630000,
  actual_cost_usd_micros: 919200,
  quality_gate_passed: true,
  quality_gates_run: [QualityGate.BUILD, QualityGate.LINT, QualityGate.TEST],
  quality_gates_failed: [],
  context_compactions: 2,
  context_rotations: 0,
  context_utilization_final: 0.72,
  outcome: Outcome.SUCCESS,
  retry_count: 0,
  language: 'typescript',
  repo_size_category: 'medium',
});

client.track(event); // queues the event (never throws)

// 3. Read a cached prediction (returns null until refreshPredictions has populated the cache)
const prediction = client.getPrediction({
  task_type: TaskType.IMPLEMENTATION,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  complexity: Complexity.MEDIUM,
});

// 4. Shut down gracefully (flushes remaining events)
await client.stop();
```

## Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `serverUrl` | `string` | required | Telemetry API base URL |
| `apiKey` | `string` | required | Bearer token for authentication |
| `instanceId` | `string` | required | UUID identifying this instance |
| `enabled` | `boolean` | `true` | Set `false` to disable; `track()` becomes a no-op |
| `submitIntervalMs` | `number` | `300_000` | Background flush interval (5 min) |
| `maxQueueSize` | `number` | `1000` | Max queued events before FIFO eviction |
| `batchSize` | `number` | `100` | Events per batch submission (server max: 100) |
| `requestTimeoutMs` | `number` | `10_000` | HTTP request timeout |
| `predictionCacheTtlMs` | `number` | `21_600_000` | Prediction cache TTL (6 hours) |
| `dryRun` | `boolean` | `false` | Log events instead of sending them |
| `maxRetries` | `number` | `3` | Retry attempts with exponential backoff |
| `onError` | `(error: Error) => void` | silent | Error callback |

## Querying Predictions

Predictions are crowd-sourced token/cost/duration estimates from the telemetry API. The SDK caches them locally with a configurable TTL.

```ts
// Fetch predictions from the server and cache locally
await client.refreshPredictions([
  { task_type: TaskType.IMPLEMENTATION, model: 'claude-sonnet-4-5-20250929', provider: Provider.ANTHROPIC, complexity: Complexity.MEDIUM },
  { task_type: TaskType.TESTING, model: 'claude-haiku-4-5-20251001', provider: Provider.ANTHROPIC, complexity: Complexity.LOW },
]);

// Read from cache (returns null if not cached or expired)
const prediction = client.getPrediction({
  task_type: TaskType.IMPLEMENTATION,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  complexity: Complexity.MEDIUM,
});

if (prediction?.prediction) {
  console.log('Median input tokens:', prediction.prediction.input_tokens.median);
  console.log('Median cost ($):', prediction.prediction.cost_usd_micros.median / 1_000_000);
  console.log('Confidence:', prediction.metadata.confidence);
}
```
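The cache-with-TTL behavior behind `getPrediction` and `predictionCacheTtlMs` can be illustrated with a small standalone sketch (the SDK's actual cache internals are not shown here; the `now` parameter exists only to make expiry testable):

```ts
// Illustrative TTL cache mirroring the documented semantics: entries
// expire ttlMs after being set, and reads past expiry return null.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  set(key: string, value: V, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now: number = Date.now()): V | null {
    const entry = this.entries.get(key);
    if (!entry || now > entry.expiresAt) {
      this.entries.delete(key); // evict stale entry
      return null;
    }
    return entry.value;
  }
}
```

With the default `predictionCacheTtlMs` of `21_600_000`, a prediction cached at noon would stop being returned after 6 p.m. and `getPrediction` would return `null` until the next `refreshPredictions` call.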

## Dry-Run Mode

For development and testing without sending data to the server:

```ts
const client = new TelemetryClient({
  serverUrl: 'https://tel-api.mosaicstack.dev',
  apiKey: 'test-key',
  instanceId: 'test-uuid',
  dryRun: true,
});
```

In dry-run mode, `track()` still queues events and `flush()` still runs, but the `BatchSubmitter` returns synthetic accepted responses without making HTTP calls.
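The short-circuit can be sketched as a standalone function. This is not the SDK's `BatchSubmitter` itself, and the `/v1/events/batch` path and response shape are assumptions for illustration (the README only states that the API uses `/v1/` endpoints):

```ts
interface SubmitResult {
  accepted: number;
  dryRun: boolean;
}

// Hypothetical dry-run branch: when dryRun is set, submission returns a
// synthetic "accepted" response instead of calling fetch.
async function submitBatch(
  events: unknown[],
  opts: { dryRun: boolean; serverUrl: string; apiKey: string },
): Promise<SubmitResult> {
  if (opts.dryRun) {
    // No HTTP call: pretend the server accepted every event.
    return { accepted: events.length, dryRun: true };
  }
  const res = await fetch(`${opts.serverUrl}/v1/events/batch`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${opts.apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ events }),
  });
  return { accepted: ((await res.json()) as { accepted: number }).accepted, dryRun: false };
}
```

Because the dry-run path never touches the network, tests can exercise the full queue-and-flush cycle with no server running.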

## Documentation

- [Integration Guide](docs/integration-guide.md): Next.js and Node.js examples, environment-specific configuration, error handling patterns
- [API Reference](docs/api-reference.md): Full reference for all exported classes, methods, types, and enums

## License

MPL-2.0