
# @mosaicstack/telemetry-client

TypeScript client SDK for Mosaic Stack Telemetry. Reports task-completion metrics from AI coding harnesses and queries crowd-sourced predictions.

Zero runtime dependencies — uses native fetch, crypto.randomUUID(), and setInterval. Requires Node.js 18+.

Targets Mosaic Telemetry API v1 (/v1/ endpoints, event schema version 1.0).

## Installation

```sh
# Latest stable release (from main)
npm install @mosaicstack/telemetry-client

# Latest dev build (from develop)
npm install @mosaicstack/telemetry-client@dev
```

The Gitea npm registry must be configured in `.npmrc`:

```ini
@mosaicstack:registry=https://git.mosaicstack.dev/api/packages/mosaic/npm/
```

## Versioning

| Branch | Dist-tag | Version format | Example |
| --- | --- | --- | --- |
| `main` | `latest` | `{version}` | `0.1.0` |
| `develop` | `dev` | `{version}-dev.{YYYYMMDDHHmmss}` | `0.1.0-dev.20260215050000` |
Every push to develop publishes a new prerelease. Stable releases publish from main only when the version in package.json changes.
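
The dev suffix is a UTC timestamp appended to the base version. As an illustration of the format (a sketch only; the actual CI publish script may differ), it can be derived like this:

```typescript
// Build a dev prerelease version like 0.1.0-dev.20260215050000.
// Illustrative sketch of the documented format, not the CI script itself.
function devVersion(baseVersion: string, date: Date): string {
  const pad = (n: number) => String(n).padStart(2, '0');
  const ts =
    date.getUTCFullYear().toString() +
    pad(date.getUTCMonth() + 1) +
    pad(date.getUTCDate()) +
    pad(date.getUTCHours()) +
    pad(date.getUTCMinutes()) +
    pad(date.getUTCSeconds());
  return `${baseVersion}-dev.${ts}`;
}

console.log(devVersion('0.1.0', new Date(Date.UTC(2026, 1, 15, 5, 0, 0))));
// 0.1.0-dev.20260215050000
```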

## Quick Start

```ts
import {
  TelemetryClient,
  TaskType,
  Complexity,
  Harness,
  Provider,
  Outcome,
  QualityGate,
} from '@mosaicstack/telemetry-client';

// 1. Create and start the client
const client = new TelemetryClient({
  serverUrl: 'https://tel-api.mosaicstack.dev',
  apiKey: process.env.TELEMETRY_API_KEY!,
  instanceId: process.env.TELEMETRY_INSTANCE_ID!,
});

client.start(); // begins background batch submission every 5 minutes

// 2. Build and track an event
const event = client.eventBuilder.build({
  task_duration_ms: 45000,
  task_type: TaskType.IMPLEMENTATION,
  complexity: Complexity.MEDIUM,
  harness: Harness.CLAUDE_CODE,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  estimated_input_tokens: 105000,
  estimated_output_tokens: 45000,
  actual_input_tokens: 112340,
  actual_output_tokens: 38760,
  estimated_cost_usd_micros: 630000,
  actual_cost_usd_micros: 919200,
  quality_gate_passed: true,
  quality_gates_run: [QualityGate.BUILD, QualityGate.LINT, QualityGate.TEST],
  quality_gates_failed: [],
  context_compactions: 2,
  context_rotations: 0,
  context_utilization_final: 0.72,
  outcome: Outcome.SUCCESS,
  retry_count: 0,
  language: 'typescript',
  repo_size_category: 'medium',
});

client.track(event); // queues the event (never throws)

// 3. Query predictions
const prediction = client.getPrediction({
  task_type: TaskType.IMPLEMENTATION,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  complexity: Complexity.MEDIUM,
});

// 4. Shut down gracefully (flushes remaining events)
await client.stop();
```

## Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `serverUrl` | `string` | required | Telemetry API base URL |
| `apiKey` | `string` | required | Bearer token for authentication |
| `instanceId` | `string` | required | UUID identifying this instance |
| `enabled` | `boolean` | `true` | Set `false` to disable; `track()` becomes a no-op |
| `submitIntervalMs` | `number` | `300_000` | Background flush interval (5 min) |
| `maxQueueSize` | `number` | `1000` | Max queued events before FIFO eviction |
| `batchSize` | `number` | `100` | Events per batch submission (server max: 100) |
| `requestTimeoutMs` | `number` | `10_000` | HTTP request timeout |
| `predictionCacheTtlMs` | `number` | `21_600_000` | Prediction cache TTL (6 hours) |
| `dryRun` | `boolean` | `false` | Log events instead of sending them |
| `maxRetries` | `number` | `3` | Retry attempts with exponential backoff |
| `onError` | `(error: Error) => void` | silent | Error callback |

## Querying Predictions

Predictions are crowd-sourced token/cost/duration estimates from the telemetry API. The SDK caches them locally with a configurable TTL.

```ts
// Fetch predictions from the server and cache locally
await client.refreshPredictions([
  { task_type: TaskType.IMPLEMENTATION, model: 'claude-sonnet-4-5-20250929', provider: Provider.ANTHROPIC, complexity: Complexity.MEDIUM },
  { task_type: TaskType.TESTING, model: 'claude-haiku-4-5-20251001', provider: Provider.ANTHROPIC, complexity: Complexity.LOW },
]);

// Read from cache (returns null if not cached or expired)
const prediction = client.getPrediction({
  task_type: TaskType.IMPLEMENTATION,
  model: 'claude-sonnet-4-5-20250929',
  provider: Provider.ANTHROPIC,
  complexity: Complexity.MEDIUM,
});

if (prediction?.prediction) {
  console.log('Median input tokens:', prediction.prediction.input_tokens.median);
  console.log('Median cost ($):', prediction.prediction.cost_usd_micros.median / 1_000_000);
  console.log('Confidence:', prediction.metadata.confidence);
}
```
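
Conceptually, the cache is keyed by the four prediction dimensions and entries expire after `predictionCacheTtlMs`. The following sketch illustrates that idea under stated assumptions (`PredictionCache` and its key shape are hypothetical; the SDK's internals may differ):

```typescript
// Illustrative TTL cache keyed by (task_type, model, provider, complexity).
// Hypothetical sketch of the caching idea, not the SDK's implementation.
interface PredictionKey {
  task_type: string;
  model: string;
  provider: string;
  complexity: string;
}

class PredictionCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private readonly ttlMs: number) {}

  private keyOf(k: PredictionKey): string {
    return [k.task_type, k.model, k.provider, k.complexity].join('|');
  }

  set(k: PredictionKey, value: V, now = Date.now()): void {
    this.entries.set(this.keyOf(k), { value, expiresAt: now + this.ttlMs });
  }

  // Returns null on a miss or an expired entry, mirroring getPrediction()
  get(k: PredictionKey, now = Date.now()): V | null {
    const entry = this.entries.get(this.keyOf(k));
    if (!entry || now > entry.expiresAt) return null;
    return entry.value;
  }
}
```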

## Dry-Run Mode

For development and testing without sending data to the server:

```ts
const client = new TelemetryClient({
  serverUrl: 'https://tel-api.mosaicstack.dev',
  apiKey: 'test-key',
  instanceId: 'test-uuid',
  dryRun: true,
});
```

In dry-run mode, track() still queues events and flush() still runs, but the BatchSubmitter returns synthetic accepted responses without making HTTP calls.
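
The pattern can be sketched as a submitter that keeps the normal interface but short-circuits the network call (an illustration of the idea only; `DryRunSubmitter` and `SubmitResult` are hypothetical names, not the SDK's API):

```typescript
// Illustrative dry-run pattern: same submit() contract, no HTTP.
// Hypothetical sketch, not the SDK's actual BatchSubmitter.
interface SubmitResult {
  accepted: number;
  rejected: number;
}

interface Submitter {
  submit(events: unknown[]): Promise<SubmitResult>;
}

class DryRunSubmitter implements Submitter {
  async submit(events: unknown[]): Promise<SubmitResult> {
    console.log(`[dry-run] would submit ${events.length} event(s)`);
    return { accepted: events.length, rejected: 0 }; // synthetic success
  }
}
```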

## Documentation

- Integration Guide — Next.js and Node.js examples, environment-specific configuration, error handling patterns
- API Reference — Full reference for all exported classes, methods, types, and enums

## License

MPL-2.0