docs(#1): SDK integration guide, API reference, and CI pipeline
Checks: ci/woodpecker/push/woodpecker pipeline failed

- Rewrite README with quick start, config table, prediction usage, API version note
- Add docs/integration-guide.md with Next.js and Node.js examples, env-specific
  config, error handling patterns, batch behavior, and API version compatibility
- Add docs/api-reference.md with full reference for all exported classes, methods,
  types, and enums
- Add .woodpecker.yml with quality gates (lint, typecheck, format, security audit,
  test with coverage) and npm publish to Gitea registry
- Add AGENTS.md and update CLAUDE.md with project conventions

Fixes #1

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
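The quality gates listed in the commit message could be expressed in Woodpecker along these lines (a hypothetical sketch for illustration only; the step names, Node image tag, and npm script names are assumptions, not the contents of this commit's actual `.woodpecker.yml`):

```yaml
# Hypothetical Woodpecker pipeline sketch; step/script names are assumptions.
steps:
  quality:
    image: node:20
    commands:
      - npm ci
      - npm run lint                 # lint gate
      - npm run typecheck            # typecheck gate
      - npm run format:check         # format gate
      - npm audit --audit-level=high # security audit gate
      - npm test -- --coverage       # tests with coverage
  publish:
    image: node:20
    commands:
      - npm publish
    when:
      event: tag
```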
2026-02-14 22:38:19 -06:00
parent 177720e523
commit 231a799a46
6 changed files with 1303 additions and 52 deletions

README.md

@@ -2,7 +2,9 @@
 TypeScript client SDK for [Mosaic Stack Telemetry](https://tel.mosaicstack.dev). Reports task-completion metrics from AI coding harnesses and queries crowd-sourced predictions.
 
-**Zero runtime dependencies** — uses native `fetch`, `crypto.randomUUID()`, and `setInterval`.
+**Zero runtime dependencies** — uses native `fetch`, `crypto.randomUUID()`, and `setInterval`. Requires Node.js 18+.
+**Targets Mosaic Telemetry API v1** (`/v1/` endpoints, event schema version `1.0`).
 
 ## Installation
@@ -13,17 +15,26 @@ npm install @mosaicstack/telemetry-client
 
 ## Quick Start
 
 ```typescript
-import { TelemetryClient, TaskType, Complexity, Harness, Provider, Outcome } from '@mosaicstack/telemetry-client';
+import {
+  TelemetryClient,
+  TaskType,
+  Complexity,
+  Harness,
+  Provider,
+  Outcome,
+  QualityGate,
+} from '@mosaicstack/telemetry-client';
 
+// 1. Create and start the client
 const client = new TelemetryClient({
-  serverUrl: 'https://tel.mosaicstack.dev',
-  apiKey: 'your-64-char-hex-api-key',
-  instanceId: 'your-instance-uuid',
+  serverUrl: 'https://tel-api.mosaicstack.dev',
+  apiKey: process.env.TELEMETRY_API_KEY!,
+  instanceId: process.env.TELEMETRY_INSTANCE_ID!,
 });
-client.start();
+client.start(); // begins background batch submission every 5 minutes
 
-// Build and track an event
+// 2. Build and track an event
 const event = client.eventBuilder.build({
   task_duration_ms: 45000,
   task_type: TaskType.IMPLEMENTATION,
@@ -31,83 +42,101 @@ const event = client.eventBuilder.build({
   harness: Harness.CLAUDE_CODE,
   model: 'claude-sonnet-4-5-20250929',
   provider: Provider.ANTHROPIC,
-  estimated_input_tokens: 5000,
-  estimated_output_tokens: 2000,
-  actual_input_tokens: 5500,
-  actual_output_tokens: 2200,
-  estimated_cost_usd_micros: 30000,
-  actual_cost_usd_micros: 33000,
+  estimated_input_tokens: 105000,
+  estimated_output_tokens: 45000,
+  actual_input_tokens: 112340,
+  actual_output_tokens: 38760,
+  estimated_cost_usd_micros: 630000,
+  actual_cost_usd_micros: 919200,
   quality_gate_passed: true,
-  quality_gates_run: [],
+  quality_gates_run: [QualityGate.BUILD, QualityGate.LINT, QualityGate.TEST],
   quality_gates_failed: [],
-  context_compactions: 0,
+  context_compactions: 2,
   context_rotations: 0,
-  context_utilization_final: 0.4,
+  context_utilization_final: 0.72,
   outcome: Outcome.SUCCESS,
   retry_count: 0,
   language: 'typescript',
   repo_size_category: 'medium',
 });
-client.track(event);
+client.track(event); // queues the event (never throws)
 
-// When shutting down
-await client.stop();
-```
-
-## Querying Predictions
-
-```typescript
-const query = {
+// 3. Query predictions
+const prediction = client.getPrediction({
   task_type: TaskType.IMPLEMENTATION,
   model: 'claude-sonnet-4-5-20250929',
   provider: Provider.ANTHROPIC,
   complexity: Complexity.MEDIUM,
-};
-
-// Fetch from server and cache locally
-await client.refreshPredictions([query]);
-
-// Get cached prediction (returns null if not cached)
-const prediction = client.getPrediction(query);
+});
 
 if (prediction?.prediction) {
   console.log('Median input tokens:', prediction.prediction.input_tokens.median);
   console.log('Median cost (microdollars):', prediction.prediction.cost_usd_micros.median);
 }
+
+// 4. Shut down gracefully (flushes remaining events)
+await client.stop();
+```
 ## Configuration
 
-```typescript
-const client = new TelemetryClient({
-  serverUrl: 'https://tel.mosaicstack.dev', // Required
-  apiKey: 'your-api-key', // Required (64-char hex)
-  instanceId: 'your-uuid', // Required
+| Option | Type | Default | Description |
+|--------|------|---------|-------------|
+| `serverUrl` | `string` | **required** | Telemetry API base URL |
+| `apiKey` | `string` | **required** | Bearer token for authentication |
+| `instanceId` | `string` | **required** | UUID identifying this instance |
+| `enabled` | `boolean` | `true` | Set `false` to disable — `track()` becomes a no-op |
+| `submitIntervalMs` | `number` | `300_000` | Background flush interval (5 min) |
+| `maxQueueSize` | `number` | `1000` | Max queued events before FIFO eviction |
+| `batchSize` | `number` | `100` | Events per batch submission (server max: 100) |
+| `requestTimeoutMs` | `number` | `10_000` | HTTP request timeout |
+| `predictionCacheTtlMs` | `number` | `21_600_000` | Prediction cache TTL (6 hours) |
+| `dryRun` | `boolean` | `false` | Log events instead of sending them |
+| `maxRetries` | `number` | `3` | Retry attempts with exponential backoff |
+| `onError` | `(error: Error) => void` | silent | Error callback |
-
-  // Optional
-  enabled: true, // Set false to disable (track() becomes no-op)
-  submitIntervalMs: 300_000, // Background flush interval (default: 5 min)
-  maxQueueSize: 1000, // Max queued events (default: 1000, FIFO eviction)
-  batchSize: 100, // Events per batch (default/max: 100)
-  requestTimeoutMs: 10_000, // HTTP timeout (default: 10s)
-  predictionCacheTtlMs: 21_600_000, // Prediction cache TTL (default: 6 hours)
-  dryRun: false, // Log events instead of sending
-  maxRetries: 3, // Retry attempts on failure
-  onError: (err) => console.error(err), // Error callback
-});
-```
+## Querying Predictions
+
+Predictions are crowd-sourced token/cost/duration estimates from the telemetry API. The SDK caches them locally with a configurable TTL.
+
+```typescript
+// Fetch predictions from the server and cache locally
+await client.refreshPredictions([
+  { task_type: TaskType.IMPLEMENTATION, model: 'claude-sonnet-4-5-20250929', provider: Provider.ANTHROPIC, complexity: Complexity.MEDIUM },
+  { task_type: TaskType.TESTING, model: 'claude-haiku-4-5-20251001', provider: Provider.ANTHROPIC, complexity: Complexity.LOW },
+]);
+
+// Read from cache (returns null if not cached or expired)
+const prediction = client.getPrediction({
+  task_type: TaskType.IMPLEMENTATION,
+  model: 'claude-sonnet-4-5-20250929',
+  provider: Provider.ANTHROPIC,
+  complexity: Complexity.MEDIUM,
+});
+
+if (prediction?.prediction) {
+  console.log('Median input tokens:', prediction.prediction.input_tokens.median);
+  console.log('Median cost ($):', prediction.prediction.cost_usd_micros.median / 1_000_000);
+  console.log('Confidence:', prediction.metadata.confidence);
+}
+```
 ## Dry-Run Mode
 
-For testing without sending data:
+For development and testing without sending data to the server:
 
 ```typescript
 const client = new TelemetryClient({
-  serverUrl: 'https://tel.mosaicstack.dev',
+  serverUrl: 'https://tel-api.mosaicstack.dev',
   apiKey: 'test-key',
   instanceId: 'test-uuid',
   dryRun: true,
 });
 ```
+
+In dry-run mode, `track()` still queues events and `flush()` still runs, but the `BatchSubmitter` returns synthetic `accepted` responses without making HTTP calls.
+
+## Documentation
+
+- **[Integration Guide](docs/integration-guide.md)** — Next.js and Node.js examples, environment-specific configuration, error handling patterns
+- **[API Reference](docs/api-reference.md)** — Full reference for all exported classes, methods, types, and enums
+
 ## License
 
 MPL-2.0
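The README's config table describes `maxQueueSize` as bounding the event queue with FIFO eviction, and `track()` as never throwing. A simplified, self-contained sketch of that behavior (hypothetical illustration only, not the SDK's actual internals; `BoundedQueue` is an invented name):

```typescript
// Simplified bounded FIFO queue mirroring the documented maxQueueSize
// behavior: when full, the oldest event is evicted so push() never fails.
class BoundedQueue<T> {
  private items: T[] = [];

  constructor(private maxSize: number) {}

  push(item: T): void {
    if (this.items.length >= this.maxSize) {
      this.items.shift(); // evict oldest event (FIFO)
    }
    this.items.push(item);
  }

  // Remove and return up to batchSize of the oldest queued items.
  drain(batchSize: number): T[] {
    return this.items.splice(0, batchSize);
  }

  get size(): number {
    return this.items.length;
  }
}

const q = new BoundedQueue<number>(3);
[1, 2, 3, 4, 5].forEach((n) => q.push(n)); // 1 and 2 are evicted
console.log(q.drain(2)); // logs [ 3, 4 ]
console.log(q.size);     // logs 1
```

Evicting the oldest events first is the usual choice for telemetry buffers: under sustained backpressure, recent events are more likely to still be relevant when the next batch is submitted.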