feat: TypeScript telemetry client SDK v0.1.0

Standalone npm package (@mosaicstack/telemetry-client) for reporting
task-completion telemetry and querying predictions from the Mosaic
Stack Telemetry server.

- TelemetryClient with setInterval-based background flush
- EventQueue (bounded FIFO array)
- BatchSubmitter with native fetch, exponential backoff, Retry-After
- PredictionCache (Map + TTL)
- EventBuilder with auto-generated event_id/timestamp
- Zero runtime dependencies (Node 18+ native APIs)
- 43 tests, 86% branch coverage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 23:25:31 -06:00
commit 177720e523
26 changed files with 5643 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
node_modules/
dist/
coverage/
*.tsbuildinfo
.env

CLAUDE.md Normal file

@@ -0,0 +1,30 @@
# @mosaicstack/telemetry-client
TypeScript/JavaScript client SDK for Mosaic Stack Telemetry. Zero runtime dependencies.
## Commands
```bash
npm install # Install dependencies
npm run typecheck # Type check
npm run lint # Lint
npm run format:check # Format check
npm test # Run tests
npm run test:coverage # Tests with coverage (85% threshold)
npm run build # Build to dist/
```
## Architecture
- `client.ts` — TelemetryClient (main public API, setInterval-based background flush)
- `queue.ts` — EventQueue (bounded FIFO array)
- `submitter.ts` — BatchSubmitter (native fetch, exponential backoff, Retry-After)
- `prediction-cache.ts` — PredictionCache (Map + TTL)
- `event-builder.ts` — EventBuilder (auto-generates event_id, timestamp)
- `types/` — Standalone type definitions matching server API schema v1.0
## Key Patterns
- `track()` never throws — catches everything, routes to `onError` callback
- Zero runtime deps: uses native `fetch` (Node 18+), `crypto.randomUUID()`, `setInterval`
- All types are standalone — no dependency on the telemetry server package
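The never-throws contract can be sketched as a small standalone function (illustrative names only, not the SDK's internals): failures are swallowed, normalized to `Error`, and routed to an `onError` callback, while a full queue evicts its oldest entry FIFO-style.

```typescript
// Minimal sketch of the track() never-throws pattern combined with
// bounded FIFO eviction. Names here are illustrative, not SDK code.
type ErrorHandler = (error: Error) => void;

function safeTrack<T>(queue: T[], maxSize: number, event: T, onError: ErrorHandler): void {
  try {
    if (queue.length >= maxSize) {
      queue.shift(); // evict oldest, mirroring EventQueue
    }
    queue.push(event);
  } catch (error) {
    // Normalize non-Error throwables before reporting
    onError(error instanceof Error ? error : new Error(String(error)));
  }
}

const events: string[] = [];
safeTrack(events, 2, 'e1', console.error);
safeTrack(events, 2, 'e2', console.error);
safeTrack(events, 2, 'e3', console.error); // evicts 'e1'
console.log(events); // ['e2', 'e3']
```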

README.md Normal file

@@ -0,0 +1,113 @@
# @mosaicstack/telemetry-client
TypeScript client SDK for [Mosaic Stack Telemetry](https://tel.mosaicstack.dev). Reports task-completion metrics from AI coding harnesses and queries crowd-sourced predictions.
**Zero runtime dependencies** — uses native `fetch`, `crypto.randomUUID()`, and `setInterval`.
## Installation
```bash
npm install @mosaicstack/telemetry-client
```
## Quick Start
```typescript
import { TelemetryClient, TaskType, Complexity, Harness, Provider, Outcome } from '@mosaicstack/telemetry-client';
const client = new TelemetryClient({
serverUrl: 'https://tel.mosaicstack.dev',
apiKey: 'your-64-char-hex-api-key',
instanceId: 'your-instance-uuid',
});
client.start();
// Build and track an event
const event = client.eventBuilder.build({
task_duration_ms: 45000,
task_type: TaskType.IMPLEMENTATION,
complexity: Complexity.MEDIUM,
harness: Harness.CLAUDE_CODE,
model: 'claude-sonnet-4-5-20250929',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 5000,
estimated_output_tokens: 2000,
actual_input_tokens: 5500,
actual_output_tokens: 2200,
estimated_cost_usd_micros: 30000,
actual_cost_usd_micros: 33000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.4,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
client.track(event);
// When shutting down
await client.stop();
```
## Querying Predictions
```typescript
const query = {
task_type: TaskType.IMPLEMENTATION,
model: 'claude-sonnet-4-5-20250929',
provider: Provider.ANTHROPIC,
complexity: Complexity.MEDIUM,
};
// Fetch from server and cache locally
await client.refreshPredictions([query]);
// Get cached prediction (returns null if not cached)
const prediction = client.getPrediction(query);
if (prediction?.prediction) {
console.log('Median input tokens:', prediction.prediction.input_tokens.median);
console.log('Median cost (microdollars):', prediction.prediction.cost_usd_micros.median);
}
```
## Configuration
```typescript
const client = new TelemetryClient({
serverUrl: 'https://tel.mosaicstack.dev', // Required
apiKey: 'your-api-key', // Required (64-char hex)
instanceId: 'your-uuid', // Required
// Optional
enabled: true, // Set false to disable (track() becomes no-op)
submitIntervalMs: 300_000, // Background flush interval (default: 5 min)
maxQueueSize: 1000, // Max queued events (default: 1000, FIFO eviction)
batchSize: 100, // Events per batch (default/max: 100)
requestTimeoutMs: 10_000, // HTTP timeout (default: 10s)
predictionCacheTtlMs: 21_600_000, // Prediction cache TTL (default: 6 hours)
dryRun: false, // Simulate successful submission without sending data
maxRetries: 3, // Retry attempts on failure
onError: (err) => console.error(err), // Error callback
});
```
## Dry-Run Mode
For testing without sending data:
```typescript
const client = new TelemetryClient({
serverUrl: 'https://tel.mosaicstack.dev',
apiKey: 'test-key',
instanceId: 'test-uuid',
dryRun: true,
});
```
## License
MPL-2.0

eslint.config.js Normal file

@@ -0,0 +1,25 @@
import eslint from '@eslint/js';
import tseslint from '@typescript-eslint/eslint-plugin';
import tsparser from '@typescript-eslint/parser';
export default [
eslint.configs.recommended,
{
files: ['src/**/*.ts', 'tests/**/*.ts'],
languageOptions: {
parser: tsparser,
parserOptions: {
ecmaVersion: 2022,
sourceType: 'module',
},
},
plugins: {
'@typescript-eslint': tseslint,
},
rules: {
...tseslint.configs.recommended.rules,
'no-unused-vars': 'off',
'@typescript-eslint/no-unused-vars': ['error', { argsIgnorePattern: '^_' }],
},
},
];

package-lock.json generated Normal file

File diff suppressed because it is too large.

package.json Normal file

@@ -0,0 +1,42 @@
{
"name": "@mosaicstack/telemetry-client",
"version": "0.1.0",
"description": "TypeScript client SDK for Mosaic Stack Telemetry",
"type": "module",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
"exports": {
".": {
"types": "./dist/index.d.ts",
"import": "./dist/index.js"
}
},
"engines": {
"node": ">=18"
},
"files": [
"dist"
],
"license": "MPL-2.0",
"scripts": {
"build": "tsc -p tsconfig.build.json",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage",
"lint": "eslint src/ tests/",
"lint:fix": "eslint src/ tests/ --fix",
"format": "prettier --write src/ tests/",
"format:check": "prettier --check src/ tests/",
"typecheck": "tsc --noEmit"
},
"dependencies": {},
"devDependencies": {
"typescript": "^5.5.0",
"vitest": "^2.0.0",
"@vitest/coverage-v8": "^2.0.0",
"eslint": "^9.0.0",
"@typescript-eslint/eslint-plugin": "^8.0.0",
"@typescript-eslint/parser": "^8.0.0",
"prettier": "^3.0.0"
}
}


@@ -0,0 +1,65 @@
/**
* Validates that the TypeScript types match the expected server schema.
* This script is meant to be run manually when the server schema changes.
*/
import {
TaskType,
Complexity,
Harness,
Provider,
QualityGate,
Outcome,
RepoSizeCategory,
} from '../src/types/events.js';
function validateEnum(name: string, enumObj: Record<string, string>, expected: string[]): boolean {
const values = Object.values(enumObj);
const missing = expected.filter((v) => !values.includes(v));
const extra = values.filter((v) => !expected.includes(v));
if (missing.length > 0 || extra.length > 0) {
console.error(`${name} mismatch:`);
if (missing.length) console.error(` Missing: ${missing.join(', ')}`);
if (extra.length) console.error(` Extra: ${extra.join(', ')}`);
return false;
}
console.log(`${name}: OK (${values.length} values)`);
return true;
}
let allValid = true;
allValid = validateEnum('TaskType', TaskType, [
'planning', 'implementation', 'code_review', 'testing', 'debugging',
'refactoring', 'documentation', 'configuration', 'security_audit', 'unknown',
]) && allValid;
allValid = validateEnum('Complexity', Complexity, ['low', 'medium', 'high', 'critical']) && allValid;
allValid = validateEnum('Harness', Harness, [
'claude_code', 'opencode', 'kilo_code', 'aider', 'api_direct',
'ollama_local', 'custom', 'unknown',
]) && allValid;
allValid = validateEnum('Provider', Provider, [
'anthropic', 'openai', 'openrouter', 'ollama', 'google', 'mistral', 'custom', 'unknown',
]) && allValid;
allValid = validateEnum('QualityGate', QualityGate, [
'build', 'lint', 'test', 'coverage', 'typecheck', 'security',
]) && allValid;
allValid = validateEnum('Outcome', Outcome, ['success', 'failure', 'partial', 'timeout']) && allValid;
allValid = validateEnum('RepoSizeCategory', RepoSizeCategory, [
'tiny', 'small', 'medium', 'large', 'huge',
]) && allValid;
if (allValid) {
console.log('\nAll enums validated successfully.');
} else {
console.error('\nSchema validation failed.');
process.exit(1);
}

src/client.ts Normal file

@@ -0,0 +1,158 @@
import { TelemetryConfig, ResolvedConfig, resolveConfig } from './config.js';
import { EventQueue } from './queue.js';
import { BatchSubmitter } from './submitter.js';
import { PredictionCache } from './prediction-cache.js';
import { EventBuilder } from './event-builder.js';
import { TaskCompletionEvent } from './types/events.js';
import { PredictionQuery, PredictionResponse } from './types/predictions.js';
import { BatchPredictionResponse } from './types/common.js';
/**
* Main telemetry client. Queues task-completion events for background
* batch submission and provides access to crowd-sourced predictions.
*/
export class TelemetryClient {
private readonly config: ResolvedConfig;
private readonly queue: EventQueue;
private readonly submitter: BatchSubmitter;
private readonly predictionCache: PredictionCache;
private readonly _eventBuilder: EventBuilder;
private intervalId: ReturnType<typeof setInterval> | null = null;
private _isRunning = false;
constructor(config: TelemetryConfig) {
this.config = resolveConfig(config);
this.queue = new EventQueue(this.config.maxQueueSize);
this.submitter = new BatchSubmitter(this.config);
this.predictionCache = new PredictionCache(this.config.predictionCacheTtlMs);
this._eventBuilder = new EventBuilder(this.config);
}
/** Get the event builder for constructing events. */
get eventBuilder(): EventBuilder {
return this._eventBuilder;
}
/** Start background submission via setInterval. Idempotent. */
start(): void {
if (this._isRunning) {
return;
}
this._isRunning = true;
this.intervalId = setInterval(() => {
void this.flush();
}, this.config.submitIntervalMs);
}
/** Stop background submission, flush remaining events. */
async stop(): Promise<void> {
if (!this._isRunning) {
return;
}
this._isRunning = false;
if (this.intervalId !== null) {
clearInterval(this.intervalId);
this.intervalId = null;
}
await this.flush();
}
/** Queue an event for batch submission. Never throws. */
track(event: TaskCompletionEvent): void {
try {
if (!this.config.enabled) {
return;
}
this.queue.enqueue(event);
} catch (error) {
this.handleError(error);
}
}
/** Get a cached prediction. Returns null if not cached/expired. */
getPrediction(query: PredictionQuery): PredictionResponse | null {
return this.predictionCache.get(query);
}
/** Force-refresh predictions from server. */
async refreshPredictions(queries: PredictionQuery[]): Promise<void> {
try {
const url = `${this.config.serverUrl}/v1/predictions/batch`;
const controller = new AbortController();
const timeout = setTimeout(
() => controller.abort(),
this.config.requestTimeoutMs,
);
try {
const response = await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${this.config.apiKey}`,
},
body: JSON.stringify({ queries }),
signal: controller.signal,
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const body = (await response.json()) as BatchPredictionResponse;
for (let i = 0; i < queries.length; i++) {
if (body.results[i]) {
this.predictionCache.set(queries[i], body.results[i]);
}
}
} finally {
clearTimeout(timeout);
}
} catch (error) {
this.handleError(error);
}
}
/** Number of events currently queued. */
get queueSize(): number {
return this.queue.size;
}
/** Whether the client is currently running. */
get isRunning(): boolean {
return this._isRunning;
}
/** Flush the queue by draining and submitting batches. */
private async flush(): Promise<void> {
while (!this.queue.isEmpty) {
const batch = this.queue.drain(this.config.batchSize);
if (batch.length === 0) break;
try {
const result = await this.submitter.submit(batch);
if (!result.success) {
// Re-enqueue events that failed to submit
this.queue.prepend(batch);
if (result.error) {
this.handleError(result.error);
}
break; // Stop flushing on failure to avoid loops
}
} catch (error) {
this.queue.prepend(batch);
this.handleError(error);
break;
}
}
}
private handleError(error: unknown): void {
const err = error instanceof Error ? error : new Error(String(error));
try {
this.config.onError(err);
} catch {
// Prevent error handler from throwing
}
}
}

src/config.ts Normal file

@@ -0,0 +1,62 @@
export interface TelemetryConfig {
/** Base URL of the telemetry server (e.g., "https://tel.mosaicstack.dev") */
serverUrl: string;
/** API key for authentication (64-char hex string) */
apiKey: string;
/** Instance UUID for this client */
instanceId: string;
/** Whether telemetry collection is enabled. Default: true */
enabled?: boolean;
/** Interval between automatic batch submissions in ms. Default: 300_000 (5 min) */
submitIntervalMs?: number;
/** Maximum number of events held in queue. Default: 1000 */
maxQueueSize?: number;
/** Maximum events per batch submission. Default: 100 */
batchSize?: number;
/** HTTP request timeout in ms. Default: 10_000 */
requestTimeoutMs?: number;
/** TTL for cached predictions in ms. Default: 21_600_000 (6 hours) */
predictionCacheTtlMs?: number;
/** If true, simulate successful submission without sending events. Default: false */
dryRun?: boolean;
/** Maximum number of retries on failure. Default: 3 */
maxRetries?: number;
/** Optional callback invoked on errors */
onError?: (error: Error) => void;
}
export interface ResolvedConfig {
serverUrl: string;
apiKey: string;
instanceId: string;
enabled: boolean;
submitIntervalMs: number;
maxQueueSize: number;
batchSize: number;
requestTimeoutMs: number;
predictionCacheTtlMs: number;
dryRun: boolean;
maxRetries: number;
onError: (error: Error) => void;
}
const DEFAULT_ON_ERROR = (_error: Error): void => {
// Silent by default
};
export function resolveConfig(config: TelemetryConfig): ResolvedConfig {
return {
serverUrl: config.serverUrl.replace(/\/+$/, ''),
apiKey: config.apiKey,
instanceId: config.instanceId,
enabled: config.enabled ?? true,
submitIntervalMs: config.submitIntervalMs ?? 300_000,
maxQueueSize: config.maxQueueSize ?? 1000,
batchSize: config.batchSize ?? 100,
requestTimeoutMs: config.requestTimeoutMs ?? 10_000,
predictionCacheTtlMs: config.predictionCacheTtlMs ?? 21_600_000,
dryRun: config.dryRun ?? false,
maxRetries: config.maxRetries ?? 3,
onError: config.onError ?? DEFAULT_ON_ERROR,
};
}

src/event-builder.ts Normal file

@@ -0,0 +1,62 @@
import { ResolvedConfig } from './config.js';
import {
Complexity,
Harness,
Outcome,
Provider,
QualityGate,
RepoSizeCategory,
TaskCompletionEvent,
TaskType,
} from './types/events.js';
export interface EventBuilderParams {
task_duration_ms: number;
task_type: TaskType;
complexity: Complexity;
harness: Harness;
model: string;
provider: Provider;
estimated_input_tokens: number;
estimated_output_tokens: number;
actual_input_tokens: number;
actual_output_tokens: number;
estimated_cost_usd_micros: number;
actual_cost_usd_micros: number;
quality_gate_passed: boolean;
quality_gates_run: QualityGate[];
quality_gates_failed: QualityGate[];
context_compactions: number;
context_rotations: number;
context_utilization_final: number;
outcome: Outcome;
retry_count: number;
language?: string | null;
repo_size_category?: RepoSizeCategory | null;
}
/**
* Convenience builder for TaskCompletionEvent objects.
* Auto-generates event_id, timestamp, instance_id, and schema_version.
*/
export class EventBuilder {
private readonly config: ResolvedConfig;
constructor(config: ResolvedConfig) {
this.config = config;
}
/**
* Build a complete TaskCompletionEvent from the given parameters.
* Automatically fills in event_id, timestamp, instance_id, and schema_version.
*/
build(params: EventBuilderParams): TaskCompletionEvent {
return {
instance_id: this.config.instanceId,
event_id: crypto.randomUUID(),
schema_version: '1.0',
timestamp: new Date().toISOString(),
...params,
};
}
}

src/index.ts Normal file

@@ -0,0 +1,36 @@
export { TelemetryClient } from './client.js';
export { EventBuilder } from './event-builder.js';
export { EventQueue } from './queue.js';
export { BatchSubmitter } from './submitter.js';
export { PredictionCache } from './prediction-cache.js';
export { resolveConfig } from './config.js';
export type { TelemetryConfig, ResolvedConfig } from './config.js';
export type { EventBuilderParams } from './event-builder.js';
export type { SubmitResult } from './submitter.js';
// Re-export all types
export {
TaskType,
Complexity,
Harness,
Provider,
QualityGate,
Outcome,
RepoSizeCategory,
} from './types/index.js';
export type {
TaskCompletionEvent,
TokenDistribution,
CorrectionFactors,
QualityPrediction,
PredictionData,
PredictionMetadata,
PredictionResponse,
PredictionQuery,
BatchEventRequest,
BatchEventResult,
BatchEventResponse,
BatchPredictionRequest,
BatchPredictionResponse,
} from './types/index.js';

src/prediction-cache.ts Normal file

@@ -0,0 +1,59 @@
import { PredictionQuery, PredictionResponse } from './types/predictions.js';
interface CacheEntry {
response: PredictionResponse;
expiresAt: number;
}
/**
* In-memory cache for prediction responses with TTL-based expiry.
*/
export class PredictionCache {
private readonly cache = new Map<string, CacheEntry>();
private readonly ttlMs: number;
constructor(ttlMs: number) {
this.ttlMs = ttlMs;
}
/** Build a deterministic cache key from a prediction query. */
private buildKey(query: PredictionQuery): string {
return `${query.task_type}:${query.model}:${query.provider}:${query.complexity}`;
}
/** Get a cached prediction. Returns null if not cached or expired. */
get(query: PredictionQuery): PredictionResponse | null {
const key = this.buildKey(query);
const entry = this.cache.get(key);
if (!entry) {
return null;
}
if (Date.now() > entry.expiresAt) {
this.cache.delete(key);
return null;
}
return entry.response;
}
/** Store a prediction response with TTL. */
set(query: PredictionQuery, response: PredictionResponse): void {
const key = this.buildKey(query);
this.cache.set(key, {
response,
expiresAt: Date.now() + this.ttlMs,
});
}
/** Clear all cached predictions. */
clear(): void {
this.cache.clear();
}
/** Number of entries currently in cache (including potentially expired). */
get size(): number {
return this.cache.size;
}
}

src/queue.ts Normal file

@@ -0,0 +1,49 @@
import { TaskCompletionEvent } from './types/events.js';
/**
* Bounded FIFO event queue. When the queue is full, the oldest events
* are evicted to make room for new ones.
*/
export class EventQueue {
private readonly items: TaskCompletionEvent[] = [];
private readonly maxSize: number;
constructor(maxSize: number) {
this.maxSize = maxSize;
}
/** Add an event to the queue. Evicts the oldest event if at capacity. */
enqueue(event: TaskCompletionEvent): void {
if (this.items.length >= this.maxSize) {
this.items.shift();
}
this.items.push(event);
}
/**
* Remove and return up to `maxItems` events from the front of the queue.
* Returns an empty array if the queue is empty.
*/
drain(maxItems: number): TaskCompletionEvent[] {
const count = Math.min(maxItems, this.items.length);
return this.items.splice(0, count);
}
/** Prepend events back to the front of the queue (for re-enqueue on failure). */
prepend(events: TaskCompletionEvent[]): void {
// If prepending would exceed max, only keep as many as will fit
const available = this.maxSize - this.items.length;
const toAdd = events.slice(0, available);
this.items.unshift(...toAdd);
}
/** Current number of events in the queue. */
get size(): number {
return this.items.length;
}
/** Whether the queue is empty. */
get isEmpty(): boolean {
return this.items.length === 0;
}
}

src/submitter.ts Normal file

@@ -0,0 +1,138 @@
import { ResolvedConfig } from './config.js';
import { TaskCompletionEvent } from './types/events.js';
import { BatchEventResponse } from './types/common.js';
const SDK_VERSION = '0.1.0';
const USER_AGENT = `mosaic-telemetry-client-js/${SDK_VERSION}`;
export interface SubmitResult {
success: boolean;
response?: BatchEventResponse;
retryAfterMs?: number;
error?: Error;
}
/**
* Handles HTTP submission of event batches to the telemetry server.
* Supports exponential backoff with jitter and Retry-After header handling.
*/
export class BatchSubmitter {
private readonly config: ResolvedConfig;
constructor(config: ResolvedConfig) {
this.config = config;
}
/**
* Submit a batch of events to the server.
* Retries with exponential backoff on transient failures.
*/
async submit(events: TaskCompletionEvent[]): Promise<SubmitResult> {
if (this.config.dryRun) {
return {
success: true,
response: {
accepted: events.length,
rejected: 0,
results: events.map((e) => ({
event_id: e.event_id,
status: 'accepted' as const,
})),
},
};
}
let lastError: Error | undefined;
for (let attempt = 0; attempt <= this.config.maxRetries; attempt++) {
if (attempt > 0) {
const delayMs = this.backoffDelay(attempt);
await this.sleep(delayMs);
}
try {
const result = await this.attemptSubmit(events);
if (result.retryAfterMs !== undefined) {
// 429: wait and retry
await this.sleep(result.retryAfterMs);
continue;
}
return result;
} catch (error) {
lastError = error instanceof Error ? error : new Error(String(error));
// Continue to next retry attempt
}
}
return {
success: false,
error: lastError ?? new Error('Max retries exceeded'),
};
}
private async attemptSubmit(
events: TaskCompletionEvent[],
): Promise<SubmitResult> {
const url = `${this.config.serverUrl}/v1/events/batch`;
const controller = new AbortController();
const timeout = setTimeout(
() => controller.abort(),
this.config.requestTimeoutMs,
);
try {
const response = await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${this.config.apiKey}`,
'User-Agent': USER_AGENT,
},
body: JSON.stringify({ events }),
signal: controller.signal,
});
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
// Retry-After may be an HTTP-date rather than delta-seconds; fall back to 5s
const seconds = retryAfter ? parseInt(retryAfter, 10) : NaN;
const retryAfterMs = Number.isFinite(seconds) ? seconds * 1000 : 5000;
return { success: false, retryAfterMs };
}
}
if (response.status === 403) {
return {
success: false,
error: new Error(
'Forbidden: API key does not match instance_id (HTTP 403)',
),
};
}
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const body = (await response.json()) as BatchEventResponse;
return { success: true, response: body };
} finally {
clearTimeout(timeout);
}
}
/**
* Exponential backoff with jitter.
* Base = 1s, max = 60s.
*/
private backoffDelay(attempt: number): number {
const baseMs = 1000;
const maxMs = 60_000;
const exponential = Math.min(maxMs, baseMs * Math.pow(2, attempt - 1));
const jitter = Math.random() * exponential * 0.5;
return exponential + jitter;
}
private sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
}
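As a quick sanity check on the schedule above: the deterministic part of `backoffDelay` (jitter removed) doubles from 1s per attempt and caps at 60s. A minimal standalone sketch, not the SDK code itself:

```typescript
// Deterministic part of BatchSubmitter.backoffDelay: base 1s, doubling
// per attempt, capped at 60s. Jitter (up to +50%) is omitted here.
function baseBackoffMs(attempt: number): number {
  const baseMs = 1000;
  const maxMs = 60_000;
  return Math.min(maxMs, baseMs * Math.pow(2, attempt - 1));
}

const schedule = [1, 2, 3, 4, 7].map(baseBackoffMs);
console.log(schedule); // [1000, 2000, 4000, 8000, 60000]
```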

src/types/common.ts Normal file

@@ -0,0 +1,26 @@
import { TaskCompletionEvent } from './events.js';
import { PredictionQuery, PredictionResponse } from './predictions.js';
export interface BatchEventRequest {
events: TaskCompletionEvent[];
}
export interface BatchEventResult {
event_id: string;
status: 'accepted' | 'rejected';
error?: string | null;
}
export interface BatchEventResponse {
accepted: number;
rejected: number;
results: BatchEventResult[];
}
export interface BatchPredictionRequest {
queries: PredictionQuery[];
}
export interface BatchPredictionResponse {
results: PredictionResponse[];
}

src/types/events.ts Normal file

@@ -0,0 +1,94 @@
export enum TaskType {
PLANNING = 'planning',
IMPLEMENTATION = 'implementation',
CODE_REVIEW = 'code_review',
TESTING = 'testing',
DEBUGGING = 'debugging',
REFACTORING = 'refactoring',
DOCUMENTATION = 'documentation',
CONFIGURATION = 'configuration',
SECURITY_AUDIT = 'security_audit',
UNKNOWN = 'unknown',
}
export enum Complexity {
LOW = 'low',
MEDIUM = 'medium',
HIGH = 'high',
CRITICAL = 'critical',
}
export enum Harness {
CLAUDE_CODE = 'claude_code',
OPENCODE = 'opencode',
KILO_CODE = 'kilo_code',
AIDER = 'aider',
API_DIRECT = 'api_direct',
OLLAMA_LOCAL = 'ollama_local',
CUSTOM = 'custom',
UNKNOWN = 'unknown',
}
export enum Provider {
ANTHROPIC = 'anthropic',
OPENAI = 'openai',
OPENROUTER = 'openrouter',
OLLAMA = 'ollama',
GOOGLE = 'google',
MISTRAL = 'mistral',
CUSTOM = 'custom',
UNKNOWN = 'unknown',
}
export enum QualityGate {
BUILD = 'build',
LINT = 'lint',
TEST = 'test',
COVERAGE = 'coverage',
TYPECHECK = 'typecheck',
SECURITY = 'security',
}
export enum Outcome {
SUCCESS = 'success',
FAILURE = 'failure',
PARTIAL = 'partial',
TIMEOUT = 'timeout',
}
export enum RepoSizeCategory {
TINY = 'tiny',
SMALL = 'small',
MEDIUM = 'medium',
LARGE = 'large',
HUGE = 'huge',
}
export interface TaskCompletionEvent {
instance_id: string;
event_id: string;
schema_version: string;
timestamp: string;
task_duration_ms: number;
task_type: TaskType;
complexity: Complexity;
harness: Harness;
model: string;
provider: Provider;
estimated_input_tokens: number;
estimated_output_tokens: number;
actual_input_tokens: number;
actual_output_tokens: number;
estimated_cost_usd_micros: number;
actual_cost_usd_micros: number;
quality_gate_passed: boolean;
quality_gates_run: QualityGate[];
quality_gates_failed: QualityGate[];
context_compactions: number;
context_rotations: number;
context_utilization_final: number;
outcome: Outcome;
retry_count: number;
language?: string | null;
repo_size_category?: RepoSizeCategory | null;
}

src/types/index.ts Normal file

@@ -0,0 +1,28 @@
export {
TaskType,
Complexity,
Harness,
Provider,
QualityGate,
Outcome,
RepoSizeCategory,
type TaskCompletionEvent,
} from './events.js';
export {
type TokenDistribution,
type CorrectionFactors,
type QualityPrediction,
type PredictionData,
type PredictionMetadata,
type PredictionResponse,
type PredictionQuery,
} from './predictions.js';
export {
type BatchEventRequest,
type BatchEventResult,
type BatchEventResponse,
type BatchPredictionRequest,
type BatchPredictionResponse,
} from './common.js';

src/types/predictions.ts Normal file

@@ -0,0 +1,50 @@
import { Complexity, Provider, TaskType } from './events.js';
export interface TokenDistribution {
p10: number;
p25: number;
median: number;
p75: number;
p90: number;
}
export interface CorrectionFactors {
input: number;
output: number;
}
export interface QualityPrediction {
gate_pass_rate: number;
success_rate: number;
}
export interface PredictionData {
input_tokens: TokenDistribution;
output_tokens: TokenDistribution;
cost_usd_micros: Record<string, number>;
duration_ms: Record<string, number>;
correction_factors: CorrectionFactors;
quality: QualityPrediction;
}
export interface PredictionMetadata {
sample_size: number;
fallback_level: number;
confidence: 'none' | 'low' | 'medium' | 'high';
last_updated: string | null;
dimensions_matched?: Record<string, string | null> | null;
fallback_note?: string | null;
cache_hit: boolean;
}
export interface PredictionResponse {
prediction: PredictionData | null;
metadata: PredictionMetadata;
}
export interface PredictionQuery {
task_type: TaskType;
model: string;
provider: Provider;
complexity: Complexity;
}

tests/client.test.ts Normal file

@@ -0,0 +1,318 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { TelemetryClient } from '../src/client.js';
import { TelemetryConfig } from '../src/config.js';
import {
TaskCompletionEvent,
TaskType,
Complexity,
Harness,
Provider,
Outcome,
} from '../src/types/events.js';
import { PredictionQuery, PredictionResponse } from '../src/types/predictions.js';
function makeConfig(overrides: Partial<TelemetryConfig> = {}): TelemetryConfig {
return {
serverUrl: 'https://tel.example.com',
apiKey: 'a'.repeat(64),
instanceId: 'test-instance',
submitIntervalMs: 60_000,
maxQueueSize: 100,
batchSize: 10,
requestTimeoutMs: 5000,
dryRun: true, // Use dryRun by default in tests
...overrides,
};
}
function makeEvent(id = 'evt-1'): TaskCompletionEvent {
return {
instance_id: 'test-instance',
event_id: id,
schema_version: '1.0',
timestamp: new Date().toISOString(),
task_duration_ms: 5000,
task_type: TaskType.IMPLEMENTATION,
complexity: Complexity.MEDIUM,
harness: Harness.CLAUDE_CODE,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 1000,
estimated_output_tokens: 500,
actual_input_tokens: 1100,
actual_output_tokens: 550,
estimated_cost_usd_micros: 50000,
actual_cost_usd_micros: 55000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.5,
outcome: Outcome.SUCCESS,
retry_count: 0,
};
}
function makeQuery(): PredictionQuery {
return {
task_type: TaskType.IMPLEMENTATION,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
complexity: Complexity.MEDIUM,
};
}
function makePredictionResponse(): PredictionResponse {
return {
prediction: {
input_tokens: { p10: 500, p25: 750, median: 1000, p75: 1500, p90: 2000 },
output_tokens: { p10: 200, p25: 350, median: 500, p75: 750, p90: 1000 },
cost_usd_micros: { median: 50000 },
duration_ms: { median: 30000 },
correction_factors: { input: 1.1, output: 1.05 },
quality: { gate_pass_rate: 0.85, success_rate: 0.9 },
},
metadata: {
sample_size: 100,
fallback_level: 0,
confidence: 'high',
last_updated: new Date().toISOString(),
cache_hit: false,
},
};
}
describe('TelemetryClient', () => {
let fetchSpy: ReturnType<typeof vi.fn>;
beforeEach(() => {
vi.useFakeTimers();
fetchSpy = vi.fn();
vi.stubGlobal('fetch', fetchSpy);
});
afterEach(() => {
vi.useRealTimers();
vi.unstubAllGlobals();
});
describe('start/stop lifecycle', () => {
it('should start and stop cleanly', async () => {
const client = new TelemetryClient(makeConfig());
expect(client.isRunning).toBe(false);
client.start();
expect(client.isRunning).toBe(true);
await client.stop();
expect(client.isRunning).toBe(false);
});
it('should be idempotent on start', () => {
const client = new TelemetryClient(makeConfig());
client.start();
client.start(); // Should not throw or create double intervals
expect(client.isRunning).toBe(true);
});
it('should be idempotent on stop', async () => {
const client = new TelemetryClient(makeConfig());
await client.stop();
await client.stop(); // Should not throw
expect(client.isRunning).toBe(false);
});
it('should flush events on stop', async () => {
const client = new TelemetryClient(makeConfig());
client.start();
client.track(makeEvent('e1'));
client.track(makeEvent('e2'));
expect(client.queueSize).toBe(2);
await client.stop();
// In dryRun mode, flush succeeds and queue should be empty
expect(client.queueSize).toBe(0);
});
});
describe('track()', () => {
it('should queue events', () => {
const client = new TelemetryClient(makeConfig());
client.track(makeEvent('e1'));
client.track(makeEvent('e2'));
expect(client.queueSize).toBe(2);
});
it('should silently drop events when disabled', () => {
const client = new TelemetryClient(makeConfig({ enabled: false }));
client.track(makeEvent());
expect(client.queueSize).toBe(0);
});
it('should never throw even on internal error', () => {
const errorFn = vi.fn();
const client = new TelemetryClient(
makeConfig({ onError: errorFn, maxQueueSize: 0 }),
);
// This should not throw. maxQueueSize of 0 could cause issues
// but track() is designed to catch everything.
expect(() => client.track(makeEvent())).not.toThrow();
});
});
describe('predictions', () => {
it('should return null for uncached prediction', () => {
const client = new TelemetryClient(makeConfig());
const result = client.getPrediction(makeQuery());
expect(result).toBeNull();
});
it('should return cached prediction after refresh', async () => {
const predictionResponse = makePredictionResponse();
fetchSpy.mockResolvedValueOnce({
ok: true,
status: 200,
json: () =>
Promise.resolve({
results: [predictionResponse],
}),
});
const client = new TelemetryClient(makeConfig({ dryRun: false }));
const query = makeQuery();
await client.refreshPredictions([query]);
const result = client.getPrediction(query);
expect(result).toEqual(predictionResponse);
});
it('should handle refresh error gracefully', async () => {
fetchSpy.mockRejectedValueOnce(new Error('Network error'));
const errorFn = vi.fn();
const client = new TelemetryClient(
makeConfig({ dryRun: false, onError: errorFn }),
);
// Should not throw
await client.refreshPredictions([makeQuery()]);
expect(errorFn).toHaveBeenCalledWith(expect.any(Error));
});
it('should handle non-ok HTTP response on refresh', async () => {
fetchSpy.mockResolvedValueOnce({
ok: false,
status: 500,
statusText: 'Internal Server Error',
});
const errorFn = vi.fn();
const client = new TelemetryClient(
makeConfig({ dryRun: false, onError: errorFn }),
);
await client.refreshPredictions([makeQuery()]);
expect(errorFn).toHaveBeenCalledWith(expect.any(Error));
});
});
describe('background flush', () => {
it('should trigger flush on interval', async () => {
const client = new TelemetryClient(
makeConfig({ submitIntervalMs: 10_000 }),
);
client.start();
client.track(makeEvent('e1'));
expect(client.queueSize).toBe(1);
// Advance past submit interval
await vi.advanceTimersByTimeAsync(11_000);
// In dryRun mode, events should be flushed
expect(client.queueSize).toBe(0);
await client.stop();
});
});
describe('flush error handling', () => {
it('should re-enqueue events on submit failure', async () => {
// Use non-dryRun mode to actually hit the submitter
fetchSpy.mockResolvedValueOnce({
ok: false,
status: 500,
statusText: 'Internal Server Error',
});
const errorFn = vi.fn();
const client = new TelemetryClient(
makeConfig({ dryRun: false, maxRetries: 0, onError: errorFn }),
);
client.track(makeEvent('e1'));
expect(client.queueSize).toBe(1);
// Start and trigger flush
client.start();
await vi.advanceTimersByTimeAsync(70_000);
// Events should be re-enqueued after failure
expect(client.queueSize).toBeGreaterThan(0);
await client.stop();
});
it('should handle onError callback that throws', async () => {
const throwingErrorFn = () => {
throw new Error('Error handler broke');
};
const client = new TelemetryClient(
makeConfig({ onError: throwingErrorFn, enabled: false }),
);
    // This should not throw even though onError throws.
    // track() when disabled produces no error, so force the error
    // path via refreshPredictions with a rejected fetch instead.
fetchSpy.mockRejectedValueOnce(new Error('fail'));
await expect(client.refreshPredictions([makeQuery()])).resolves.not.toThrow();
});
});
describe('event builder', () => {
it('should expose an event builder', () => {
const client = new TelemetryClient(makeConfig());
expect(client.eventBuilder).toBeDefined();
const event = client.eventBuilder.build({
task_duration_ms: 1000,
task_type: TaskType.TESTING,
complexity: Complexity.LOW,
harness: Harness.AIDER,
model: 'gpt-4',
provider: Provider.OPENAI,
estimated_input_tokens: 100,
estimated_output_tokens: 50,
actual_input_tokens: 100,
actual_output_tokens: 50,
estimated_cost_usd_micros: 1000,
actual_cost_usd_micros: 1000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.3,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
expect(event.instance_id).toBe('test-instance');
expect(event.schema_version).toBe('1.0');
});
});
});

tests/event-builder.test.ts

@@ -0,0 +1,219 @@
import { describe, it, expect, vi, afterEach } from 'vitest';
import { EventBuilder } from '../src/event-builder.js';
import { ResolvedConfig } from '../src/config.js';
import {
TaskType,
Complexity,
Harness,
Provider,
Outcome,
QualityGate,
RepoSizeCategory,
} from '../src/types/events.js';
function makeConfig(): ResolvedConfig {
return {
serverUrl: 'https://tel.example.com',
apiKey: 'a'.repeat(64),
instanceId: 'my-instance-uuid',
enabled: true,
submitIntervalMs: 300_000,
maxQueueSize: 1000,
batchSize: 100,
requestTimeoutMs: 10_000,
predictionCacheTtlMs: 21_600_000,
dryRun: false,
maxRetries: 3,
onError: () => {},
};
}
describe('EventBuilder', () => {
afterEach(() => {
vi.restoreAllMocks();
});
it('should build a complete TaskCompletionEvent', () => {
const builder = new EventBuilder(makeConfig());
const event = builder.build({
task_duration_ms: 15000,
task_type: TaskType.IMPLEMENTATION,
complexity: Complexity.HIGH,
harness: Harness.CLAUDE_CODE,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 2000,
estimated_output_tokens: 1000,
actual_input_tokens: 2200,
actual_output_tokens: 1100,
estimated_cost_usd_micros: 100000,
actual_cost_usd_micros: 110000,
quality_gate_passed: true,
quality_gates_run: [QualityGate.BUILD, QualityGate.TEST, QualityGate.LINT],
quality_gates_failed: [],
context_compactions: 2,
context_rotations: 1,
context_utilization_final: 0.75,
outcome: Outcome.SUCCESS,
retry_count: 0,
language: 'typescript',
repo_size_category: RepoSizeCategory.MEDIUM,
});
expect(event.task_type).toBe(TaskType.IMPLEMENTATION);
expect(event.complexity).toBe(Complexity.HIGH);
expect(event.model).toBe('claude-3-opus');
expect(event.quality_gates_run).toEqual([
QualityGate.BUILD,
QualityGate.TEST,
QualityGate.LINT,
]);
expect(event.language).toBe('typescript');
expect(event.repo_size_category).toBe(RepoSizeCategory.MEDIUM);
});
it('should auto-generate event_id as UUID', () => {
const builder = new EventBuilder(makeConfig());
const event = builder.build({
task_duration_ms: 1000,
task_type: TaskType.TESTING,
complexity: Complexity.LOW,
harness: Harness.AIDER,
model: 'gpt-4',
provider: Provider.OPENAI,
estimated_input_tokens: 100,
estimated_output_tokens: 50,
actual_input_tokens: 100,
actual_output_tokens: 50,
estimated_cost_usd_micros: 1000,
actual_cost_usd_micros: 1000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.3,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
// UUID format: 8-4-4-4-12 hex chars
expect(event.event_id).toMatch(
/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/,
);
// Each event should get a unique ID
const event2 = builder.build({
task_duration_ms: 1000,
task_type: TaskType.TESTING,
complexity: Complexity.LOW,
harness: Harness.AIDER,
model: 'gpt-4',
provider: Provider.OPENAI,
estimated_input_tokens: 100,
estimated_output_tokens: 50,
actual_input_tokens: 100,
actual_output_tokens: 50,
estimated_cost_usd_micros: 1000,
actual_cost_usd_micros: 1000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.3,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
expect(event.event_id).not.toBe(event2.event_id);
});
it('should auto-set timestamp to ISO 8601', () => {
const now = new Date('2026-02-07T10:00:00.000Z');
vi.setSystemTime(now);
const builder = new EventBuilder(makeConfig());
const event = builder.build({
task_duration_ms: 1000,
task_type: TaskType.DEBUGGING,
complexity: Complexity.MEDIUM,
harness: Harness.OPENCODE,
model: 'claude-3-sonnet',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 500,
estimated_output_tokens: 200,
actual_input_tokens: 500,
actual_output_tokens: 200,
estimated_cost_usd_micros: 5000,
actual_cost_usd_micros: 5000,
quality_gate_passed: false,
quality_gates_run: [QualityGate.TEST],
quality_gates_failed: [QualityGate.TEST],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.4,
outcome: Outcome.FAILURE,
retry_count: 1,
});
expect(event.timestamp).toBe('2026-02-07T10:00:00.000Z');
});
it('should set instance_id from config', () => {
const config = makeConfig();
const builder = new EventBuilder(config);
const event = builder.build({
task_duration_ms: 1000,
task_type: TaskType.PLANNING,
complexity: Complexity.LOW,
harness: Harness.UNKNOWN,
model: 'test-model',
provider: Provider.UNKNOWN,
estimated_input_tokens: 0,
estimated_output_tokens: 0,
actual_input_tokens: 0,
actual_output_tokens: 0,
estimated_cost_usd_micros: 0,
actual_cost_usd_micros: 0,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
expect(event.instance_id).toBe('my-instance-uuid');
});
it('should set schema_version to 1.0', () => {
const builder = new EventBuilder(makeConfig());
const event = builder.build({
task_duration_ms: 1000,
task_type: TaskType.REFACTORING,
complexity: Complexity.CRITICAL,
harness: Harness.KILO_CODE,
model: 'gemini-pro',
provider: Provider.GOOGLE,
estimated_input_tokens: 3000,
estimated_output_tokens: 2000,
actual_input_tokens: 3000,
actual_output_tokens: 2000,
estimated_cost_usd_micros: 80000,
actual_cost_usd_micros: 80000,
quality_gate_passed: true,
quality_gates_run: [QualityGate.TYPECHECK],
quality_gates_failed: [],
context_compactions: 5,
context_rotations: 2,
context_utilization_final: 0.95,
outcome: Outcome.SUCCESS,
retry_count: 0,
});
expect(event.schema_version).toBe('1.0');
});
});
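The auto-generated fields verified above (unique `event_id`, ISO 8601 `timestamp`, `instance_id` from config, fixed `schema_version`) suggest a builder along these lines, using Node's native `crypto.randomUUID()`. A simplified sketch with illustrative type shapes, not the actual `event-builder.ts` source.

```typescript
import { randomUUID } from 'node:crypto';

// Simplified config/input shapes; the real types carry many more fields.
interface BuilderConfig {
  instanceId: string;
}

class EventBuilder<TInput extends Record<string, unknown>> {
  constructor(private readonly config: BuilderConfig) {}

  // Merge caller-supplied fields with the auto-generated envelope.
  build(input: TInput) {
    return {
      ...input,
      event_id: randomUUID(),              // unique per event
      timestamp: new Date().toISOString(), // ISO 8601
      instance_id: this.config.instanceId,
      schema_version: '1.0' as const,
    };
  }
}
```

Spreading `input` first means the envelope fields always win, so callers cannot accidentally override `event_id` or `schema_version`.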


@@ -0,0 +1,126 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { PredictionCache } from '../src/prediction-cache.js';
import { PredictionQuery, PredictionResponse } from '../src/types/predictions.js';
import { TaskType, Complexity, Provider } from '../src/types/events.js';
function makeQuery(overrides: Partial<PredictionQuery> = {}): PredictionQuery {
return {
task_type: TaskType.IMPLEMENTATION,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
complexity: Complexity.MEDIUM,
...overrides,
};
}
function makeResponse(sampleSize = 100): PredictionResponse {
return {
prediction: {
input_tokens: { p10: 500, p25: 750, median: 1000, p75: 1500, p90: 2000 },
output_tokens: { p10: 200, p25: 350, median: 500, p75: 750, p90: 1000 },
cost_usd_micros: { median: 50000 },
duration_ms: { median: 30000 },
correction_factors: { input: 1.1, output: 1.05 },
quality: { gate_pass_rate: 0.85, success_rate: 0.9 },
},
metadata: {
sample_size: sampleSize,
fallback_level: 0,
confidence: 'high',
last_updated: new Date().toISOString(),
cache_hit: false,
},
};
}
describe('PredictionCache', () => {
beforeEach(() => {
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
it('should return null for cache miss', () => {
const cache = new PredictionCache(60_000);
const result = cache.get(makeQuery());
expect(result).toBeNull();
});
it('should return cached prediction on hit', () => {
const cache = new PredictionCache(60_000);
const query = makeQuery();
const response = makeResponse();
cache.set(query, response);
const result = cache.get(query);
expect(result).toEqual(response);
});
it('should return null when entry has expired', () => {
const cache = new PredictionCache(60_000); // 60s TTL
const query = makeQuery();
const response = makeResponse();
cache.set(query, response);
expect(cache.get(query)).toEqual(response);
// Advance time past TTL
vi.advanceTimersByTime(61_000);
expect(cache.get(query)).toBeNull();
});
it('should differentiate queries by all fields', () => {
const cache = new PredictionCache(60_000);
const query1 = makeQuery({ task_type: TaskType.IMPLEMENTATION });
const query2 = makeQuery({ task_type: TaskType.DEBUGGING });
const response1 = makeResponse(100);
const response2 = makeResponse(200);
cache.set(query1, response1);
cache.set(query2, response2);
expect(cache.get(query1)?.metadata.sample_size).toBe(100);
expect(cache.get(query2)?.metadata.sample_size).toBe(200);
});
it('should clear all entries', () => {
const cache = new PredictionCache(60_000);
cache.set(makeQuery(), makeResponse());
cache.set(makeQuery({ task_type: TaskType.TESTING }), makeResponse());
expect(cache.size).toBe(2);
cache.clear();
expect(cache.size).toBe(0);
expect(cache.get(makeQuery())).toBeNull();
});
it('should overwrite existing entry with same query', () => {
const cache = new PredictionCache(60_000);
const query = makeQuery();
cache.set(query, makeResponse(100));
cache.set(query, makeResponse(200));
expect(cache.size).toBe(1);
expect(cache.get(query)?.metadata.sample_size).toBe(200);
});
it('should clean expired entry on get', () => {
const cache = new PredictionCache(60_000);
const query = makeQuery();
cache.set(query, makeResponse());
expect(cache.size).toBe(1);
vi.advanceTimersByTime(61_000);
// get() should clean up
cache.get(query);
expect(cache.size).toBe(0);
});
});
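The TTL semantics tested above — keyed by every query field, expired entries deleted on read — can be illustrated with a minimal Map-based cache. A hypothetical sketch, not the shipped `prediction-cache.ts`; the generic query/value types stand in for `PredictionQuery` and `PredictionResponse`.

```typescript
interface CacheEntry<V> {
  value: V;
  expiresAt: number;
}

// Minimal TTL cache keyed by the full query object.
class PredictionCache<Q extends Record<string, unknown>, V> {
  private entries = new Map<string, CacheEntry<V>>();
  constructor(private readonly ttlMs: number) {}

  get size(): number {
    return this.entries.size;
  }

  // Serialize with sorted keys so field order never changes the cache key.
  private key(query: Q): string {
    return JSON.stringify(query, Object.keys(query).sort());
  }

  set(query: Q, value: V): void {
    this.entries.set(this.key(query), {
      value,
      expiresAt: Date.now() + this.ttlMs,
    });
  }

  // Returns null on miss or expiry; expired entries are removed on read.
  get(query: Q): V | null {
    const k = this.key(query);
    const entry = this.entries.get(k);
    if (!entry) return null;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(k);
      return null;
    }
    return entry.value;
  }

  clear(): void {
    this.entries.clear();
  }
}
```

Cleaning up lazily on `get()` (rather than with a sweeper timer) keeps the cache dependency-free and matches the "clean expired entry on get" test.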

tests/queue.test.ts

@@ -0,0 +1,150 @@
import { describe, it, expect } from 'vitest';
import { EventQueue } from '../src/queue.js';
import {
TaskType,
Complexity,
Harness,
Provider,
Outcome,
TaskCompletionEvent,
} from '../src/types/events.js';
function makeEvent(id: string): TaskCompletionEvent {
return {
instance_id: 'test-instance',
event_id: id,
schema_version: '1.0',
timestamp: new Date().toISOString(),
task_duration_ms: 1000,
task_type: TaskType.IMPLEMENTATION,
complexity: Complexity.MEDIUM,
harness: Harness.CLAUDE_CODE,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 1000,
estimated_output_tokens: 500,
actual_input_tokens: 1100,
actual_output_tokens: 550,
estimated_cost_usd_micros: 50000,
actual_cost_usd_micros: 55000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.5,
outcome: Outcome.SUCCESS,
retry_count: 0,
};
}
describe('EventQueue', () => {
it('should enqueue and drain events', () => {
const queue = new EventQueue(10);
const event = makeEvent('e1');
queue.enqueue(event);
expect(queue.size).toBe(1);
expect(queue.isEmpty).toBe(false);
const drained = queue.drain(10);
expect(drained).toHaveLength(1);
expect(drained[0].event_id).toBe('e1');
expect(queue.isEmpty).toBe(true);
});
it('should respect maxSize with FIFO eviction', () => {
const queue = new EventQueue(3);
queue.enqueue(makeEvent('e1'));
queue.enqueue(makeEvent('e2'));
queue.enqueue(makeEvent('e3'));
expect(queue.size).toBe(3);
// Adding a 4th should evict the oldest (e1)
queue.enqueue(makeEvent('e4'));
expect(queue.size).toBe(3);
const drained = queue.drain(10);
expect(drained.map((e) => e.event_id)).toEqual(['e2', 'e3', 'e4']);
});
it('should drain up to maxItems', () => {
const queue = new EventQueue(10);
queue.enqueue(makeEvent('e1'));
queue.enqueue(makeEvent('e2'));
queue.enqueue(makeEvent('e3'));
const drained = queue.drain(2);
expect(drained).toHaveLength(2);
expect(drained.map((e) => e.event_id)).toEqual(['e1', 'e2']);
expect(queue.size).toBe(1);
});
it('should remove drained items from the queue', () => {
const queue = new EventQueue(10);
queue.enqueue(makeEvent('e1'));
queue.enqueue(makeEvent('e2'));
queue.drain(1);
expect(queue.size).toBe(1);
const remaining = queue.drain(10);
expect(remaining[0].event_id).toBe('e2');
});
it('should report isEmpty correctly', () => {
const queue = new EventQueue(5);
expect(queue.isEmpty).toBe(true);
queue.enqueue(makeEvent('e1'));
expect(queue.isEmpty).toBe(false);
queue.drain(1);
expect(queue.isEmpty).toBe(true);
});
it('should report size correctly', () => {
const queue = new EventQueue(10);
expect(queue.size).toBe(0);
queue.enqueue(makeEvent('e1'));
expect(queue.size).toBe(1);
queue.enqueue(makeEvent('e2'));
expect(queue.size).toBe(2);
queue.drain(1);
expect(queue.size).toBe(1);
});
it('should return empty array when draining empty queue', () => {
const queue = new EventQueue(5);
const drained = queue.drain(10);
expect(drained).toEqual([]);
});
it('should prepend events to the front of the queue', () => {
const queue = new EventQueue(10);
queue.enqueue(makeEvent('e3'));
queue.prepend([makeEvent('e1'), makeEvent('e2')]);
expect(queue.size).toBe(3);
const drained = queue.drain(10);
expect(drained.map((e) => e.event_id)).toEqual(['e1', 'e2', 'e3']);
});
it('should respect maxSize when prepending', () => {
const queue = new EventQueue(3);
queue.enqueue(makeEvent('e3'));
queue.enqueue(makeEvent('e4'));
// Only 1 slot available, so only first event should be prepended
queue.prepend([makeEvent('e1'), makeEvent('e2')]);
expect(queue.size).toBe(3);
const drained = queue.drain(10);
expect(drained.map((e) => e.event_id)).toEqual(['e1', 'e3', 'e4']);
});
});
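The contract exercised above — bounded FIFO with oldest-first eviction, partial drain, and capacity-aware prepend — can be sketched as a small generic class. This is an illustration of the behavior the tests describe, not the package's actual `queue.ts`; the real EventQueue holds `TaskCompletionEvent`s rather than a type parameter.

```typescript
// Minimal sketch of a bounded FIFO queue matching the tested contract.
class EventQueue<T> {
  private items: T[] = [];
  constructor(private readonly maxSize: number) {}

  get size(): number {
    return this.items.length;
  }

  get isEmpty(): boolean {
    return this.items.length === 0;
  }

  // Append; when full, evict the oldest item first (FIFO eviction).
  enqueue(item: T): void {
    if (this.items.length >= this.maxSize) {
      this.items.shift();
    }
    this.items.push(item);
  }

  // Remove and return up to maxItems from the front.
  drain(maxItems: number): T[] {
    return this.items.splice(0, maxItems);
  }

  // Re-insert items at the front, keeping only as many as capacity allows.
  prepend(items: T[]): void {
    const free = this.maxSize - this.items.length;
    this.items = [...items.slice(0, free), ...this.items];
  }
}
```

Note the asymmetry the tests pin down: `enqueue` evicts old items to make room for new ones, while `prepend` (used for re-enqueueing failed batches) silently drops the overflow instead of evicting queued events.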

tests/submitter.test.ts

@@ -0,0 +1,216 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { BatchSubmitter } from '../src/submitter.js';
import { ResolvedConfig } from '../src/config.js';
import {
TaskCompletionEvent,
TaskType,
Complexity,
Harness,
Provider,
Outcome,
} from '../src/types/events.js';
function makeConfig(overrides: Partial<ResolvedConfig> = {}): ResolvedConfig {
return {
serverUrl: 'https://tel.example.com',
apiKey: 'a'.repeat(64),
instanceId: 'test-instance-id',
enabled: true,
submitIntervalMs: 300_000,
maxQueueSize: 1000,
batchSize: 100,
requestTimeoutMs: 10_000,
predictionCacheTtlMs: 21_600_000,
dryRun: false,
maxRetries: 3,
onError: () => {},
...overrides,
};
}
function makeEvent(id = 'evt-1'): TaskCompletionEvent {
return {
instance_id: 'test-instance-id',
event_id: id,
schema_version: '1.0',
timestamp: new Date().toISOString(),
task_duration_ms: 5000,
task_type: TaskType.IMPLEMENTATION,
complexity: Complexity.MEDIUM,
harness: Harness.CLAUDE_CODE,
model: 'claude-3-opus',
provider: Provider.ANTHROPIC,
estimated_input_tokens: 1000,
estimated_output_tokens: 500,
actual_input_tokens: 1100,
actual_output_tokens: 550,
estimated_cost_usd_micros: 50000,
actual_cost_usd_micros: 55000,
quality_gate_passed: true,
quality_gates_run: [],
quality_gates_failed: [],
context_compactions: 0,
context_rotations: 0,
context_utilization_final: 0.5,
outcome: Outcome.SUCCESS,
retry_count: 0,
};
}
describe('BatchSubmitter', () => {
let fetchSpy: ReturnType<typeof vi.fn>;
beforeEach(() => {
vi.useFakeTimers();
fetchSpy = vi.fn();
vi.stubGlobal('fetch', fetchSpy);
});
afterEach(() => {
vi.useRealTimers();
vi.unstubAllGlobals();
});
it('should submit a batch successfully', async () => {
const responseBody = {
accepted: 1,
rejected: 0,
results: [{ event_id: 'evt-1', status: 'accepted' }],
};
fetchSpy.mockResolvedValueOnce({
ok: true,
status: 202,
json: () => Promise.resolve(responseBody),
});
const submitter = new BatchSubmitter(makeConfig());
const result = await submitter.submit([makeEvent()]);
expect(result.success).toBe(true);
expect(result.response).toEqual(responseBody);
expect(fetchSpy).toHaveBeenCalledTimes(1);
const [url, options] = fetchSpy.mock.calls[0];
expect(url).toBe('https://tel.example.com/v1/events/batch');
expect(options.method).toBe('POST');
expect(options.headers['Authorization']).toBe(`Bearer ${'a'.repeat(64)}`);
});
it('should handle 429 with Retry-After header', async () => {
const headers = new Map([['Retry-After', '1']]);
fetchSpy.mockResolvedValueOnce({
ok: false,
status: 429,
headers: { get: (name: string) => headers.get(name) ?? null },
});
// After retry, succeed
const responseBody = {
accepted: 1,
rejected: 0,
results: [{ event_id: 'evt-1', status: 'accepted' }],
};
fetchSpy.mockResolvedValueOnce({
ok: true,
status: 202,
json: () => Promise.resolve(responseBody),
});
const submitter = new BatchSubmitter(makeConfig({ maxRetries: 1 }));
// Run submit in background and advance timers
const submitPromise = submitter.submit([makeEvent()]);
// Advance enough to cover Retry-After (1s) + backoff with jitter (~1-1.5s)
await vi.advanceTimersByTimeAsync(10_000);
const result = await submitPromise;
expect(result.success).toBe(true);
expect(fetchSpy).toHaveBeenCalledTimes(2);
});
it('should handle 403 error', async () => {
fetchSpy.mockResolvedValueOnce({
ok: false,
status: 403,
statusText: 'Forbidden',
});
const submitter = new BatchSubmitter(makeConfig({ maxRetries: 0 }));
const result = await submitter.submit([makeEvent()]);
expect(result.success).toBe(false);
expect(result.error?.message).toContain('Forbidden');
expect(result.error?.message).toContain('403');
});
it('should retry on network error with backoff', async () => {
fetchSpy.mockRejectedValueOnce(new Error('Network error'));
fetchSpy.mockResolvedValueOnce({
ok: true,
status: 202,
json: () =>
Promise.resolve({
accepted: 1,
rejected: 0,
results: [{ event_id: 'evt-1', status: 'accepted' }],
}),
});
const submitter = new BatchSubmitter(makeConfig({ maxRetries: 1 }));
const submitPromise = submitter.submit([makeEvent()]);
// Advance past backoff delay
await vi.advanceTimersByTimeAsync(5000);
const result = await submitPromise;
expect(result.success).toBe(true);
expect(fetchSpy).toHaveBeenCalledTimes(2);
});
it('should fail after max retries exhausted', async () => {
fetchSpy.mockRejectedValue(new Error('Network error'));
const submitter = new BatchSubmitter(makeConfig({ maxRetries: 2 }));
const submitPromise = submitter.submit([makeEvent()]);
// Advance timers to allow all retries
await vi.advanceTimersByTimeAsync(120_000);
const result = await submitPromise;
expect(result.success).toBe(false);
expect(result.error?.message).toBe('Network error');
});
it('should not call fetch in dryRun mode', async () => {
const submitter = new BatchSubmitter(makeConfig({ dryRun: true }));
const result = await submitter.submit([makeEvent('evt-1'), makeEvent('evt-2')]);
expect(result.success).toBe(true);
expect(result.response?.accepted).toBe(2);
expect(result.response?.rejected).toBe(0);
expect(fetchSpy).not.toHaveBeenCalled();
});
it('should handle request timeout via AbortController', async () => {
fetchSpy.mockImplementation(
(_url: string, options: { signal: AbortSignal }) =>
new Promise((_resolve, reject) => {
options.signal.addEventListener('abort', () => {
reject(new DOMException('The operation was aborted.', 'AbortError'));
});
}),
);
const submitter = new BatchSubmitter(
makeConfig({ requestTimeoutMs: 1000, maxRetries: 0 }),
);
const submitPromise = submitter.submit([makeEvent()]);
await vi.advanceTimersByTimeAsync(2000);
const result = await submitPromise;
expect(result.success).toBe(false);
expect(result.error?.message).toContain('aborted');
});
});
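The retry timing exercised above — exponential backoff with jitter, with a 429 `Retry-After` header taking precedence — can be sketched as a delay calculator. The base delay, cap, and jitter range here are assumptions chosen to be consistent with the timer advances in these tests, not the package's actual constants.

```typescript
// Compute the delay before retry `attempt` (0-based).
// A Retry-After value (seconds, from a 429) takes precedence over backoff.
function retryDelayMs(
  attempt: number,
  retryAfterSeconds: number | null,
  baseMs = 1000,   // assumed base delay
  maxMs = 30000,   // assumed cap
): number {
  if (retryAfterSeconds !== null) {
    return retryAfterSeconds * 1000;
  }
  // Exponential growth capped at maxMs, plus up to 50% jitter
  // so many clients retrying at once don't synchronize.
  const exp = Math.min(baseMs * 2 ** attempt, maxMs);
  return exp + Math.random() * exp * 0.5;
}
```

With these assumed constants, attempt 0 lands in [1s, 1.5s), which is why the 429 test advances the clock well past `Retry-After` (1s) plus one jittered backoff before awaiting the submit promise.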

tsconfig.build.json

@@ -0,0 +1,4 @@
{
"extends": "./tsconfig.json",
"exclude": ["node_modules", "dist", "tests", "**/*.test.ts"]
}

tsconfig.json

@@ -0,0 +1,23 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"moduleResolution": "bundler",
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"isolatedModules": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist", "tests"]
}

vitest.config.ts

@@ -0,0 +1,19 @@
import { defineConfig } from 'vitest/config';
export default defineConfig({
test: {
globals: true,
coverage: {
provider: 'v8',
reporter: ['text', 'text-summary'],
include: ['src/**/*.ts'],
exclude: ['src/types/**'],
thresholds: {
statements: 85,
branches: 85,
functions: 85,
lines: 85,
},
},
},
});