feat(#93): implement agent spawn via federation

Implements FED-010: Agent Spawn via Federation, enabling spawning and managing Claude agents on remote federated Mosaic Stack instances via the COMMAND message type.

Features:
- Federation agent command types (spawn, status, kill)
- FederationAgentService for handling agent operations
- Integration with the orchestrator's agent spawner/lifecycle services
- API endpoints for spawning, querying status, and killing agents
- Full command routing through the federation COMMAND infrastructure
- Comprehensive test coverage (12/12 tests passing)

Architecture:
- Hub → Spoke: spawn agents on remote instances
- Command flow: FederationController → FederationAgentService → CommandService → Remote Orchestrator
- Response handling: remote orchestrator returns agent status/results
- Security: connection validation, signature verification

Files created:
- apps/api/src/federation/types/federation-agent.types.ts
- apps/api/src/federation/federation-agent.service.ts
- apps/api/src/federation/federation-agent.service.spec.ts

Files modified:
- apps/api/src/federation/command.service.ts (agent command routing)
- apps/api/src/federation/federation.controller.ts (agent endpoints)
- apps/api/src/federation/federation.module.ts (service registration)
- apps/orchestrator/src/api/agents/agents.controller.ts (status endpoint)
- apps/orchestrator/src/api/agents/agents.module.ts (lifecycle integration)

Testing:
- 12/12 tests passing for FederationAgentService
- All command service tests passing
- TypeScript compilation successful
- Linting passed

Refs #93

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Get your API key from: https://platform.openai.com/api-keys
### OpenAI Model
The default embedding model is `text-embedding-3-small` (1536 dimensions). This provides:
- High quality embeddings
- Cost-effective pricing
- Fast generation speed
Search using vector similarity only.
**Request:**
```json
{
  "query": "database performance optimization"
}
```
**Query Parameters:**
- `page` (optional): Page number (default: 1)
- `limit` (optional): Results per page (default: 20)
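A sketch of how these defaults might map to an offset/limit lookup (the helper `toOffsetLimit` is illustrative, not taken from the actual API code):

```typescript
interface PageParams {
  page?: number;  // 1-based page number
  limit?: number; // results per page
}

// Defaults mirror the documented query parameters:
// page defaults to 1, limit defaults to 20.
function toOffsetLimit({ page = 1, limit = 20 }: PageParams) {
  return { offset: (page - 1) * limit, limit };
}
```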
**Response:**
```json
{
  "data": [
    ...
  ]
}
```
Combines vector similarity and full-text search using Reciprocal Rank Fusion (RRF).
**Request:**
```json
{
  "query": "indexing strategies"
}
```
**Benefits of Hybrid Search:**
- Best of both worlds: semantic understanding + keyword matching
- Better ranking for exact matches
- Improved recall and precision
**POST** `/api/knowledge/embeddings/batch`
Generate embeddings for all existing entries. Useful for:
- Initial setup after enabling semantic search
- Regenerating embeddings after model updates
**Request:**
```json
{
"status": "PUBLISHED"
}
```
**Response:**
```json
{
  "message": "Generated 42 embeddings out of 45 entries"
}
```

The generation happens asynchronously to avoid blocking API responses.
### Content Preparation
Before generating embeddings, content is prepared by:
1. Combining title and content
2. Weighting the title more heavily (it appears twice)

This improves semantic matching on titles.
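A minimal sketch of this preparation step, assuming a simple concatenation (the helper name `prepareForEmbedding` is hypothetical, not the actual service code):

```typescript
// The title is repeated so its terms weigh more heavily
// in the resulting embedding vector.
function prepareForEmbedding(title: string, content: string): string {
  return `${title}\n${title}\n${content}`;
}
```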
```
RRF(d) = sum(1 / (k + rank_i))
```
Where:
- `d` = document
- `k` = constant (60 is standard)
- `rank_i` = rank from source i
**Example:**
Document ranks in two searches:
- Vector search: rank 3
- Keyword search: rank 1
Higher RRF score = better combined ranking.
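The worked example above can be sketched as follows (the function name `rrfScore` is illustrative):

```typescript
// Reciprocal Rank Fusion: sum 1/(k + rank) over each result list
// the document appears in. k = 60 is the standard constant.
function rrfScore(ranks: number[], k = 60): number {
  return ranks.reduce((sum, rank) => sum + 1 / (k + rank), 0);
}

// Document ranked 3rd by vector search, 1st by keyword search:
const score = rrfScore([3, 1]); // 1/63 + 1/61 ≈ 0.0323
```

A document that ranks well in either list gets a meaningful score, and one that ranks well in both is boosted further.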
### Index Parameters
The HNSW index uses:
- `m = 16`: Max connections per layer (balances accuracy/memory)
- `ef_construction = 64`: Build quality (higher = more accurate, slower build)
### Cost (OpenAI API)
Using `text-embedding-3-small`:
- ~$0.00002 per 1000 tokens
- Average entry (~500 tokens): $0.00001
- 10,000 entries: ~$0.10
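The arithmetic behind these figures, as a sketch (the rate is the one quoted above; actual OpenAI pricing may change):

```typescript
const USD_PER_1K_TOKENS = 0.00002; // text-embedding-3-small rate from this section

function embeddingCostUSD(tokens: number): number {
  return (tokens / 1000) * USD_PER_1K_TOKENS;
}

const perEntry = embeddingCostUSD(500); // average entry, ≈ $0.00001
const tenThousandEntries = 10_000 * perEntry; // ≈ $0.10
```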
```bash
pnpm prisma migrate deploy
```
This creates:
- `knowledge_embeddings` table
- Vector index on embeddings
```bash
curl -X POST http://localhost:3001/api/knowledge/search/hybrid \
```
**Solutions:**
1. Verify index exists and is being used:
```sql
EXPLAIN ANALYZE
SELECT * FROM knowledge_embeddings
```