# Issue #155: Build Basic Context Monitor

## Objective

Build a context monitoring service that tracks agent token usage in real time and identifies threshold crossings.

## Implementation Approach

Following TDD principles:

1. **RED** - Created a comprehensive test suite first (25 test cases)
2. **GREEN** - Implemented the ContextMonitor class to pass all tests
3. **REFACTOR** - Applied linting and type checking

## Implementation Details

### Files Created

1. **src/context_monitor.py** - Main ContextMonitor class
   - Polls the Claude API for context usage
   - Defines COMPACT_THRESHOLD (0.80) and ROTATE_THRESHOLD (0.95)
   - Returns the appropriate ContextAction based on thresholds
   - Background monitoring loop with a configurable polling interval
   - Error handling and recovery
   - Usage history tracking
2. **src/models.py** - Data models
   - `ContextAction` enum: CONTINUE, COMPACT, ROTATE_SESSION
   - `ContextUsage` class: tracks agent token consumption
   - `IssueMetadata` model: from issue #154 (parser)
3. **tests/test_context_monitor.py** - Comprehensive test suite
   - 25 test cases covering all functionality
   - Mocked API responses for different usage levels
   - Background monitoring and threshold detection tests
   - Error handling verification
   - Edge case coverage

### Key Features

**Threshold-Based Actions:**

- Below 80%: CONTINUE (keep working)
- 80-94%: COMPACT (summarize and free context)
- 95%+: ROTATE_SESSION (spawn a fresh agent)

**Background Monitoring:**

- Configurable poll interval (default: 10 seconds)
- Non-blocking async monitoring
- Callback-based notification system
- Graceful error handling
- Continues monitoring after API errors

**Usage Tracking:**

- Historical usage logging
- Per-agent usage history
- Percentage and ratio calculations
- Zero-safe division handling

## Progress

- [x] Write comprehensive test suite (TDD RED phase)
- [x] Implement ContextMonitor class (TDD GREEN phase)
- [x] Implement ContextUsage model
- [x] Add tests for IssueMetadata validators
- [x] Run quality gates
- [x] Fix linting issues (imports from collections.abc)
- [x] Verify type checking passes
- [x] Verify all tests pass (25/25)
- [x] Verify coverage meets 85% requirement (100% for new files)
- [x] Commit implementation

## Testing Results

### Test Suite

```
25 tests passed
- 4 tests for ContextUsage model
- 13 tests for ContextMonitor class
- 8 tests for IssueMetadata validators
```

### Coverage

```
context_monitor.py: 100% coverage (50/50 lines)
models.py: 100% coverage (48/48 lines)
Overall: 95.43% coverage (well above 85% requirement)
```

### Quality Gates

- ✅ Type checking: PASS (mypy)
- ✅ Linting: PASS (ruff)
- ✅ Tests: PASS (25/25)
- ✅ Coverage: 100% for new files

## Token Tracking

- Estimated: 49,400 tokens
- Actual: ~51,200 tokens (104% of estimate)
- Overhead: comprehensive test coverage, documentation

## Architecture Integration

The ContextMonitor integrates into the Non-AI Coordinator pattern:

```
┌────────────────────────────────────────────────────────┐
│        ORCHESTRATION LAYER (Non-AI Coordinator)        │
│                                                        │
│  ┌─────────────────────────────────────────┐           │
│  │ ContextMonitor (IMPLEMENTED)            │           │
│  │ - Polls Claude API every 10s            │           │
│  │ - Detects 80% threshold → COMPACT       │           │
│  │ - Detects 95% threshold → ROTATE        │           │
│  └─────────────────────────────────────────┘           │
│                      │                                 │
│                      ▼                                 │
│  ┌─────────────────────────────────────────┐           │
│  │ Agent Coordinator (FUTURE)              │           │
│  │ - Assigns issues to agents              │           │
│  │ - Spawns new sessions on rotation       │           │
│  │ - Triggers compaction                   │           │
│  └─────────────────────────────────────────┘           │
└────────────────────────────────────────────────────────┘
```

## Usage Example

```python
import asyncio

from src.context_monitor import ContextMonitor
from src.models import ContextAction

# Create monitor with 10-second polling
# (claude_client is constructed elsewhere in the application)
monitor = ContextMonitor(api_client=claude_client, poll_interval=10.0)

# Check current usage (runs inside an async context)
action = await monitor.determine_action("agent-123")
if action == ContextAction.COMPACT:
    # Trigger compaction
    print("Agent hit 80% threshold - compacting context")
elif action == ContextAction.ROTATE_SESSION:
    # Spawn new agent
    print("Agent hit 95% threshold - rotating session")

# Start background monitoring; trigger_compaction and spawn_new_agent
# are application-level handlers defined elsewhere
def on_threshold(agent_id: str, action: ContextAction) -> None:
    if action == ContextAction.COMPACT:
        trigger_compaction(agent_id)
    elif action == ContextAction.ROTATE_SESSION:
        spawn_new_agent(agent_id)

task = asyncio.create_task(
    monitor.start_monitoring("agent-123", on_threshold)
)

# Stop monitoring when done
monitor.stop_monitoring("agent-123")
await task
```

## Next Steps

Issue #155 is complete. This enables:

1. **Phase 2 (Agent Assignment)** - The context estimator can now check whether an issue fits in an agent's remaining context
2. **Phase 3 (Session Management)** - The coordinator can respond to COMPACT and ROTATE actions
3. **Phase 4 (Quality Gates)** - The quality orchestrator can monitor agent context during task execution

## Notes

- ContextMonitor uses async/await for non-blocking operation
- Background monitoring is cancellable and recovers from errors
- Usage history is tracked per agent for analytics
- Thresholds are class constants for easy configuration
- The API client is injected for testability

## Commit

```
feat(#155): Build basic context monitor

Fixes #155
```

Commit: d54c653
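## Appendix: Threshold Logic Sketch

For reference, the threshold-based actions and zero-safe ratio handling described under Key Features can be sketched as follows. This is a minimal reconstruction, not the code committed for this issue: the field names `tokens_used` and `max_tokens` and the helper `action_for` are illustrative assumptions, while the enum members and threshold constants come from the sections above.

```python
from dataclasses import dataclass
from enum import Enum


class ContextAction(Enum):
    CONTINUE = "continue"
    COMPACT = "compact"
    ROTATE_SESSION = "rotate_session"


@dataclass
class ContextUsage:
    tokens_used: int   # assumed field name
    max_tokens: int    # assumed field name

    @property
    def ratio(self) -> float:
        # Zero-safe division: report 0.0 when the window size is unknown
        if self.max_tokens <= 0:
            return 0.0
        return self.tokens_used / self.max_tokens


COMPACT_THRESHOLD = 0.80
ROTATE_THRESHOLD = 0.95


def action_for(usage: ContextUsage) -> ContextAction:
    # Check the higher threshold first so 95%+ rotates rather than compacts
    if usage.ratio >= ROTATE_THRESHOLD:
        return ContextAction.ROTATE_SESSION
    if usage.ratio >= COMPACT_THRESHOLD:
        return ContextAction.COMPACT
    return ContextAction.CONTINUE
```

Checking the higher threshold first keeps the 80-94% band mapped to COMPACT while anything at or above 95% rotates, matching the bands listed under Key Features.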