Implement the main orchestration loop that coordinates all components:
- Queue processing with priority sorting (issues by number)
- Integration with ContextMonitor for tracking agent context usage
- Integration with QualityOrchestrator for running quality gates
- Integration with ForcedContinuationService for rejection prompts
- Metrics tracking (processed_count, success_count, rejection_count)
- Graceful start/stop with proper lifecycle management
- Error handling at all levels (spawn, context, quality, continuation)
The OrchestrationLoop flow:
1. Read issue queue (priority sorted by issue number)
2. Mark issue as in progress
3. Spawn agent (stub implementation for Phase 0)
4. Check context usage via ContextMonitor
5. Run quality gates via QualityOrchestrator
6. On approval: mark complete, increment success count
7. On rejection: generate continuation prompt, increment rejection count
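A minimal sketch of this flow as a single method, assuming collaborator names that mirror the components above (the queue, spawn, monitor, gate, and continuation APIs are all illustrative, not the actual implementation):

```python
import logging

logger = logging.getLogger(__name__)

class OrchestrationLoop:
    # Collaborators (queue, context_monitor, quality_orchestrator,
    # continuation_service, spawn_agent) are assumed names for this sketch.
    async def process_next_issue(self) -> None:
        issue = self.queue.next_ready()                  # 1. priority-sorted by number
        if issue is None:
            return
        self.queue.mark_in_progress(issue)               # 2. claim the issue
        agent = await self.spawn_agent(issue)            # 3. stub in Phase 0
        usage = await self.context_monitor.check(agent)  # 4. context usage
        logger.debug("context usage: %.0f%%", usage.ratio * 100)
        verdict = await self.quality_orchestrator.run_gates(issue)  # 5. gates
        self.processed_count += 1
        if verdict.approved:                             # 6. approval path
            self.queue.mark_complete(issue)
            self.success_count += 1
        else:                                            # 7. rejection path
            prompt = self.continuation_service.build_prompt(verdict)
            await agent.send(prompt)
            self.rejection_count += 1
```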
99% test coverage for coordinator.py (183 statements, 2 missed).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add comprehensive test suite for OrchestrationLoop class that integrates:
- Queue processing with priority sorting
- Agent assignment (50% rule)
- Quality gate verification on completion claims
- Rejection handling with forced continuation prompts
- Context monitoring during agent execution
- Lifecycle management (start/stop)
- Error handling for all edge cases
- Metrics tracking (processed, success, rejection counts)
33 new tests covering all acceptance criteria.
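For the rejection path, a representative test shape might look like this (the `loop` fixture, the mocked collaborator names, and the pytest-asyncio usage are assumptions about the real suite, not excerpts from it):

```python
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock

import pytest

@pytest.mark.asyncio  # assumes pytest-asyncio, consistent with the async loop
async def test_rejection_increments_count_and_sends_continuation(loop):
    # `loop` is an assumed fixture: an OrchestrationLoop wired with mocked
    # collaborators. Force the quality gates to reject the completion claim.
    loop.quality_orchestrator.run_gates = AsyncMock(
        return_value=SimpleNamespace(approved=False)
    )
    loop.continuation_service.build_prompt = MagicMock(return_value="continue")
    await loop.process_next_issue()
    assert loop.rejection_count == 1
    loop.continuation_service.build_prompt.assert_called_once()
```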
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixed code review findings:
- Removed unused imports (AsyncMock, MagicMock)
- Fixed line length violation in test_forced_continuation.py
All 15 tests still passing after fixes.
Implement comprehensive test suite for four core quality gates:
- BuildGate: Tests mypy type checking enforcement
- LintGate: Tests ruff linting with warnings as failures
- TestGate: Tests pytest execution requiring 100% pass rate
- CoverageGate: Tests coverage enforcement with 85% minimum
All tests follow TDD methodology - written before implementation.
Total: 36 tests covering success, failure, and edge cases.
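One hedged example of the shape these tests take, using LintGate (the module path, the `run()` method, and the `passed` attribute are assumptions for illustration):

```python
from subprocess import CompletedProcess
from unittest.mock import patch

from gates import LintGate  # assumed module path


def test_lint_gate_treats_warnings_as_failures():
    # Simulate ruff exiting non-zero on a warning-level finding; assumes the
    # gates module invokes ruff via subprocess.run.
    fake = CompletedProcess(args=["ruff", "check", "."], returncode=1,
                            stdout="W291 trailing whitespace")
    with patch("gates.subprocess.run", return_value=fake):
        result = LintGate().run()
    assert not result.passed
```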
Related to #147
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add comprehensive cost optimization test scenarios and validation report.
Test Scenarios Added (10 new tests):
- Low difficulty assigns to MiniMax/GLM (free agents)
- Medium difficulty assigns to GLM when within capacity
- High difficulty assigns to Opus (only capable agent)
- Oversized issues rejected with actionable error
- Boundary conditions at capacity limits
- Aggregate cost optimization across all scenarios
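A sketch of the first scenario (the agent_profiles module name, the assign_agent signature, and the issue shape are assumptions; agent_assignment.py is named in this commit):

```python
from types import SimpleNamespace

from agent_assignment import assign_agent  # module named above; signature assumed
from agent_profiles import AgentName, Capability, get_agent_profile  # name assumed


def test_low_difficulty_prefers_free_agents():
    # The issue shape is illustrative; the point is that LOW-difficulty work
    # should land on a zero-cost, self-hosted agent.
    issue = SimpleNamespace(difficulty=Capability.LOW, estimated_tokens=20_000)
    agent = assign_agent(issue)
    assert agent in {AgentName.MINIMAX, AgentName.GLM}
    assert get_agent_profile(agent).cost_per_mtok == 0
```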
Results:
- All 33 tests passing (23 existing + 10 new)
- 100% coverage of agent_assignment.py (36/36 statements)
- Cost savings validation: 50%+ in aggregate scenarios
- Real-world projection: 70%+ savings with typical workload
Documentation:
- Created cost-optimization-validation.md with detailed analysis
- Documents cost savings for each scenario
- Validates all acceptance criteria from COORD-006
Completes Phase 2 (M4.1-Coordinator) testing requirements.
Fixes #146
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Implements the Coordinator class with main orchestration loop:
- Async loop architecture with configurable poll interval
- process_queue() method gets next ready issue and spawns agent (stub)
- Graceful shutdown handling with stop() method
- Error handling that allows loop to continue after failures
- Logging for all actions (start, stop, processing, errors)
- Integration with QueueManager from #159
- Active agent tracking for future agent management
Configuration settings added:
- COORDINATOR_POLL_INTERVAL (default: 5.0s)
- COORDINATOR_MAX_CONCURRENT_AGENTS (default: 10)
- COORDINATOR_ENABLED (default: true)
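A minimal sketch of this loop shape, with defaults mirroring the settings above (QueueManager method names and the agent bookkeeping are assumptions):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class Coordinator:
    def __init__(self, queue, poll_interval: float = 5.0,   # COORDINATOR_POLL_INTERVAL
                 max_concurrent_agents: int = 10):           # COORDINATOR_MAX_CONCURRENT_AGENTS
        self.queue = queue
        self.poll_interval = poll_interval
        self.max_concurrent_agents = max_concurrent_agents
        self.active_agents: dict[str, object] = {}  # tracked for future management
        self._running = False

    async def run(self) -> None:
        self._running = True
        logger.info("coordinator started")
        while self._running:
            try:
                await self.process_queue()
            except Exception:
                # Errors are logged but never kill the loop.
                logger.exception("queue processing failed; continuing")
            await asyncio.sleep(self.poll_interval)
        logger.info("coordinator stopped")

    async def process_queue(self) -> None:
        issue = self.queue.get_next_ready()  # QueueManager API name assumed
        if issue is not None and len(self.active_agents) < self.max_concurrent_agents:
            logger.info("processing issue %s", issue)
            # spawn agent here (stub at this stage)

    def stop(self) -> None:
        self._running = False
```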
Tests: 27 new tests covering all acceptance criteria
Coverage: 92% overall (100% for coordinator.py)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add Capability enum (HIGH, MEDIUM, LOW) for agent difficulty levels
- Add AgentName enum for all 5 agents (opus, sonnet, haiku, glm, minimax)
- Implement AgentProfile data structure with validation
- context_limit: max tokens for context window
- cost_per_mtok: cost per million tokens (0 for self-hosted)
- capabilities: list of difficulty levels the agent handles
- best_for: description of optimal use cases
- Define profiles for all 5 agents with specifications:
- Anthropic models (opus, sonnet, haiku): 200K context, various costs
- Self-hosted models (glm, minimax): 128K context, free
- Implement get_agent_profile() function for profile lookup
- Add comprehensive test suite (37 tests, 100% coverage)
- Profile data structure validation
- All 5 predefined profiles exist and are correct
- Capability enum and AgentName enum tests
- best_for validation and capability matching
- Consistency checks across profiles
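A sketch of the structures this describes (enum string values, the GLM capability set, and the best_for text are illustrative; only one table entry is shown and the Anthropic costs are omitted rather than guessed):

```python
from dataclasses import dataclass
from enum import Enum


class Capability(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


class AgentName(Enum):
    OPUS = "opus"
    SONNET = "sonnet"
    HAIKU = "haiku"
    GLM = "glm"
    MINIMAX = "minimax"


@dataclass(frozen=True)
class AgentProfile:
    context_limit: int              # max tokens for the context window
    cost_per_mtok: float            # cost per million tokens (0 for self-hosted)
    capabilities: list[Capability]  # difficulty levels the agent handles
    best_for: str                   # description of optimal use cases


AGENT_PROFILES: dict[AgentName, AgentProfile] = {
    AgentName.GLM: AgentProfile(
        context_limit=128_000,  # self-hosted models: 128K context, free
        cost_per_mtok=0.0,
        capabilities=[Capability.LOW, Capability.MEDIUM],
        best_for="routine changes on self-hosted capacity",  # illustrative text
    ),
}


def get_agent_profile(name: AgentName) -> AgentProfile:
    return AGENT_PROFILES[name]
```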
Fixes #144
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Implements ContextMonitor class with real-time token usage tracking:
- COMPACT_THRESHOLD at 0.80 (80% triggers compaction)
- ROTATE_THRESHOLD at 0.95 (95% triggers rotation)
- Polls the Claude API for context usage
- Returns the appropriate ContextAction based on thresholds
- Background monitoring loop (10-second polling)
- Logs usage over time
- Error handling and recovery
Added ContextUsage model for tracking agent token consumption.
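The threshold logic reduces to a small decision function (the ContextAction member names and this free-function shape are assumptions; the class wraps the decision in its polling loop):

```python
from enum import Enum

COMPACT_THRESHOLD = 0.80  # 80% of the window triggers compaction
ROTATE_THRESHOLD = 0.95   # 95% triggers rotation


class ContextAction(Enum):  # member names assumed
    NONE = "none"
    COMPACT = "compact"
    ROTATE = "rotate"


def action_for_usage(used_tokens: int, context_limit: int) -> ContextAction:
    """Threshold decision only; the real monitor also polls the Claude API
    every 10 seconds, logs usage over time, and recovers from errors."""
    ratio = used_tokens / context_limit
    if ratio >= ROTATE_THRESHOLD:
        return ContextAction.ROTATE
    if ratio >= COMPACT_THRESHOLD:
        return ContextAction.COMPACT
    return ContextAction.NONE
```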
Tests:
- 25 test cases covering all functionality
- 100% coverage for context_monitor.py and models.py
- Mocked API responses for different usage levels
- Background monitoring and threshold detection
- Error handling verification
Quality gates:
- Type checking: PASS (mypy)
- Linting: PASS (ruff)
- Tests: PASS (25/25)
- Coverage: 100% for new files, 95.43% overall
Fixes #155
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
docker:dind requires privileged mode and a running daemon.
Kaniko builds containers without needing a Docker daemon:
- Runs unprivileged
- Reads credentials from /kaniko/.docker/config.json
- Designed for CI environments like Woodpecker
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The buildx plugin's credential handling doesn't work properly with
Harbor. The docker-auth-test step proved that standard docker login
works, so we switch to:
- docker:dind image
- Manual docker login before build
- Standard docker build and docker push
This bypasses buildx's separate credential store issue.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Added a docker-auth-test step that:
- Shows credential lengths (for debugging)
- Tests docker login directly with Harbor
This will help identify if the issue is with secrets injection
or with how buildx handles authentication.
Reverted to woodpeckerci/plugin-docker-buildx since plugins/docker
requires server-side WOODPECKER_PLUGINS_PRIVILEGED config.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The repo setting should NOT include the registry prefix - the
registry setting handles that separately.
Changed repo: reg.mosaicstack.dev/mosaic/api -> repo: mosaic/api
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The woodpeckerci/plugin-docker-buildx plugin was failing with
"insufficient_scope: authorization failed" when pushing to Harbor,
even though the same credentials worked locally.
Switched to the standard plugins/docker, which uses traditional
docker login authentication and may work better with Harbor.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Add .dockerignore to exclude node_modules, dist, and build artifacts
- Add pre/post build directory listings to diagnose dist not found issue
- Disable turbo cache temporarily with --force flag
- Add --verbosity=2 for more detailed turbo output
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The docker-buildx plugin automatically prepends registry to repo,
so having the full URL caused doubled paths:
reg.mosaicstack.dev/reg.mosaicstack.dev/mosaic/api
Changed from: repo: reg.mosaicstack.dev/mosaic/api
Changed to: repo: mosaic/api
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Docker COPY replaces directory contents, so copying source code
after node_modules was wiping the deps. Reordered to:
1. Copy source code first
2. Copy node_modules second (won't be overwritten)
Fixes API build failure: "dist not found"
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add docker-build-api, docker-build-web, docker-build-postgres steps
- Images pushed to reg.diversecanvas.com/mosaic/* on main/develop
- Create docker-compose.prod.yml for production deployments
- Add .env.prod.example with production configuration
Requires Harbor secrets in Woodpecker:
- harbor_username
- harbor_password
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Modules using AuthGuard in their controllers need to import AuthModule
to make AuthService available for dependency injection.
Fixed:
- ActivityModule
- WorkspaceSettingsModule
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Issues fixed:
1. Module not found: Added missing copy of apps/{api,web}/node_modules,
   which contain pnpm symlinks to the root node_modules
2. Healthcheck syntax: Fixed broken quoting from prettier reformatting;
   changed to CMD-SHELL with proper escaping
3. Removed obsolete version: "3.9" from docker-compose.yml
The apps need their own node_modules directories because pnpm uses
symlinks that point from apps/*/node_modules to node_modules/.pnpm/*
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Fixed the mismatch between environment variables:
- docker-compose now passes PORT (what NestJS/Next.js read) instead of API_PORT
- API_PORT/WEB_PORT control host mapping, PORT controls container
Changes:
- docker-compose: Pass PORT=${API_PORT} and PORT=${WEB_PORT} to containers
- docker-compose: Dynamic port mapping on both host and container sides
- docker-compose: Traefik labels use ${API_PORT}/${WEB_PORT} variables
- docker-compose: Healthchecks use PORT env var
- Dockerfiles: Removed hardcoded port values
- Dockerfiles: Healthchecks read PORT at runtime
This allows changing ports via the API_PORT/WEB_PORT environment variables
while ensuring all components (app, healthcheck, Traefik) use the correct port.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Added cache mounts for:
- pnpm store: Caches downloaded packages between builds
- TurboRepo: Caches build outputs between builds
This significantly speeds up subsequent builds:
- First build: Full download and compile
- Subsequent builds: Only changed packages are re-downloaded/rebuilt
Requires Docker BuildKit (default in Docker 23+).
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The production stage was failing because it tried to copy the public
directory, which doesn't exist in the source. Added mkdir -p to ensure
the directory exists (even if empty) before the production stage
tries to copy it.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When detecting existing configuration, the setup script now shows a
detailed breakdown instead of just "Current base URL: ...":
Mode: Traefik reverse proxy
Web URL: https://app.mosaicstack.dev
API URL: https://api.mosaicstack.dev
Auth: https://auth.mosaicstack.dev
This makes it clear:
- What access mode is configured (localhost/IP/domain/Traefik)
- What each URL is used for (Web UI, API, Authentication)
- Whether the configuration needs to be changed
Added helper functions:
- detect_access_mode(): Determines mode from existing .env values
- display_access_config(): Formats the URL breakdown display
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
pnpm stores the Prisma client in the content-addressable store at
node_modules/.pnpm/.../.prisma, not at apps/api/node_modules/.prisma.
The production stage was trying to copy from the wrong location.
Additionally, running `pnpm install --prod` in production failed because:
1. The husky prepare script runs but husky is a devDependency
2. The Prisma client postinstall can't run without the prisma CLI
Fixed by copying the full node_modules from the builder stage, which
already has all dependencies properly installed and the Prisma client
generated in the correct pnpm store location.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Docker builds were failing because they ran `pnpm build` directly
in the app directories without first building workspace dependencies
(@mosaic/shared, @mosaic/ui). CI passed because it runs TurboRepo
from the root which respects the dependency graph.
Changed both Dockerfiles to use `pnpm turbo build --filter=@mosaic/{app}`
which ensures dependencies are built in the correct order:
- Web: @mosaic/config → @mosaic/shared → @mosaic/ui → @mosaic/web
- API: @mosaic/config → @mosaic/shared → prisma:generate → @mosaic/api
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The husky prepare script was failing during Docker production builds
because husky is a devDependency and isn't available when running
`pnpm install --prod --frozen-lockfile`.
Changed from `husky install` (deprecated in v9+) to `husky || true`
which gracefully handles the case when husky isn't installed.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>