# Codex AI Review Setup for Mosaic Stack

**Added:** 2026-02-07 | **Status:** Ready for activation
## What Was Added

### 1. Woodpecker CI Pipeline
```
.woodpecker/
├── README.md                       # Setup and usage guide
├── codex-review.yml                # CI pipeline configuration
└── schemas/
    ├── code-review-schema.json     # Code review output schema
    └── security-review-schema.json # Security review output schema
```
The pipeline provides:
- ✅ AI-powered code quality review (correctness, testing, performance)
- ✅ AI-powered security review (OWASP Top 10, secrets, injection)
- ✅ Structured JSON output with actionable findings
- ✅ Automatic PR blocking on critical issues
### 2. Local Testing Scripts

Global scripts at `~/.claude/scripts/codex/` are available for local testing:

- `codex-code-review.sh` — Code quality review
- `codex-security-review.sh` — Security vulnerability review
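These scripts can also be wired into a git hook so a review runs before every push. The following is only a sketch of one way to do that; it assumes the global script paths above and a `.stats.blockers` field matching the jq threshold checks shown in the Troubleshooting section:

```shell
# Hypothetical pre-push hook (a sketch, not part of the shipped setup):
# run the Codex code review against main before each push and block the
# push if the JSON output reports any blockers.
mkdir -p .git/hooks
cat > .git/hooks/pre-push <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
OUT=$(mktemp)
~/.claude/scripts/codex/codex-code-review.sh -b main -o "$OUT"
# .stats.blockers mirrors the jq check used by the CI pipeline
BLOCKERS=$(jq '.stats.blockers // 0' "$OUT")
if [ "$BLOCKERS" -gt 0 ]; then
  echo "Push blocked: $BLOCKERS blocker(s). See $OUT for details."
  exit 1
fi
EOF
chmod +x .git/hooks/pre-push
```

Remove the hook (or push with `--no-verify`) if you want CI to be the only gate.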
## Prerequisites

### Required Tools (for local testing)

```bash
# Check if installed
codex --version  # OpenAI Codex CLI
jq --version     # JSON processor
```
### Installation

**Codex CLI:**

```bash
npm i -g @openai/codex
codex  # Authenticate on first run
```

**jq:**

```bash
# Arch Linux
sudo pacman -S jq

# Debian/Ubuntu
sudo apt install jq
```
## Usage

### Local Testing (Before Committing)

```bash
cd ~/src/mosaic-stack

# Review uncommitted changes
~/.claude/scripts/codex/codex-code-review.sh --uncommitted
~/.claude/scripts/codex/codex-security-review.sh --uncommitted

# Review against main branch
~/.claude/scripts/codex/codex-code-review.sh -b main
~/.claude/scripts/codex/codex-security-review.sh -b main

# Review specific commit
~/.claude/scripts/codex/codex-code-review.sh -c abc123f

# Save results to file
~/.claude/scripts/codex/codex-code-review.sh -b main -o review.json
```
## CI Pipeline Activation

### Step 1: Commit the Pipeline

```bash
cd ~/src/mosaic-stack
git add .woodpecker/ CODEX-SETUP.md
git commit -m "feat: Add Codex AI review pipeline for automated code/security reviews

Add Woodpecker CI pipeline for automated code quality and security reviews
on every pull request using OpenAI's Codex CLI.

Features:
- Code quality review (correctness, testing, performance, code quality)
- Security review (OWASP Top 10, secrets, injection, auth gaps)
- Parallel execution for fast feedback
- Fails on blockers or critical/high security findings
- Structured JSON output

Includes:
- .woodpecker/codex-review.yml — CI pipeline configuration
- .woodpecker/schemas/ — JSON schemas for structured output
- CODEX-SETUP.md — Setup documentation

To activate:
1. Add 'codex_api_key' secret to Woodpecker CI
2. Create a PR to trigger the pipeline
3. Review findings in CI logs

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>"
git push
```
### Step 2: Add Woodpecker Secret

1. Go to https://ci.mosaicstack.dev
2. Navigate to the `mosaic/stack` repository → Settings → Secrets
3. Add a new secret:
   - Name: `codex_api_key`
   - Value: (your OpenAI API key)
   - Events: Pull Request, Manual
### Step 3: Test the Pipeline

Create a test PR:

```bash
git checkout -b test/codex-review
echo "# Test" >> README.md
git add README.md
git commit -m "test: Trigger Codex review pipeline"
git push -u origin test/codex-review

# Create PR via gh or tea CLI
gh pr create --title "Test: Codex Review Pipeline" --body "Testing automated reviews"
```
## What Gets Reviewed

### Code Quality Review
- ✓ Correctness — Logic errors, edge cases, error handling
- ✓ Code Quality — Complexity, duplication, naming conventions
- ✓ Testing — Coverage, test quality, flaky tests
- ✓ Performance — N+1 queries, blocking operations
- ✓ Dependencies — Deprecated packages
- ✓ Documentation — Complex logic comments, API docs
Severity levels: `blocker`, `should-fix`, `suggestion`
### Security Review
- ✓ OWASP Top 10 — Injection, XSS, CSRF, auth bypass, etc.
- ✓ Secrets Detection — Hardcoded credentials, API keys
- ✓ Input Validation — Missing validation at boundaries
- ✓ Auth/Authz — Missing checks, privilege escalation
- ✓ Data Exposure — Sensitive data in logs
- ✓ Supply Chain — Vulnerable dependencies
Severity levels: `critical`, `high`, `medium`, `low`

Includes: CWE IDs, OWASP categories, remediation steps
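For a concrete picture, a single security finding in the structured output might look like the snippet below. The field names here are purely illustrative assumptions, not the actual schema from `.woodpecker/schemas/`:

```shell
# Illustrative only: a fabricated security finding, plus a jq one-liner
# that summarizes it. All field names are assumptions for this example.
cat > /tmp/finding.json <<'EOF'
{
  "severity": "high",
  "cwe": "CWE-89",
  "owasp": "A03:2021 - Injection",
  "file": "api/users.ts",
  "line": 42,
  "remediation": "Use parameterized queries instead of string concatenation"
}
EOF

jq -r '"\(.severity | ascii_upcase): \(.cwe) in \(.file):\(.line)"' /tmp/finding.json
# → HIGH: CWE-89 in api/users.ts:42
```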
## Pipeline Behavior
- Triggers: Every pull request
- Runs: Code review + Security review (in parallel)
- Duration: ~15-60 seconds per review (depends on diff size)
- Fails if:
- Code review finds blockers
- Security review finds critical or high severity issues
- Output: Structured JSON in CI logs + markdown summary
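The pass/fail gate can be pictured with a small shell sketch. This is not the pipeline's actual code; it just mirrors the jq threshold checks referenced in the Troubleshooting section, run against a fabricated stats object:

```shell
# Sketch of the security-review gate: fail when critical or high > 0.
# The stats object is fabricated; the jq expressions mirror the
# thresholds documented in this file.
cat > /tmp/security-review.json <<'EOF'
{ "stats": { "critical": 1, "high": 2, "medium": 5, "low": 3 } }
EOF

CRITICAL=$(jq '.stats.critical // 0' /tmp/security-review.json)
HIGH=$(jq '.stats.high // 0' /tmp/security-review.json)

if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
  echo "FAIL: $CRITICAL critical, $HIGH high severity finding(s)"
  # the real pipeline would exit non-zero here to fail the CI step
else
  echo "PASS"
fi
```

With this sample input the gate prints `FAIL: 1 critical, 2 high severity finding(s)`.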
## Integration with Existing CI

The Codex review pipeline runs independently of the main `.woodpecker.yml`:

**Main pipeline** (`.woodpecker.yml`):
- Type checking (TypeScript)
- Linting (ESLint)
- Unit tests (Vitest)
- Integration tests (Playwright)
- Docker builds
**Codex pipeline** (`.woodpecker/codex-review.yml`):
- AI-powered code quality review
- AI-powered security review
Both run in parallel on PRs. A PR must pass BOTH to be mergeable.
## Troubleshooting

### "codex: command not found" locally

```bash
npm i -g @openai/codex
```
### "codex: command not found" in CI

Check the node image version in `.woodpecker/codex-review.yml` (currently `node:22-slim`).
### Pipeline passes but should fail

Check the failure thresholds in `.woodpecker/codex-review.yml`:

- Code review: `BLOCKERS=$(jq '.stats.blockers // 0')`
- Security review: `CRITICAL=$(jq '.stats.critical // 0')` and `HIGH=$(jq '.stats.high // 0')`
### Review takes too long

Large diffs (500+ lines) may take 2-3 minutes. Consider:

- Breaking up large PRs into smaller changes
- Using `--base` locally to preview the review before pushing
## Documentation

- Pipeline README: `.woodpecker/README.md`
- Global scripts README: `~/.claude/scripts/codex/README.md`
- Codex CLI docs: https://developers.openai.com/codex/cli/
## Next Steps

- ✅ Pipeline files created
- ⏳ Commit pipeline to repository
- ⏳ Add `codex_api_key` secret to Woodpecker
- ⏳ Test with a small PR
- ⏳ Monitor findings and adjust thresholds if needed
This setup reuses the global Codex review infrastructure from `~/.claude/scripts/codex/`, which is available across all repositories.