centralize guides and rails under mosaic with runtime compatibility links

README.md (16 lines changed)
@@ -14,6 +14,8 @@ bash ~/src/mosaic-bootstrap/install.sh
## What It Provides

- Shared standards document: `~/.mosaic/STANDARDS.md`
- Shared operational guides: `~/.mosaic/guides/`
- Shared quality rails/scripts: `~/.mosaic/rails/`
- Runtime adapter docs: `~/.mosaic/adapters/`
- Shared wrapper commands: `~/.mosaic/bin/`
- Canonical skills directory: `~/.mosaic/skills`
@@ -41,6 +43,20 @@ Manual commands:
~/.mosaic/bin/mosaic-sync-skills --link-only
```

## Runtime Compatibility Linking

The installer also links Claude-compatible paths back to the Mosaic canonical locations:

- `~/.claude/agent-guides` -> `~/.mosaic/guides`
- `~/.claude/scripts/{git,codex,bootstrap}` -> `~/.mosaic/rails/...`
- `~/.claude/templates` -> `~/.mosaic/templates/agent`

Run manually:

```bash
~/.mosaic/bin/mosaic-link-runtime-assets
```

Opt-out during install:

```bash

@@ -12,12 +12,14 @@ Master/slave model:
2. Load project-local `AGENTS.md` next.
3. Respect repository-specific tooling and workflows.
4. Use lifecycle scripts when available (`scripts/agent/*.sh`).
5. Use shared rails/guides from `~/.mosaic` as canonical references.

## Non-Negotiables

- Data files are authoritative; generated views are derived artifacts.
- Pull before edits when collaborating in shared repos.
- Run validation checks before claiming completion.
- Apply quality rails from `~/.mosaic/rails/` when relevant (review, QA, git workflow).
- Avoid hardcoded secrets and token leakage in remotes/commits.
- Do not perform destructive git/file actions without explicit instruction.
@@ -44,3 +46,8 @@ All runtime adapters should inject:
- project `AGENTS.md`

before task execution.

Runtime-compatible guides and rails are hosted at:

- `~/.mosaic/guides/`
- `~/.mosaic/rails/`
@@ -14,3 +14,4 @@ Use wrapper commands from `~/.mosaic/bin/` for lifecycle rituals.
## Migration Note

Project-local `.claude/commands/*.md` should call `scripts/agent/*.sh` so behavior stays runtime-neutral.
Guides and rails should resolve to `~/.mosaic/guides` and `~/.mosaic/rails` (linked into `~/.claude` for compatibility).
@@ -73,6 +73,11 @@ bash scripts/agent/critical.sh
bash scripts/agent/session-end.sh
```

## Shared Rails

- Quality and orchestration guides: `~/.mosaic/guides/`
- Shared automation rails: `~/.mosaic/rails/`

## Repo-Specific Notes

- Add project constraints and workflows here.

bin/mosaic-link-runtime-assets (new executable file, 65 lines)
@@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -euo pipefail

MOSAIC_HOME="${MOSAIC_HOME:-$HOME/.mosaic}"

backup_stamp="$(date +%Y%m%d%H%M%S)"

link_file() {
  local src="$1"
  local dst="$2"

  mkdir -p "$(dirname "$dst")"

  if [[ -L "$dst" ]]; then
    local cur
    cur="$(readlink -f "$dst" 2>/dev/null || true)"
    local want
    want="$(readlink -f "$src" 2>/dev/null || true)"
    if [[ -n "$cur" && -n "$want" && "$cur" == "$want" ]]; then
      return
    fi
    rm -f "$dst"
  elif [[ -e "$dst" ]]; then
    mv "$dst" "${dst}.mosaic-bak-${backup_stamp}"
  fi

  ln -s "$src" "$dst"
}

link_tree_files() {
  local src_root="$1"
  local dst_root="$2"

  [[ -d "$src_root" ]] || return

  while IFS= read -r -d '' src; do
    local rel
    rel="${src#$src_root/}"
    local dst="$dst_root/$rel"
    link_file "$src" "$dst"
  done < <(find "$src_root" -type f -print0)
}

# Claude compatibility layer: keep existing ~/.claude workflows functional,
# but source canonical rails from ~/.mosaic.
link_tree_files "$MOSAIC_HOME/guides" "$HOME/.claude/agent-guides"
link_tree_files "$MOSAIC_HOME/rails/git" "$HOME/.claude/scripts/git"
link_tree_files "$MOSAIC_HOME/rails/codex" "$HOME/.claude/scripts/codex"
link_tree_files "$MOSAIC_HOME/rails/bootstrap" "$HOME/.claude/scripts/bootstrap"
link_tree_files "$MOSAIC_HOME/templates/agent" "$HOME/.claude/templates"

for qa_script in \
  debug-hook.sh \
  qa-hook-handler.sh \
  qa-hook-stdin.sh \
  qa-hook-wrapper.sh \
  qa-queue-monitor.sh \
  remediation-hook-handler.sh; do
  src="$MOSAIC_HOME/rails/qa/$qa_script"
  [[ -f "$src" ]] || continue
  link_file "$src" "$HOME/.claude/scripts/$qa_script"
done

echo "[mosaic-link] Runtime compatibility assets linked"
echo "[mosaic-link] Canonical source: $MOSAIC_HOME"

guides/authentication.md (new file, 144 lines)
@@ -0,0 +1,144 @@
# Authentication & Authorization Guide

## Before Starting
1. Check assigned issue: `~/.mosaic/rails/git/issue-list.sh -a @me`
2. Review existing auth implementation in codebase
3. Review Vault secrets structure: `docs/vault-secrets-structure.md`

## Authentication Patterns

### JWT (JSON Web Tokens)
```
Vault Path: secret-{env}/backend-api/jwt/signing-key
Fields: key, algorithm, expiry_seconds
```

**Best Practices:**
- Use RS256 or ES256 (asymmetric) for distributed systems
- Use HS256 (symmetric) only for single-service auth
- Set reasonable expiry (15min-1hr for access tokens)
- Include minimal claims (sub, exp, iat, roles)
- Never store sensitive data in JWT payload
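The claims and expiry rules above can be sketched with a minimal HS256 encode/verify pair. This is an illustrative stdlib-only sketch, not a production implementation: a real service should use a vetted JWT library and the Vault-sourced signing key, and the function names here are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_jwt(claims: dict, key: bytes, expiry_seconds: int = 900) -> str:
    # Minimal claims only: caller supplies sub/roles, we add iat/exp
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {**claims, "iat": now, "exp": now + expiry_seconds}
    signing_input = _b64url(json.dumps(header).encode()) + "." + _b64url(json.dumps(payload).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_jwt(token: str, key: bytes) -> dict:
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    # Constant-time comparison to avoid signature timing oracles
    if not hmac.compare_digest(_b64url(expected), sig_b64):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

Note the default 900-second (15 min) expiry matching the guidance above; refresh tokens would extend sessions separately.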
### Session-Based
```
Vault Path: secret-{env}/{service}/session/secret
Fields: secret, cookie_name, max_age
```

**Best Practices:**
- Use secure, httpOnly, sameSite cookies
- Regenerate session ID on privilege change
- Implement session timeout
- Store sessions server-side (Redis/database)
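A minimal sketch of the server-side pattern, with timeout and ID regeneration. The class and method names are illustrative; a real deployment would back this with Redis or a database as noted above.

```python
from __future__ import annotations

import secrets
import time

class SessionStore:
    """In-memory stand-in for a server-side session store."""

    def __init__(self, max_age: int = 1800):
        self.max_age = max_age
        self._sessions: dict = {}

    def create(self, user_id: str) -> str:
        sid = secrets.token_urlsafe(32)  # unguessable session ID
        self._sessions[sid] = {"user_id": user_id, "expires": time.time() + self.max_age}
        return sid

    def get(self, sid: str):
        session = self._sessions.get(sid)
        if session is None or session["expires"] < time.time():
            self._sessions.pop(sid, None)  # lazy expiry on access
            return None
        return session

    def regenerate(self, sid: str):
        """Issue a fresh session ID on privilege change (prevents fixation)."""
        session = self._sessions.pop(sid, None)
        if session is None:
            return None
        new_sid = secrets.token_urlsafe(32)
        self._sessions[new_sid] = session
        return new_sid
```

The cookie carrying the session ID should still be set with `Secure`, `HttpOnly`, and `SameSite` attributes.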
### OAuth2/OIDC
```
Vault Paths:
- secret-{env}/{service}/oauth/{provider}/client_id
- secret-{env}/{service}/oauth/{provider}/client_secret
```

**Best Practices:**
- Use PKCE for public clients
- Validate state parameter
- Verify token signatures
- Check issuer and audience claims
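The PKCE and state values above can be generated with the standard library alone; this is a sketch of the RFC 7636 S256 method (function names are illustrative):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def make_state() -> str:
    """Opaque state value; store it server-side and compare on the callback."""
    return secrets.token_urlsafe(24)
```

The client sends `code_challenge` on the authorize request and the `code_verifier` on the token exchange; the provider recomputes the hash to bind the two.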
## Authorization Patterns

### Role-Based Access Control (RBAC)
```python
# Example middleware
def require_role(roles: list):
    def decorator(handler):
        def wrapper(request):
            user_roles = get_user_roles(request.user_id)
            if not any(role in user_roles for role in roles):
                raise ForbiddenError()
            return handler(request)
        return wrapper
    return decorator

@require_role(['admin', 'moderator'])
def delete_user(request):
    pass
```

### Permission-Based
```python
# Check specific permissions
def check_permission(user_id, resource, action):
    permissions = get_user_permissions(user_id)
    return f"{resource}:{action}" in permissions
```
## Security Requirements

### Password Handling
- Use bcrypt, scrypt, or Argon2 for hashing
- Minimum 12 character passwords
- Check against breached password lists
- Implement account lockout after failed attempts
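Of the three recommended algorithms, scrypt is available in the standard library; a minimal hash/verify sketch (per-user random salt stored alongside the derived key, function names illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a key with scrypt; prepend the random salt for storage."""
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + key

def verify_password(password: str, stored: bytes) -> bool:
    salt, key = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison
    return hmac.compare_digest(candidate, key)
```

Tune `n`, `r`, and `p` to the hardware budget; bcrypt and Argon2 follow the same salt-and-verify shape via third-party packages.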
### Token Security
- Rotate secrets regularly
- Implement token revocation
- Use short-lived access tokens with refresh tokens
- Store refresh tokens securely (httpOnly cookies or encrypted storage)
### Multi-Factor Authentication
- Support TOTP (Google Authenticator compatible)
- Consider WebAuthn for passwordless
- Require MFA for sensitive operations
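TOTP is small enough to sketch from the RFCs directly (HOTP per RFC 4226, TOTP per RFC 6238). This stdlib-only sketch is for understanding the mechanism; production code should use a maintained library:

```python
from __future__ import annotations

import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, step: int = 30, at: float | None = None) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time step."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))
```

Authenticator apps compute the same value from the shared secret, so the server only needs to compare codes within a small window of time steps.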
## Testing Authentication

### Test Cases Required
```python
class TestAuthentication:
    def test_login_success_returns_token(self):
        pass
    def test_login_failure_returns_401(self):
        pass
    def test_invalid_token_returns_401(self):
        pass
    def test_expired_token_returns_401(self):
        pass
    def test_missing_token_returns_401(self):
        pass
    def test_insufficient_permissions_returns_403(self):
        pass
    def test_token_refresh_works(self):
        pass
    def test_logout_invalidates_token(self):
        pass
```

## Common Vulnerabilities to Avoid

1. **Broken Authentication**
   - Weak password requirements
   - Missing brute-force protection
   - Session fixation

2. **Broken Access Control**
   - Missing authorization checks
   - IDOR (Insecure Direct Object Reference)
   - Privilege escalation

3. **Security Misconfiguration**
   - Default credentials
   - Verbose error messages
   - Missing security headers

## Commit Format
```
feat(#89): Implement JWT authentication

- Add /auth/login and /auth/refresh endpoints
- Implement token validation middleware
- Configure 15min access token expiry

Fixes #89
```

guides/backend.md (new file, 111 lines)
@@ -0,0 +1,111 @@
# Backend Development Guide

## Before Starting
1. Check assigned issue: `~/.mosaic/rails/git/issue-list.sh -a @me`
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Review API contracts and database schema

## Development Standards

### API Design
- Follow RESTful conventions (or GraphQL patterns if applicable)
- Use consistent endpoint naming: `/api/v1/resource-name`
- Return appropriate HTTP status codes
- Include pagination for list endpoints
- Document all endpoints (OpenAPI/Swagger preferred)

### Database
- Write migrations for schema changes
- Use parameterized queries (prevent SQL injection)
- Index frequently queried columns
- Document relationships and constraints
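The parameterized-query rule can be shown with a tiny sqlite3 sketch (table and function names are illustrative; the same placeholder pattern applies to any driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

def find_user_by_email(email: str):
    # Placeholder (?) binding: the driver treats the value strictly as data,
    # so input like "x' OR '1'='1" can never change the query structure.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Never build the SQL string with f-strings or concatenation from user input; that is exactly the injection vector the bullet warns about.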
### Error Handling
- Return structured error responses
- Log errors with context (request ID, user ID if applicable)
- Never expose internal errors to clients
- Use appropriate error codes

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "User-friendly message",
    "details": []
  }
}
```
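One way to produce that shape while keeping internals hidden is a small exception-to-response mapper. A sketch under assumed names (`AppError`, `error_response` are hypothetical, not a project API):

```python
import json

class AppError(Exception):
    status = 500
    code = "INTERNAL_ERROR"

class ValidationError(AppError):
    status = 422
    code = "VALIDATION_ERROR"

def error_response(exc: Exception):
    """Map an exception to (status, body) without leaking internals."""
    if isinstance(exc, AppError):
        status, code, message = exc.status, exc.code, str(exc)
    else:
        # Unknown errors: log the real exception elsewhere, return a generic body
        status, code, message = 500, "INTERNAL_ERROR", "Internal server error"
    body = {"error": {"code": code, "message": message, "details": []}}
    return status, json.dumps(body)
```

Known, deliberate error types surface their own message; everything else collapses to a generic 500 so stack traces and connection strings never reach clients.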
### Security
- Validate all input at API boundaries
- Implement rate limiting on public endpoints
- Use secrets from Vault (see `docs/vault-secrets-structure.md`)
- Never log sensitive data (passwords, tokens, PII)
- Follow OWASP guidelines
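The rate-limiting bullet is commonly implemented as a token bucket per client; a minimal sketch (class name illustrative, clock injectable so the behavior is testable):

```python
import time

class TokenBucket:
    """Simple per-client token bucket rate limiter."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.clock = clock
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a real service one bucket is kept per client key (IP, API key) in shared storage such as Redis so limits hold across instances.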
### Authentication/Authorization
- Use project's established auth pattern
- Validate tokens on every request
- Check permissions before operations
- See `~/.mosaic/guides/authentication.md` for details

## Testing Requirements (TDD)
1. Write tests BEFORE implementation
2. Minimum 85% coverage
3. Test categories:
   - Unit tests for business logic
   - Integration tests for API endpoints
   - Database tests with transactions/rollback

### Test Patterns
```python
# API test example structure
class TestResourceEndpoint:
    def test_create_returns_201(self):
        pass
    def test_create_validates_input(self):
        pass
    def test_get_returns_404_for_missing(self):
        pass
    def test_requires_authentication(self):
        pass
```

## Code Style
- Follow Google Style Guide for your language
- **TypeScript: Follow `~/.mosaic/guides/typescript.md` — MANDATORY**
- Use linter/formatter from project configuration
- Keep functions focused and small
- Document complex business logic

### TypeScript Quick Rules (see typescript.md for full guide)
- **NO `any`** — define explicit types always
- **NO lazy `unknown`** — only for error catches and external data with validation
- **Explicit return types** on all exported functions
- **Explicit parameter types** always
- **Interface for DTOs** — never inline object types
- **Typed errors** — use custom error classes

## Performance
- Use database connection pooling
- Implement caching where appropriate
- Profile slow endpoints
- Use async operations for I/O
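For the caching bullet, a small TTL memoization decorator illustrates the idea (decorator name illustrative; a shared cache like Redis is the usual choice across processes):

```python
import functools
import time

def ttl_cache(ttl: float, clock=time.monotonic):
    """Memoize a function's results per argument tuple for `ttl` seconds."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = cache.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]  # fresh cached value
            value = fn(*args)
            cache[args] = (value, now)
            return value
        return wrapper
    return decorator
```

Pick the TTL from how stale the data may be; profile first so caching targets the endpoints that are actually slow.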
## Commit Format
```
feat(#45): Add user registration endpoint

- POST /api/v1/users for registration
- Email validation and uniqueness check
- Password hashing with bcrypt

Fixes #45
```

## Before Completing
1. Run full test suite
2. Verify migrations work (up and down)
3. Test API with curl/httpie
4. Update scratchpad with completion notes
5. Reference issue in commit

guides/bootstrap.md (new file, 338 lines)
@@ -0,0 +1,338 @@
# Project Bootstrap Guide

> Load this guide when setting up a new project for AI-assisted development.

## Overview

This guide covers how to bootstrap a project so AI agents (Claude, Codex, etc.) can work on it effectively. Proper bootstrapping ensures:

1. Agents understand the project structure and conventions
2. Orchestration works correctly with quality gates
3. Independent code review and security review are configured
4. Issue tracking is consistent across projects

## Quick Start

```bash
# Automated bootstrap (recommended)
~/.mosaic/rails/bootstrap/init-project.sh \
  --name "my-project" \
  --type "nestjs-nextjs" \
  --repo "https://git.mosaicstack.dev/owner/repo"

# Or manually using templates
export PROJECT_NAME="My Project"
export PROJECT_DESCRIPTION="What this project does"
export TASK_PREFIX="MP"
envsubst < ~/.mosaic/templates/agent/CLAUDE.md.template > CLAUDE.md
envsubst < ~/.mosaic/templates/agent/AGENTS.md.template > AGENTS.md
```

---

## Step 1: Detect Project Type

Check what files exist in the project root to determine the type:

| File Present | Project Type | Template |
|---|---|---|
| `package.json` + `pnpm-workspace.yaml` + NestJS+Next.js | NestJS + Next.js Monorepo | `projects/nestjs-nextjs/` |
| `pyproject.toml` + `manage.py` | Django | `projects/django/` |
| `pyproject.toml` (no Django) | Python (generic) | Generic template |
| `package.json` (no monorepo) | Node.js (generic) | Generic template |
| Other | Generic | Generic template |

```bash
# Auto-detect project type
detect_project_type() {
  if [[ -f "pnpm-workspace.yaml" ]] && [[ -f "turbo.json" ]]; then
    # Check for NestJS + Next.js
    if grep -q "nestjs" package.json 2>/dev/null && grep -q "next" package.json 2>/dev/null; then
      echo "nestjs-nextjs"
      return
    fi
  fi
  if [[ -f "manage.py" ]] && [[ -f "pyproject.toml" ]]; then
    echo "django"
    return
  fi
  if [[ -f "pyproject.toml" ]]; then
    echo "python"
    return
  fi
  if [[ -f "package.json" ]]; then
    echo "nodejs"
    return
  fi
  echo "generic"
}
```

---

## Step 2: Create CLAUDE.md

The `CLAUDE.md` file is the primary configuration file for AI agents. It tells them everything they need to know about the project.

### Using a Tech-Stack Template

```bash
# Set variables
export PROJECT_NAME="My Project"
export PROJECT_DESCRIPTION="Multi-tenant SaaS platform"
export PROJECT_DIR="my-project"
export REPO_URL="https://git.mosaicstack.dev/owner/repo"
export TASK_PREFIX="MP"

# Use tech-stack-specific template if available
TYPE=$(detect_project_type)
TEMPLATE_DIR="$HOME/.mosaic/templates/agent/projects/$TYPE"

if [[ -d "$TEMPLATE_DIR" ]]; then
  envsubst < "$TEMPLATE_DIR/CLAUDE.md.template" > CLAUDE.md
else
  envsubst < "$HOME/.mosaic/templates/agent/CLAUDE.md.template" > CLAUDE.md
fi
```

### Using the Generic Template

```bash
# Set all required variables
export PROJECT_NAME="My Project"
export PROJECT_DESCRIPTION="What this project does"
export REPO_URL="https://git.mosaicstack.dev/owner/repo"
export PROJECT_DIR="my-project"
export SOURCE_DIR="src"
export CONFIG_FILES="pyproject.toml / package.json"
export FRONTEND_STACK="N/A"
export BACKEND_STACK="Python / FastAPI"
export DATABASE_STACK="PostgreSQL"
export TESTING_STACK="pytest"
export DEPLOYMENT_STACK="Docker"
export BUILD_COMMAND="pip install -e ."
export TEST_COMMAND="pytest tests/"
export LINT_COMMAND="ruff check ."
export TYPECHECK_COMMAND="mypy ."
export QUALITY_GATES="ruff check . && mypy . && pytest tests/"

envsubst < ~/.mosaic/templates/agent/CLAUDE.md.template > CLAUDE.md
```

### Required Sections

Every CLAUDE.md should contain:

1. **Project description** — One-line summary
2. **Conditional documentation loading** — Table of guides
3. **Technology stack** — What's used
4. **Repository structure** — Directory tree
5. **Development workflow** — Branch strategy, build, test
6. **Quality gates** — Commands that must pass
7. **Issue tracking** — Commit format, labels
8. **Code review** — Codex and fallback review commands
9. **Secrets management** — How secrets are handled

---

## Step 3: Create AGENTS.md

The `AGENTS.md` file contains agent-specific patterns, gotchas, and orchestrator integration details.

```bash
TYPE=$(detect_project_type)
TEMPLATE_DIR="$HOME/.mosaic/templates/agent/projects/$TYPE"

if [[ -d "$TEMPLATE_DIR" ]]; then
  envsubst < "$TEMPLATE_DIR/AGENTS.md.template" > AGENTS.md
else
  envsubst < "$HOME/.mosaic/templates/agent/AGENTS.md.template" > AGENTS.md
fi
```

### Living Document

AGENTS.md is a **living document**. Agents should update it when they discover:
- Reusable patterns (how similar features are built)
- Non-obvious requirements (e.g., "frontend env vars need NEXT_PUBLIC_ prefix")
- Common gotchas (e.g., "run migrations after schema changes")
- Testing approaches specific to this project

---

## Step 4: Create Directory Structure

```bash
# Create standard directories
mkdir -p docs/scratchpads
mkdir -p docs/templates
mkdir -p docs/reports
```

---

## Step 5: Initialize Repository Labels & Milestones

```bash
# Use the init script
~/.mosaic/rails/bootstrap/init-repo-labels.sh

# Or manually create standard labels
~/.mosaic/rails/git/issue-create.sh  # (labels are created on first use)
```

### Standard Labels

| Label | Color | Purpose |
|-------|-------|---------|
| `epic` | `#3E4B9E` | Large feature spanning multiple issues |
| `feature` | `#0E8A16` | New functionality |
| `bug` | `#D73A4A` | Defect fix |
| `task` | `#0075CA` | General work item |
| `documentation` | `#0075CA` | Documentation updates |
| `security` | `#B60205` | Security-related |
| `breaking` | `#D93F0B` | Breaking change |

### Initial Milestone

Create the first milestone for MVP:

```bash
~/.mosaic/rails/git/milestone-create.sh -t "0.1.0" -d "MVP - Minimum Viable Product"
```

---

## Step 6: Set Up CI/CD Review Pipeline

### Woodpecker CI

```bash
# Copy Codex review pipeline
mkdir -p .woodpecker/schemas
cp ~/.mosaic/rails/codex/woodpecker/codex-review.yml .woodpecker/
cp ~/.mosaic/rails/codex/schemas/*.json .woodpecker/schemas/

# Add codex_api_key secret to Woodpecker CI dashboard
```

### GitHub Actions

For GitHub repos, use the official Codex GitHub Action instead:
```yaml
# .github/workflows/codex-review.yml
uses: openai/codex-action@v1
```

---

## Step 7: Verify Bootstrap

After bootstrapping, verify everything works:

```bash
# Check files exist
ls CLAUDE.md AGENTS.md docs/scratchpads/

# Verify CLAUDE.md has required sections
grep -c "Quality Gates" CLAUDE.md
grep -c "Technology Stack" CLAUDE.md
grep -c "Code Review" CLAUDE.md

# Verify quality gates run
eval "$(grep -A1 'Quality Gates' CLAUDE.md | tail -1)"

# Test Codex review (if configured)
~/.mosaic/rails/codex/codex-code-review.sh --help
```

---

## Available Templates

### Generic Templates

| Template | Path | Purpose |
|----------|------|---------|
| `CLAUDE.md.template` | `~/.mosaic/templates/agent/` | Generic project CLAUDE.md |
| `AGENTS.md.template` | `~/.mosaic/templates/agent/` | Generic agent context |

### Tech-Stack Templates

| Stack | Path | Includes |
|-------|------|----------|
| NestJS + Next.js | `~/.mosaic/templates/agent/projects/nestjs-nextjs/` | CLAUDE.md, AGENTS.md |
| Django | `~/.mosaic/templates/agent/projects/django/` | CLAUDE.md, AGENTS.md |

### Orchestrator Templates

| Template | Path | Purpose |
|----------|------|---------|
| `tasks.md.template` | `~/src/jarvis-brain/docs/templates/orchestrator/` | Task tracking |
| `orchestrator-learnings.json.template` | `~/src/jarvis-brain/docs/templates/orchestrator/` | Variance tracking |
| `phase-issue-body.md.template` | `~/src/jarvis-brain/docs/templates/orchestrator/` | Gitea issue body |
| `scratchpad.md.template` | `~/src/jarvis-brain/docs/templates/` | Per-task working doc |

### Variables Reference

| Variable | Description | Example |
|----------|-------------|---------|
| `${PROJECT_NAME}` | Human-readable project name | "Mosaic Stack" |
| `${PROJECT_DESCRIPTION}` | One-line description | "Multi-tenant platform" |
| `${PROJECT_DIR}` | Directory name | "mosaic-stack" |
| `${PROJECT_SLUG}` | Python package slug | "mosaic_stack" |
| `${REPO_URL}` | Git remote URL | "https://git.mosaicstack.dev/mosaic/stack" |
| `${TASK_PREFIX}` | Orchestrator task prefix | "MS" |
| `${SOURCE_DIR}` | Source code directory | "src" or "apps" |
| `${QUALITY_GATES}` | Quality gate commands | "pnpm typecheck && pnpm lint && pnpm test" |
| `${BUILD_COMMAND}` | Build command | "pnpm build" |
| `${TEST_COMMAND}` | Test command | "pnpm test" |
| `${LINT_COMMAND}` | Lint command | "pnpm lint" |
| `${TYPECHECK_COMMAND}` | Type check command | "pnpm typecheck" |
| `${FRONTEND_STACK}` | Frontend technologies | "Next.js + React" |
| `${BACKEND_STACK}` | Backend technologies | "NestJS + Prisma" |
| `${DATABASE_STACK}` | Database technologies | "PostgreSQL" |
| `${TESTING_STACK}` | Testing technologies | "Vitest + Playwright" |
| `${DEPLOYMENT_STACK}` | Deployment technologies | "Docker" |
| `${CONFIG_FILES}` | Key config files | "package.json, tsconfig.json" |

---

## Bootstrap Scripts

### init-project.sh

Full project bootstrap with interactive and flag-based modes:

```bash
~/.mosaic/rails/bootstrap/init-project.sh \
  --name "My Project" \
  --type "nestjs-nextjs" \
  --repo "https://git.mosaicstack.dev/owner/repo" \
  --prefix "MP" \
  --description "Multi-tenant platform"
```

### init-repo-labels.sh

Initialize standard labels and MVP milestone:

```bash
~/.mosaic/rails/bootstrap/init-repo-labels.sh
```

---

## Checklist

After bootstrapping, verify:

- [ ] `CLAUDE.md` exists with all required sections
- [ ] `AGENTS.md` exists with patterns and quality gates
- [ ] `docs/scratchpads/` directory exists
- [ ] Git labels created (epic, feature, bug, task, etc.)
- [ ] Initial milestone created (0.1.0 MVP)
- [ ] Quality gates run successfully
- [ ] `.env.example` exists (if project uses env vars)
- [ ] CI/CD pipeline configured (if using Woodpecker/GitHub Actions)
- [ ] Codex review scripts accessible (`~/.mosaic/rails/codex/`)

guides/ci-cd-pipelines.md (new file, 904 lines)
@@ -0,0 +1,904 @@
|
# CI/CD Pipeline Guide
|
||||||
|
|
||||||
|
> **Load this guide when:** Adding Docker build/push steps, configuring Woodpecker CI pipelines, publishing packages to registries, or implementing CI/CD for a new project.
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This guide covers the canonical CI/CD pattern used across projects. The pipeline runs in Woodpecker CI and follows this flow:
|
||||||
|
|
||||||
|
```
|
||||||
|
GIT PUSH
|
||||||
|
↓
|
||||||
|
QUALITY GATES (lint, typecheck, test, audit)
|
||||||
|
↓ all pass
|
||||||
|
BUILD (compile all packages)
|
||||||
|
↓ only on main/develop/tags
|
||||||
|
DOCKER BUILD & PUSH (Kaniko → Gitea Container Registry)
|
||||||
|
↓ all images pushed
|
||||||
|
PACKAGE LINKING (associate images with repository in Gitea)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Reference Implementations

### Split Pipelines (Preferred for Monorepos)

**Mosaic Telemetry** (`~/src/mosaic-telemetry-monorepo/.woodpecker/`) is the canonical example of **split per-package pipelines** with path filtering, full security chain (source + container scanning), and efficient CI resource usage.

**Key features:**

- One YAML per package in the `.woodpecker/` directory
- Path filtering: only the affected package's pipeline runs on push
- Security chain: source scanning (bandit/npm audit) + dependency audit (pip-audit) + container scanning (Trivy)
- Docker build gates on ALL quality steps

**Always use this pattern for monorepos.** It saves CI minutes and isolates failures.

### Single Pipeline (Legacy/Simple Projects)

**Mosaic Stack** (`~/src/mosaic-stack/.woodpecker/build.yml`) uses a single pipeline that builds everything on every push. This works but wastes CI resources on large monorepos. **Mosaic Stack is scheduled for migration to split pipelines.**

Always read the telemetry pipelines first when implementing a new pipeline.
## Infrastructure Instances

| Project | Gitea | Woodpecker | Registry |
|---------|-------|------------|----------|
| Mosaic Stack | `git.mosaicstack.dev` | `ci.mosaicstack.dev` | `git.mosaicstack.dev` |
| U-Connect | `git.uscllc.com` | `woodpecker.uscllc.net` | `git.uscllc.com` |

The patterns are identical — only the hostnames and org/repo names differ.
## Woodpecker Pipeline Structure

### YAML Anchors (DRY)

Define reusable values at the top of `.woodpecker.yml`:

```yaml
variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  # For pnpm projects, use:
  # - &install_deps |
  #     corepack enable
  #     pnpm install --frozen-lockfile
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"REGISTRY_HOST\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json
```

Replace `REGISTRY_HOST` with the actual Gitea hostname (e.g., `git.uscllc.com`).
### Step Dependencies

Woodpecker runs steps in parallel by default. Use `depends_on` to create the dependency graph:

```yaml
steps:
  install:
    image: *node_image
    commands:
      - *install_deps

  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on:
      - install

  typecheck:
    image: *node_image
    commands:
      - npm run type-check
    depends_on:
      - install

  test:
    image: *node_image
    commands:
      - npm run test
    depends_on:
      - install

  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on:
      - lint
      - typecheck
      - test
```
### Conditional Execution

Use `when` clauses to limit expensive steps (Docker builds) to relevant branches:

```yaml
# Top-level: run quality gates on everything
when:
  - event: [push, pull_request, manual]

# Per-step: only build Docker images on main/develop/tags
docker-build-api:
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
```
## Docker Build & Push with Kaniko

### Why Kaniko

Kaniko builds container images without requiring a Docker daemon. This is the standard approach in Woodpecker CI because:

- No privileged mode needed
- No Docker-in-Docker security concerns
- Multi-destination tagging in a single build
- Works in any container runtime
### Kaniko Step Template

```yaml
docker-build-SERVICE:
  image: gcr.io/kaniko-project/executor:debug
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
    CI_COMMIT_TAG: ${CI_COMMIT_TAG}
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
  commands:
    - *kaniko_setup
    - |
      DESTINATIONS="--destination REGISTRY/ORG/IMAGE_NAME:${CI_COMMIT_SHA:0:8}"
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:latest"
      elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:dev"
      fi
      if [ -n "$CI_COMMIT_TAG" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:$CI_COMMIT_TAG"
      fi
      /kaniko/executor --context . --dockerfile PATH/TO/Dockerfile $DESTINATIONS
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - build
```

**Replace these placeholders:**

| Placeholder | Example (Mosaic) | Example (U-Connect) |
|-------------|-------------------|----------------------|
| `REGISTRY` | `git.mosaicstack.dev` | `git.uscllc.com` |
| `ORG` | `mosaic` | `usc` |
| `IMAGE_NAME` | `stack-api` | `uconnect-backend-api` |
| `PATH/TO/Dockerfile` | `apps/api/Dockerfile` | `src/backend-api/Dockerfile` |
### Image Tagging Strategy

Every build produces multiple tags:

| Condition | Tag | Purpose |
|-----------|-----|---------|
| Always | `${CI_COMMIT_SHA:0:8}` | Immutable reference to exact commit |
| `main` branch | `latest` | Current production release |
| `develop` branch | `dev` | Current development build |
| Git tag (e.g., `v1.0.0`) | `v1.0.0` | Semantic version release |
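The tag matrix above can be sketched as a small shell helper (a sketch of the same branching the Kaniko step performs; the registry/image name and `CI_COMMIT_*` values below are dummy stand-ins):

```shell
# Sketch: compute Kaniko --destination flags from CI metadata.
# Mirrors the tag table; values below are dummies, not real Woodpecker vars.
compute_destinations() {
  IMAGE="$1"
  SHORT_SHA=$(printf '%s' "$CI_COMMIT_SHA" | cut -c1-8)
  DESTS="--destination $IMAGE:$SHORT_SHA"                # always: immutable commit tag
  if [ "$CI_COMMIT_BRANCH" = "main" ]; then
    DESTS="$DESTS --destination $IMAGE:latest"           # main -> latest
  elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
    DESTS="$DESTS --destination $IMAGE:dev"              # develop -> dev
  fi
  if [ -n "$CI_COMMIT_TAG" ]; then
    DESTS="$DESTS --destination $IMAGE:$CI_COMMIT_TAG"   # git tag -> version tag
  fi
  printf '%s\n' "$DESTS"
}

CI_COMMIT_SHA="abcdef1234567890"
CI_COMMIT_BRANCH="main"
CI_COMMIT_TAG=""
compute_destinations "git.example.com/org/api"
# → --destination git.example.com/org/api:abcdef12 --destination git.example.com/org/api:latest
```

A tagged release on `main` would get all three tags at once, since the branch and tag conditions are independent.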
### Kaniko Options

Common flags for `/kaniko/executor`:

| Flag | Purpose |
|------|---------|
| `--context .` | Build context directory |
| `--dockerfile path/Dockerfile` | Dockerfile location |
| `--destination registry/org/image:tag` | Push target (repeatable) |
| `--build-arg KEY=VALUE` | Pass build arguments |
| `--cache=true` | Enable layer caching |
| `--cache-repo registry/org/image-cache` | Cache storage location |
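Assembled together, a full invocation might look like the following (a sketch; the image and cache-repo names are placeholders, and the cache flags are appended only when a cache repo is configured):

```shell
# Sketch: build the executor argument string; enable layer caching only
# when a cache repo is configured. All names here are placeholders.
IMAGE="git.example.com/org/api"
CACHE_REPO="git.example.com/org/api-cache"

args="--context . --dockerfile apps/api/Dockerfile --destination $IMAGE:latest"
if [ -n "$CACHE_REPO" ]; then
  args="$args --cache=true --cache-repo $CACHE_REPO"
fi
# In CI this would run: /kaniko/executor $args
echo "/kaniko/executor $args"
```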
### Build Arguments

Pass environment-specific values at build time:

```bash
/kaniko/executor --context . --dockerfile apps/web/Dockerfile \
  --build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
  $DESTINATIONS
```
## Gitea Container Registry

### How It Works

Gitea has a built-in container registry. When you push an image to `git.example.com/org/image:tag`, Gitea stores it and makes it available in the Packages section.

### Authentication

Kaniko authenticates via a Docker config file created at pipeline start:

```json
{
  "auths": {
    "git.example.com": {
      "username": "GITEA_USER",
      "password": "GITEA_TOKEN"
    }
  }
}
```

The token must have `package:write` scope. Generate it at: `https://GITEA_HOST/user/settings/applications`
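In practice the config file is usually generated with `printf` so the quoting stays correct; a minimal sketch (dummy credentials, and `/tmp` instead of `/kaniko/.docker` so it runs anywhere):

```shell
# Sketch: write the registry auth file Kaniko reads.
# Dummy credentials; a real pipeline takes these from Woodpecker secrets
# and writes to /kaniko/.docker/config.json instead of /tmp.
REGISTRY="git.example.com"
GITEA_USER="ci-bot"
GITEA_TOKEN="dummy-token"

mkdir -p /tmp/kaniko-docker
printf '{"auths":{"%s":{"username":"%s","password":"%s"}}}\n' \
  "$REGISTRY" "$GITEA_USER" "$GITEA_TOKEN" > /tmp/kaniko-docker/config.json
cat /tmp/kaniko-docker/config.json
# → {"auths":{"git.example.com":{"username":"ci-bot","password":"dummy-token"}}}
```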
### Pulling Images

After pushing, images are available at:

```bash
docker pull git.example.com/org/image:tag
```

In `docker-compose.yml`:

```yaml
services:
  api:
    image: git.example.com/org/image:${IMAGE_TAG:-dev}
```
## Package Linking

After pushing images to the Gitea registry, link them to the source repository so they appear on the repository's Packages tab.

### Gitea Package Linking API

```
POST /api/v1/packages/{owner}/{type}/{name}/-/link/{repo}
```

| Parameter | Value |
|-----------|-------|
| `owner` | Organization name (e.g., `mosaic`, `usc`) |
| `type` | `container` |
| `name` | Image name (e.g., `stack-api`) |
| `repo` | Repository name (e.g., `stack`, `uconnect`) |
### Link Step Template

```yaml
link-packages:
  image: alpine:3
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - apk add --no-cache curl
    - echo "Waiting 10 seconds for packages to be indexed..."
    - sleep 10
    - |
      set -e
      link_package() {
        PKG="$$1"
        echo "Linking $$PKG..."

        for attempt in 1 2 3; do
          STATUS=$$(curl -s -o /tmp/link-response.txt -w "%{http_code}" -X POST \
            -H "Authorization: token $$GITEA_TOKEN" \
            "https://GITEA_HOST/api/v1/packages/ORG/container/$$PKG/-/link/REPO")

          if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ]; then
            echo "  Linked $$PKG"
            return 0
          elif [ "$$STATUS" = "400" ]; then
            echo "  $$PKG already linked"
            return 0
          elif [ "$$STATUS" = "404" ] && [ $$attempt -lt 3 ]; then
            echo "  $$PKG not found yet, retrying in 5s (attempt $$attempt/3)..."
            sleep 5
          else
            echo "  FAILED: $$PKG status $$STATUS"
            cat /tmp/link-response.txt
            return 1
          fi
        done
      }

      link_package "image-name-1"
      link_package "image-name-2"
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - docker-build-image-1
    - docker-build-image-2
```

**Replace:** `GITEA_HOST`, `ORG`, `REPO`, and the `link_package` calls with actual image names.

**Note on `$$`:** Woodpecker uses `$$` to escape `$` in shell commands within YAML. Use `$$` for shell variables and `${CI_*}` (single `$`) for Woodpecker CI variables.
### Status Codes

| Code | Meaning | Action |
|------|---------|--------|
| 201 | Created | Success |
| 204 | No content | Success |
| 400 | Bad request | Already linked (OK) |
| 404 | Not found | Retry — package may not be indexed yet |

### Known Issue

The Gitea package linking API (added in Gitea 1.24.0) can return 404 for recently pushed packages. The retry logic with 5-second delays handles this. If linking still fails, the packages remain usable — they just won't appear on the repository's Packages tab. They can be linked manually via the Gitea web UI.
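The retry decision encoded in the table can be sketched as a tiny shell helper (the same mapping the link step applies inline):

```shell
# Sketch: map a Gitea link-API status code to an action,
# mirroring the status-code table above.
handle_status() {
  case "$1" in
    201|204) echo "linked" ;;          # success
    400)     echo "already-linked" ;;  # treated as success
    404)     echo "retry" ;;           # package not indexed yet
    *)       echo "fail" ;;            # anything else is a real error
  esac
}

handle_status 201   # → linked
handle_status 404   # → retry
```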
## Woodpecker Secrets

### Required Secrets

Configure these in the Woodpecker UI (Settings > Secrets) or via CLI:

| Secret Name | Value | Scope |
|-------------|-------|-------|
| `gitea_username` | Gitea username or service account | `push`, `manual`, `tag` |
| `gitea_token` | Gitea token with `package:write` scope | `push`, `manual`, `tag` |

### Setting Secrets via CLI

```bash
# Woodpecker CLI
woodpecker secret add ORG/REPO --name gitea_username --value "USERNAME"
woodpecker secret add ORG/REPO --name gitea_token --value "TOKEN"
```

### Security Rules

- Never hardcode tokens in pipeline YAML
- Use `from_secret` for all credentials
- Limit secret event scope (don't expose on `pull_request` from forks)
- Use dedicated service accounts, not personal tokens
- Rotate tokens periodically
## npm Package Publishing

For projects with publishable npm packages (e.g., shared libraries, design systems).

### Publishing to Gitea npm Registry

Gitea includes a built-in npm registry at `https://GITEA_HOST/api/packages/ORG/npm/`.

**Pipeline step:**

```yaml
publish-packages:
  image: *node_image
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - |
      echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
      echo "@SCOPE:registry=https://GITEA_HOST/api/packages/ORG/npm/" >> .npmrc
    - npm publish -w @SCOPE/package-name
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - build
```

**Replace:** `GITEA_HOST`, `ORG`, `SCOPE`, `package-name`.
### Why Gitea npm (not Verdaccio)

Gitea's built-in npm registry eliminates the need for a separate Verdaccio instance. Benefits:

- **Same auth** — a Gitea token with `package:write` scope works for git, containers, AND npm
- **No extra service** — no Verdaccio container, no OAuth/Authentik integration, no separate compose stack
- **Same UI** — packages appear alongside container images in Gitea's Packages tab
- **Same secrets** — `gitea_token` in Woodpecker handles both Docker push and npm publish

If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), migrate to Gitea npm. See the migration checklist below.
### Versioning

Only publish when the version in `package.json` has changed. Add a version check (note the `$$` escaping for shell variables, per the Woodpecker note above):

```yaml
commands:
  - |
    CURRENT=$$(node -p "require('./src/PACKAGE/package.json').version")
    PUBLISHED=$$(npm view @SCOPE/PACKAGE version 2>/dev/null || echo "0.0.0")
    if [ "$$CURRENT" = "$$PUBLISHED" ]; then
      echo "Version $$CURRENT already published, skipping"
      exit 0
    fi
    echo "Publishing $$CURRENT (was $$PUBLISHED)"
    npm publish -w @SCOPE/PACKAGE
```
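Stripped of the npm calls, the skip/publish decision is just a string comparison; a standalone sketch (in CI, `PUBLISHED` would come from `npm view` rather than a parameter):

```shell
# Sketch: decide whether to publish, given local and published versions.
# In the pipeline, PUBLISHED comes from `npm view`; here it's a parameter.
should_publish() {
  CURRENT="$1"
  PUBLISHED="$2"
  if [ "$CURRENT" = "$PUBLISHED" ]; then
    echo "skip"       # version unchanged: nothing to do
  else
    echo "publish"    # any difference triggers a publish
  fi
}

should_publish "1.2.0" "1.2.0"   # → skip
should_publish "1.2.1" "1.2.0"   # → publish
```

Because this is plain string inequality, a version *downgrade* would also trigger a publish; the registry will reject republishing an existing version either way.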
## CI Services (Test Databases)

For projects that need a database during CI (migrations, integration tests):

```yaml
services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: test_db
      POSTGRES_USER: test_user
      POSTGRES_PASSWORD: test_password

steps:
  test:
    image: *node_image
    environment:
      DATABASE_URL: "postgresql://test_user:test_password@postgres:5432/test_db?schema=public"
    commands:
      - npm run test
    depends_on:
      - install
```

The service name (`postgres`) becomes the hostname within the pipeline network.
## Split Pipelines for Monorepos (REQUIRED)

For any monorepo with multiple packages/apps, use **split pipelines** — one YAML per package in `.woodpecker/`.

### Why Split?

| Aspect | Single pipeline | Split pipelines |
|--------|----------------|-----------------|
| Path filtering | None — everything rebuilds | Per-package — only affected code |
| Security scanning | Often missing | Required per-package |
| CI minutes | Wasted on unaffected packages | Efficient |
| Failure isolation | One failure blocks everything | Per-package failures isolated |
| Readability | One massive file | Focused, maintainable |

### Structure

```
.woodpecker/
├── api.yml        # Only runs when apps/api/** changes
├── web.yml        # Only runs when apps/web/** changes
└── (infra.yml)    # Optional: shared infra (DB images, etc.)
```

**IMPORTANT:** Do NOT also keep a `.woodpecker.yml` at the repository root — the `.woodpecker/` directory takes precedence and the root file will be silently ignored.
### Path Filtering Template

```yaml
when:
  - event: [push, pull_request, manual]
    path:
      include: ['apps/api/**', '.woodpecker/api.yml']
```

Each pipeline self-triggers on its own YAML changes. Manual triggers run regardless of path.

### Kaniko Context Scoping

In split pipelines, scope the Kaniko context to the app directory:

```bash
/kaniko/executor --context apps/api --dockerfile apps/api/Dockerfile $$DESTINATIONS
```

This means the Dockerfile's `COPY . .` only copies the app's files, not the entire monorepo.

### Reference: Telemetry Split Pipeline

See `~/src/mosaic-telemetry-monorepo/.woodpecker/api.yml` and `web.yml` for a complete working example with path filtering, security chain, and Trivy scanning.
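Woodpecker evaluates these globs server-side, but the filter is easy to emulate locally when debugging why a pipeline did or didn't run (a sketch with the `**` patterns simplified to prefix matches):

```shell
# Sketch: would the api pipeline run for a given changed file?
# Simplifies 'apps/api/**' and '.woodpecker/api.yml' to a grep pattern.
pipeline_affected() {
  printf '%s\n' "$1" | grep -q -E '^(apps/api/|\.woodpecker/api\.yml$)'
}

if pipeline_affected "apps/api/src/main.py"; then
  echo "run api pipeline"
fi
if ! pipeline_affected "apps/web/src/index.ts"; then
  echo "skip api pipeline"
fi
```

Feeding it `git diff --name-only` output, one file per call, approximates what the server decides for a push.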
## Security Scanning (REQUIRED)

Every pipeline MUST include security scanning. Docker build steps MUST gate on all security steps passing.

### Source-Level Security (per tech stack)

**Python:**

```yaml
security-bandit:
  image: *uv_image
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run bandit -r src/ -f screen
  depends_on: [install]

security-audit:
  image: *uv_image
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run pip-audit
  depends_on: [install]
```

**Node.js:**

```yaml
security-audit:
  image: node:22-alpine
  commands:
    - cd apps/web && npm audit --audit-level=high
  depends_on: [install]
```
### Container Scanning (Trivy) — Post-Build

Run Trivy against every built image to catch OS-level and runtime vulnerabilities:

```yaml
security-trivy:
  image: aquasec/trivy:latest
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
  commands:
    - |
      mkdir -p ~/.docker
      echo "{\"auths\":{\"REGISTRY\":{\"username\":\"$$GITEA_USER\",\"password\":\"$$GITEA_TOKEN\"}}}" > ~/.docker/config.json
      trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed \
        REGISTRY/ORG/IMAGE:$${CI_COMMIT_SHA:0:8}
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - docker-build-SERVICE
```

**Replace:** `REGISTRY`, `ORG`, `IMAGE`, `SERVICE`.

### Full Dependency Chain

```
install → [lint, typecheck, security-source, security-deps, test] → docker-build → trivy → link-package
```

Docker build MUST depend on ALL quality + security steps. Trivy runs AFTER build. Package linking runs AFTER Trivy.
## Monorepo Considerations

### pnpm + Turbo (Mosaic Stack pattern)

```yaml
variables:
  - &install_deps |
      corepack enable
      pnpm install --frozen-lockfile

steps:
  build:
    commands:
      - *install_deps
      - pnpm build  # Turbo handles dependency order and caching
```
### npm Workspaces (U-Connect pattern)

```yaml
variables:
  - &install_deps |
      corepack enable
      npm ci

steps:
  # Build shared dependencies first
  build-deps:
    commands:
      - npm run build -w @scope/shared-auth
      - npm run build -w @scope/shared-types

  # Then build everything
  build-all:
    commands:
      - npm run build -w @scope/package-1
      - npm run build -w @scope/package-2
      # ... in dependency order
    depends_on:
      - build-deps
```

### Per-Package Quality Checks

For large monorepos, run checks per-package in parallel:

```yaml
lint-api:
  commands:
    - npm run lint -w @scope/api
  depends_on: [install]

lint-web:
  commands:
    - npm run lint -w @scope/web
  depends_on: [install]

# These run in parallel since they share the same dependency
```
## Complete Pipeline Example

This is a minimal but complete pipeline for a project with two services:

```yaml
when:
  - event: [push, pull_request, manual]

variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"git.example.com\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json

steps:
  # === Quality Gates ===
  install:
    image: *node_image
    commands:
      - *install_deps

  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on: [install]

  test:
    image: *node_image
    commands:
      - npm run test
    depends_on: [install]

  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on: [lint, test]

  # === Docker Build & Push ===
  docker-build-api:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
    commands:
      - *kaniko_setup
      - |
        DESTINATIONS="--destination git.example.com/org/api:${CI_COMMIT_SHA:0:8}"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:latest"
        elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:dev"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/api/Dockerfile $DESTINATIONS
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on: [build]

  docker-build-web:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
    commands:
      - *kaniko_setup
      - |
        DESTINATIONS="--destination git.example.com/org/web:${CI_COMMIT_SHA:0:8}"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:latest"
        elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:dev"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/web/Dockerfile $DESTINATIONS
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on: [build]

  # === Package Linking ===
  link-packages:
    image: alpine:3
    environment:
      GITEA_TOKEN:
        from_secret: gitea_token
    commands:
      - apk add --no-cache curl
      - sleep 10
      - |
        set -e
        link_package() {
          PKG="$$1"
          for attempt in 1 2 3; do
            STATUS=$$(curl -s -o /dev/null -w "%{http_code}" -X POST \
              -H "Authorization: token $$GITEA_TOKEN" \
              "https://git.example.com/api/v1/packages/org/container/$$PKG/-/link/repo")
            if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ] || [ "$$STATUS" = "400" ]; then
              echo "Linked $$PKG ($$STATUS)"
              return 0
            elif [ $$attempt -lt 3 ]; then
              sleep 5
            else
              echo "FAILED: $$PKG ($$STATUS)"
              return 1
            fi
          done
        }
        link_package "api"
        link_package "web"
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on:
      - docker-build-api
      - docker-build-web
```
## Checklist: Adding CI/CD to a Project

1. **Verify Dockerfiles exist** for each service that needs an image
2. **Create Woodpecker secrets** (`gitea_username`, `gitea_token`) in the Woodpecker UI
3. **Verify Gitea token scope** includes `package:write`
4. **Add Docker build steps** to `.woodpecker.yml` using the Kaniko template above
5. **Add package linking step** after all Docker builds
6. **Update `docker-compose.yml`** to reference registry images instead of local builds:
   ```yaml
   image: git.example.com/org/service:${IMAGE_TAG:-dev}
   ```
7. **Test on develop branch first** — push a small change and verify the pipeline
8. **Verify images appear** in Gitea Packages tab after successful pipeline
## Gitea as Unified Platform

Gitea provides **three services in one**, eliminating the need for separate Harbor and Verdaccio deployments:

| Service | What Gitea Replaces | Registry URL |
|---------|---------------------|-------------|
| **Git hosting** | GitHub/GitLab | `https://GITEA_HOST/org/repo` |
| **Container registry** | Harbor, Docker Hub | `docker pull GITEA_HOST/org/image:tag` |
| **npm registry** | Verdaccio, Artifactory | `https://GITEA_HOST/api/packages/org/npm/` |

### Additional Package Types

Gitea also supports PyPI, Maven, NuGet, Cargo, Composer, Conan, Conda, Generic, and more. All use the same token authentication.

### Single Token, Three Services

A Gitea token with `package:write` scope handles:

- `git push` / `git pull`
- `docker push` / `docker pull` (container registry)
- `npm publish` / `npm install` (npm registry)

This means a single `gitea_token` secret in Woodpecker CI covers all CI/CD operations.
### Architecture Simplification

**Before (3 services):**

```
Gitea (git)        +  Harbor (containers)  +  Verdaccio (npm)
  ↓ separate auth     ↓ separate auth         ↓ OAuth/Authentik
  3 tokens            1 robot account         1 OIDC integration
  3 backup targets    complex RBAC            group-based access
```

**After (1 service):**

```
Gitea (git + containers + npm)
  ↓ single token
  1 secret in Woodpecker
  1 backup target
  unified RBAC via Gitea teams
```
## Migrating from Verdaccio to Gitea npm

If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), follow this migration checklist:

### Migration Steps

1. **Verify the Gitea npm registry is accessible:**
   ```bash
   curl -s https://GITEA_HOST/api/packages/ORG/npm/ | head -5
   ```

2. **Update `.npmrc` in the project root:**
   ```ini
   # Before (Verdaccio)
   @uconnect:registry=https://npm.uscllc.net

   # After (Gitea)
   @uconnect:registry=https://git.uscllc.com/api/packages/usc/npm/
   ```

3. **Update the CI pipeline** — replace the `npm_token` secret with `gitea_token`:
   ```yaml
   # Uses same token as Docker push — no extra secret needed
   echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
   ```

4. **Re-publish existing packages** to the Gitea registry:
   ```bash
   # For each @scope/package
   npm publish -w @scope/package --registry https://GITEA_HOST/api/packages/ORG/npm/
   ```

5. **Update consumer projects** — any project that `npm install`s from the old registry needs its `.npmrc` updated

6. **Remove Verdaccio infrastructure:**
   - Docker compose stack (`compose.verdaccio.yml`)
   - Authentik OAuth provider/blueprints
   - Verdaccio config files
   - DNS entry for `npm.uscllc.net` (eventually)
### What You Can Remove

| Component | Location | Purpose (was) |
|-----------|----------|---------------|
| Verdaccio compose | `compose.verdaccio.yml` | npm registry container |
| Verdaccio config | `config/verdaccio/` | Server configuration |
| Authentik blueprints | `config/authentik/blueprints/*/verdaccio-*` | OAuth integration |
| Verdaccio scripts | `scripts/verdaccio/` | Blueprint application |
| OIDC env vars | `.env` | `AUTHENTIK_VERDACCIO_*`, `VERDACCIO_OPENID_*` |
## Troubleshooting

### "unauthorized: authentication required"

- Verify `gitea_username` and `gitea_token` secrets are set in Woodpecker
- Verify the token has `package:write` scope
- Check the registry hostname in `kaniko_setup` matches the Gitea instance

### Kaniko build fails with "error building image"

- Verify the Dockerfile path is correct relative to `--context`
- Check that multi-stage builds don't reference stages that don't exist
- Run `docker build` locally first to verify the Dockerfile works

### Package linking returns 404

- Normal for recently pushed packages — the retry logic handles this
- If persistent: verify the package name matches exactly (case-sensitive)
- Check the Gitea version is 1.24.0+ (package linking API requirement)

### Images not visible in Gitea Packages

- Linking may have failed — check the `link-packages` step logs
- Images are still usable via `docker pull` even without linking
- Link manually: Gitea UI > Packages > Select package > Link to repository

### Pipeline runs Docker builds on pull requests

- Verify the `when` clause on Docker build steps restricts to `branch: [main, develop]`
- Pull requests should only run quality gates, not build/push images
`guides/code-review.md` (new file, 101 lines)
# Code Review Guide

## Review Checklist

### 1. Correctness
- [ ] Code does what the issue/PR description says
- [ ] Edge cases are handled
- [ ] Error conditions are managed properly
- [ ] No obvious bugs or logic errors

### 2. Security
- [ ] No hardcoded secrets or credentials
- [ ] Input validation at boundaries
- [ ] SQL injection prevention (parameterized queries)
- [ ] XSS prevention (output encoding)
- [ ] Authentication/authorization checks present
- [ ] Sensitive data not logged
- [ ] Secrets follow Vault structure (see `docs/vault-secrets-structure.md`)

### 3. Testing
- [ ] Tests exist for new functionality
- [ ] Tests cover happy path AND error cases
- [ ] Coverage meets 85% minimum
- [ ] Tests are readable and maintainable
- [ ] No flaky tests introduced

### 4. Code Quality
- [ ] Follows Google Style Guide for the language
- [ ] Functions are focused and reasonably sized
- [ ] No unnecessary complexity
- [ ] DRY - no significant duplication
- [ ] Clear naming for variables and functions
- [ ] No dead code or commented-out code

### 4a. TypeScript Strict Typing (see `typescript.md`)
- [ ] **NO `any` types** — explicit types required everywhere
- [ ] **NO lazy `unknown`** — only for error catches with immediate narrowing
- [ ] **Explicit return types** on all exported/public functions
- [ ] **Explicit parameter types** — never implicit any
- [ ] **No type assertions** (`as Type`) — use type guards instead
- [ ] **No non-null assertions** (`!`) — use proper null handling
- [ ] **Interfaces for objects** — not inline types
- [ ] **Discriminated unions** for variant types
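The type-guard rule above can be illustrated with a small sketch (the `User` shape and function names here are hypothetical, for illustration only):

```typescript
// Hypothetical shape used only for illustration.
interface User {
  id: string;
  name: string;
}

// Type guard: narrows `unknown` without an `as User` assertion.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    "id" in value &&
    typeof value.id === "string" &&
    "name" in value &&
    typeof value.name === "string"
  );
}

// Callers get a properly narrowed `User` with no `!` or `as`.
function greet(data: unknown): string {
  return isUser(data) ? `Hello, ${data.name}` : "Hello, stranger";
}
```

Unlike `data as User`, the guard fails safely at runtime when the data does not match the expected shape.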
### 5. Documentation
- [ ] Complex logic has explanatory comments
- [ ] Public APIs are documented
- [ ] README updated if needed
- [ ] Breaking changes noted

### 6. Performance
- [ ] No obvious N+1 queries
- [ ] No blocking operations in hot paths
- [ ] Resource cleanup (connections, file handles)
- [ ] Reasonable memory usage

### 7. Dependencies
- [ ] No deprecated packages
- [ ] No unnecessary new dependencies
- [ ] Dependency versions pinned appropriately

## Review Process

### Getting Context
```bash
# List the issue being addressed
~/.mosaic/rails/git/issue-list.sh -i {issue-number}

# View the changes
git diff main...HEAD
```

### Providing Feedback
- Be specific: point to exact lines/files
- Explain WHY something is problematic
- Suggest alternatives when possible
- Distinguish between blocking issues and suggestions
- Be constructive, not critical of the person

### Feedback Categories
- **Blocker**: Must fix before merge (security, bugs, test failures)
- **Should Fix**: Important but not blocking (code quality, minor issues)
- **Suggestion**: Optional improvements (style preferences, nice-to-haves)
- **Question**: Seeking clarification

### Review Comment Format
```
[BLOCKER] Line 42: SQL injection vulnerability
The user input is directly interpolated into the query.
Use parameterized queries instead:
`db.query("SELECT * FROM users WHERE id = ?", [userId])`

[SUGGESTION] Line 78: Consider extracting to helper
This pattern appears in 3 places. A shared helper would reduce duplication.
```

## After Review
1. Update issue with review status
2. If changes requested, assign back to author
3. If approved, note approval in issue comments
4. For merges, ensure CI passes first
`guides/frontend.md` (new file, 80 lines)
# Frontend Development Guide

## Before Starting
1. Check assigned issue in git repo: `~/.mosaic/rails/git/issue-list.sh -a @me`
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Review existing components and patterns in the codebase

## Development Standards

### Framework Conventions
- Follow project's existing framework patterns (React, Vue, Svelte, etc.)
- Use existing component library/design system if present
- Maintain consistent file structure with existing code

### Styling
- Use project's established styling approach (CSS modules, Tailwind, styled-components, etc.)
- Follow existing naming conventions for CSS classes
- Ensure responsive design unless explicitly single-platform

### State Management
- Use project's existing state management solution
- Keep component state local when possible
- Document any new global state additions

### Accessibility
- Include proper ARIA labels
- Ensure keyboard navigation works
- Test with screen reader considerations
- Maintain color contrast ratios (WCAG 2.1 AA minimum)

## Testing Requirements (TDD)
1. Write tests BEFORE implementation
2. Minimum 85% coverage
3. Test categories:
   - Unit tests for utility functions
   - Component tests for UI behavior
   - Integration tests for user flows

### Test Patterns
```javascript
// Component test example structure
describe('ComponentName', () => {
  it('renders without crashing', () => {});
  it('handles user interaction correctly', () => {});
  it('displays error states appropriately', () => {});
  it('is accessible', () => {});
});
```

## Code Style
- Follow Google JavaScript/TypeScript Style Guide
- **TypeScript: Follow `~/.mosaic/guides/typescript.md` — MANDATORY**
- Use ESLint/Prettier configuration from project
- Prefer functional components over class components (React)
- TypeScript strict mode is REQUIRED, not optional

### TypeScript Quick Rules (see typescript.md for full guide)
- **NO `any`** — define explicit types always
- **NO lazy `unknown`** — only for error catches and external data with validation
- **Explicit return types** on all exported functions
- **Explicit parameter types** always
- **Interface for props** — never inline object types
- **Event handlers** — use proper React event types
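As a quick sketch of the props-interface and handler rules (the component and prop names are hypothetical, and the example is framework-agnostic so it stays self-contained):

```typescript
// Props declared as an interface, never as an inline object type.
interface AvatarProps {
  userId: string;
  size?: "sm" | "md" | "lg";          // literal union instead of a bare string
  onSelect: (userId: string) => void; // explicit handler signature, no implicit any
}

// Explicit parameter and return types throughout.
function describeAvatar(props: AvatarProps): string {
  const size = props.size ?? "md";    // proper null handling, no `!`
  return `avatar:${props.userId}:${size}`;
}
```

In a React component the props would be typed the same way (`function Avatar(props: AvatarProps)`), with the matching `React.MouseEvent`/`React.ChangeEvent` types on DOM event handlers.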
## Commit Format
```
feat(#123): Add user profile component

- Implement avatar display
- Add edit mode toggle
- Include form validation

Refs #123
```

## Before Completing
1. Run full test suite
2. Verify build succeeds
3. Update scratchpad with completion notes
4. Reference issue in commit: `Fixes #N` or `Refs #N`
`guides/infrastructure.md` (new file, 165 lines)
# Infrastructure & DevOps Guide

## Before Starting
1. Check assigned issue: `~/.mosaic/rails/git/issue-list.sh -a @me`
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Review existing infrastructure configuration

## Vault Secrets Management

**CRITICAL**: Follow canonical Vault structure for ALL secrets.

### Structure
```
{mount}/{service}/{component}/{secret-name}

Examples:
- secret-prod/postgres/database/app
- secret-prod/redis/auth/default
- secret-prod/authentik/admin/token
```

### Environment Mounts
- `secret-dev/` - Development environment
- `secret-staging/` - Staging environment
- `secret-prod/` - Production environment

### Standard Field Names
- Credentials: `username`, `password`
- Tokens: `token`
- OAuth: `client_id`, `client_secret`
- Connection strings: `url`, `host`, `port`

See `docs/vault-secrets-structure.md` for complete reference.
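A minimal sketch of the path convention (the helper name is illustrative, not an existing tool):

```typescript
// Build a canonical secret path: {mount}/{service}/{component}/{secret-name}
function vaultPath(
  mount: "secret-dev" | "secret-staging" | "secret-prod",
  service: string,
  component: string,
  secretName: string,
): string {
  return [mount, service, component, secretName].join("/");
}

// Matches the first example above: "secret-prod/postgres/database/app"
const appDbPath = vaultPath("secret-prod", "postgres", "database", "app");
```

Constraining `mount` to the three environment mounts keeps ad-hoc mounts out of new code.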
## Container Standards

### Dockerfile Best Practices
```dockerfile
# Use specific version tags
FROM node:20-alpine

# Create non-root user
RUN addgroup -S app && adduser -S app -G app

# Set working directory
WORKDIR /app

# Copy dependency files first (layer caching)
COPY package*.json ./
RUN npm ci --only=production

# Copy application code
COPY --chown=app:app . .

# Switch to non-root user
USER app

# Use exec form for CMD
CMD ["node", "server.js"]
```

### Container Security
- Use minimal base images (alpine, distroless)
- Run as non-root user
- Don't store secrets in images
- Scan images for vulnerabilities
- Pin dependency versions

## Kubernetes/Docker Compose

### Resource Limits
Always set resource limits to prevent runaway containers:
```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "500m"
```

### Health Checks
```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 3
```

## CI/CD Pipelines

### Pipeline Stages
1. **Lint**: Code style and static analysis
2. **Test**: Unit and integration tests
3. **Build**: Compile and package
4. **Scan**: Security and vulnerability scanning
5. **Deploy**: Environment-specific deployment

### Pipeline Security
- Use secrets management (not hardcoded)
- Pin action/image versions
- Implement approval gates for production
- Audit pipeline access

## Monitoring & Logging

### Logging Standards
- Use structured logging (JSON)
- Include correlation IDs
- Log at appropriate levels (ERROR, WARN, INFO, DEBUG)
- Never log sensitive data
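For example, a structured log line with a correlation ID could be emitted like this (field names are illustrative; match whatever your logging library produces):

```typescript
// Emit one JSON log line per event; never include secrets or other sensitive data.
function logLine(
  level: "ERROR" | "WARN" | "INFO" | "DEBUG",
  message: string,
  correlationId: string,
): string {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    correlationId,
    message,
  });
}
```

One JSON object per line keeps the output trivially parseable by log aggregators, and the correlation ID lets you stitch a request's entries together across services.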
### Metrics to Collect
- Request latency (p50, p95, p99)
- Error rates
- Resource utilization (CPU, memory)
- Business metrics

### Alerting
- Define SLOs (Service Level Objectives)
- Alert on symptoms, not causes
- Include runbook links in alerts
- Avoid alert fatigue

## Testing Infrastructure

### Test Categories
1. **Unit tests**: Terraform/Ansible logic
2. **Integration tests**: Deployed resources work together
3. **Smoke tests**: Critical paths after deployment
4. **Chaos tests**: Failure mode validation

### Infrastructure Testing Tools
- Terraform: `terraform validate`, `terraform plan`
- Ansible: `ansible-lint`, molecule
- Kubernetes: `kubectl dry-run`, kubeval
- General: Terratest, ServerSpec

## Commit Format
```
chore(#67): Configure Redis cluster

- Add Redis StatefulSet with 3 replicas
- Configure persistence with PVC
- Add Vault secret for auth password

Refs #67
```

## Before Completing
1. Validate configuration syntax
2. Run infrastructure tests
3. Test in dev/staging first
4. Document any manual steps required
5. Update scratchpad and close issue
`guides/orchestrator-learnings.md` (new file, 126 lines)
# Orchestrator Learnings (Universal)

> Cross-project heuristic adjustments based on observed variance data.
>
> **Note:** This file contains generic patterns only. Project-specific evidence is stored in each project's `docs/orchestrator-learnings.json`.

## Task Type Multipliers

Apply these multipliers to base estimates from `orchestrator.md`:

| Task Type | Base Estimate | Multiplier | Confidence | Samples | Last Updated |
|-----------|---------------|------------|------------|---------|--------------|
| STYLE_FIX | 3-5K | 0.64 | MEDIUM | n=1 | 2026-02-05 |
| BULK_CLEANUP | file_count × 550 | 1.0 | MEDIUM | n=2 | 2026-02-05 |
| GUARD_ADD | 5-8K | 1.0 | LOW | n=0 | - |
| SECURITY_FIX | 8-12K | 2.5 | LOW | n=0 | - |
| AUTH_ADD | 15-25K | 1.0 | HIGH | n=1 | 2026-02-05 |
| REFACTOR | 10-15K | 1.0 | LOW | n=0 | - |
| TEST_ADD | 15-25K | 1.0 | LOW | n=0 | - |
| ERROR_HANDLING | 8-12K | 2.3 | MEDIUM | n=1 | 2026-02-05 |
| CONFIG_DEFAULT_CHANGE | 5-10K | 1.8 | MEDIUM | n=1 | 2026-02-05 |
| INPUT_VALIDATION | 5-8K | 1.7 | MEDIUM | n=1 | 2026-02-05 |

## Phase Factors

Apply to all estimates based on task position in milestone:

| Phase Position | Factor | Rationale |
|----------------|--------|-----------|
| Early (tasks 1-3) | 1.45 | Codebase learning overhead |
| Mid (tasks 4-7) | 1.25 | Pattern recognition phase |
| Late (tasks 8+) | 1.10 | Established patterns |

## Estimation Formula

```
Final Estimate = Base Estimate × Type Multiplier × Phase Factor × TDD Overhead

Where:
- Base Estimate: From orchestrator.md task type table
- Type Multiplier: From table above (default 1.0)
- Phase Factor: 1.45 / 1.25 / 1.10 based on position
- TDD Overhead: 1.20 if tests required
```
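Worked through in code, the formula looks like this (a sketch; the numbers come from the tables in this file):

```typescript
// Final Estimate = Base × Type Multiplier × Phase Factor × TDD Overhead
function finalEstimateK(
  baseK: number,          // base estimate in K tokens, from orchestrator.md
  typeMultiplier: number, // from the Task Type Multipliers table (default 1.0)
  phaseFactor: 1.45 | 1.25 | 1.1,
  tddRequired: boolean,
): number {
  const tddOverhead = tddRequired ? 1.2 : 1.0;
  return baseK * typeMultiplier * phaseFactor * tddOverhead;
}

// Early-milestone ERROR_HANDLING task (base 10K, multiplier 2.3) with tests:
// 10 × 2.3 × 1.45 × 1.2 ≈ 40K tokens.
const estimate = finalEstimateK(10, 2.3, 1.45, true);
```

Note how the compounding works: the same task late in a milestone without TDD would come in near 25K, which is why the phase factor and TDD overhead are applied last rather than folded into the base.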
## Known Patterns

### BULK_CLEANUP

**Pattern:** Multi-file cleanup tasks are severely underestimated.

**Why:** Iterative testing across many files, cascading fixes, and debugging compound the effort.

**Observed:** +112% to +276% variance when using fixed estimates.

**Recommendation:** Use `file_count × 550` instead of fixed estimate.

### ERROR_HANDLING

**Pattern:** Error handling changes that modify type interfaces cascade through the codebase.

**Why:** Adding fields to result types requires updating all callers, error messages, and tests.

**Observed:** +131% variance.

**Multiplier:** 2.3x base estimate when type interfaces are modified.

### CONFIG_DEFAULT_CHANGE

**Pattern:** Config default changes require more test coverage than expected.

**Why:** Security-sensitive defaults need validation tests, warning tests, and edge case coverage.

**Observed:** +80% variance.

**Multiplier:** 1.8x when config changes need security validation.

### INPUT_VALIDATION

**Pattern:** Security input validation with allowlists is more complex than simple validation.

**Why:** Comprehensive allowlists (e.g., OAuth error codes), encoding requirements, and security tests add up.

**Observed:** +70% variance.

**Multiplier:** 1.7x when security allowlists are involved.

### STYLE_FIX

**Pattern:** Pure formatting fixes are faster than estimated when isolated.

**Observed:** -36% variance.

**Multiplier:** 0.64x for isolated style-only fixes.

## Changelog

| Date | Change | Samples | Confidence |
|------|--------|---------|------------|
| 2026-02-05 | Added BULK_CLEANUP category | n=2 | MEDIUM |
| 2026-02-05 | Added STYLE_FIX multiplier 0.64 | n=1 | MEDIUM |
| 2026-02-05 | Confirmed AUTH_ADD heuristic accurate | n=1 | HIGH |
| 2026-02-05 | Added ERROR_HANDLING multiplier 2.3x | n=1 | MEDIUM |
| 2026-02-05 | Added CONFIG_DEFAULT_CHANGE multiplier 1.8x | n=1 | MEDIUM |
| 2026-02-05 | Added INPUT_VALIDATION multiplier 1.7x | n=1 | MEDIUM |

## Update Protocol

**Graduated Autonomy:**

| Phase | Condition | Action |
|-------|-----------|--------|
| **Now** | All proposals | Human review required |
| **After 3 milestones** | <30% change, n≥3 samples, HIGH confidence | Auto-update allowed |
| **Mature** | All changes | Auto with notification, revert on regression |

**Validation Before Update:**
1. Minimum 3 samples for same task type
2. Standard deviation < 30% of mean
3. Outliers (>2σ) excluded
4. New formula must not increase variance on historical data

## Where to Find Project-Specific Data

- **Project learnings:** `<project>/docs/orchestrator-learnings.json`
- **Cross-project metrics:** `jarvis-brain/data/orchestrator-metrics.json`
`guides/orchestrator.md` (new file, 866 lines)
# Autonomous Orchestrator Guide

> Load this guide when orchestrating autonomous task completion across any project.

## Overview

The orchestrator **cold-starts** on any project with just a review report location and minimal kickstart. It autonomously:
1. Parses review reports to extract findings
2. Categorizes findings into phases by severity
3. Estimates token usage per task
4. Creates Gitea issues (phase-level)
5. Bootstraps `docs/tasks.md` from scratch
6. Coordinates completion using worker agents

**Key principle:** The orchestrator is the **sole writer** of `docs/tasks.md`. Worker agents execute tasks and report results — they never modify the tracking file.

---

## Orchestrator Boundaries (CRITICAL)

**The orchestrator NEVER:**
- Edits source code directly (*.ts, *.tsx, *.js, *.py, etc.)
- Runs quality gates itself (that's the worker's job)
- Makes commits containing code changes
- "Quickly fixes" something to save time — this is how drift starts

**The orchestrator ONLY:**
- Reads/writes `docs/tasks.md`
- Reads/writes `docs/orchestrator-learnings.json`
- Spawns workers via the Task tool for ALL code changes
- Parses worker JSON results
- Commits task tracking updates (tasks.md, learnings)
- Outputs status reports and handoff messages

**If you find yourself about to edit source code, STOP.**
Spawn a worker instead. No exceptions. No "quick fixes."

**Worker Limits:**
- Maximum **2 parallel workers** at any time
- Wait for at least one worker to complete before spawning more
- This optimizes token usage and reduces context pressure

---

## Bootstrap Templates

Use templates from `jarvis-brain/docs/templates/` to scaffold tracking files:

```bash
# Set environment variables
export PROJECT="project-name"
export MILESTONE="M1-Feature"
export CURRENT_DATETIME=$(date -Iseconds)
export TASK_PREFIX="PR-SEC"
export PHASE_ISSUE="#1"
export PHASE_BRANCH="fix/security"

# Copy templates
TEMPLATES=~/src/jarvis-brain/docs/templates

# Create tasks.md (then populate with findings)
envsubst < $TEMPLATES/orchestrator/tasks.md.template > docs/tasks.md

# Create learnings tracking
envsubst < $TEMPLATES/orchestrator/orchestrator-learnings.json.template > docs/orchestrator-learnings.json

# Create review report structure (if doing new review)
$TEMPLATES/reports/review-report-scaffold.sh codebase-review
```

**Available templates:**

| Template | Purpose |
|----------|---------|
| `orchestrator/tasks.md.template` | Task tracking table with schema |
| `orchestrator/orchestrator-learnings.json.template` | Variance tracking |
| `orchestrator/phase-issue-body.md.template` | Gitea issue body |
| `orchestrator/compaction-summary.md.template` | 60% checkpoint format |
| `reports/review-report-scaffold.sh` | Creates report directory |
| `scratchpad.md.template` | Per-task working document |

See `jarvis-brain/docs/templates/README.md` for full documentation.

---

## Phase 1: Bootstrap

### Step 1: Parse Review Reports

Review reports typically follow this structure:
```
docs/reports/{report-name}/
├── 00-executive-summary.md    # Start here - overview and counts
├── 01-security-review.md      # Security findings with IDs like SEC-*
├── 02-code-quality-review.md  # Code quality findings like CQ-*
├── 03-qa-test-coverage.md     # Test coverage gaps like TEST-*
└── ...
```

**Extract findings by looking for:**
- Finding IDs (e.g., `SEC-API-1`, `CQ-WEB-3`, `TEST-001`)
- Severity labels: Critical, High, Medium, Low
- Affected files/components (use for `repo` column)
- Specific line numbers or code patterns

**Parse each finding into:**
```
{
  id: "SEC-API-1",
  severity: "critical",
  title: "Brief description",
  component: "api",        // For repo column
  file: "path/to/file.ts", // Reference for worker
  lines: "45-67"           // Specific location
}
```
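Extracting the IDs can be as simple as scanning for the pattern (a sketch; the regex assumes IDs shaped like the examples above):

```typescript
// Match IDs like SEC-API-1, CQ-WEB-3, TEST-001.
const FINDING_ID = /\b[A-Z]{2,}(?:-[A-Z]+)*-\d+\b/g;

function extractFindingIds(report: string): string[] {
  // Set preserves first-seen order and drops repeated mentions.
  return [...new Set(report.match(FINDING_ID) ?? [])];
}
```

Severity, file, and line details still need per-report parsing, but deduplicated IDs give you the task skeleton.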
### Step 2: Categorize into Phases

Map severity to phases:

| Severity | Phase | Focus | Branch Pattern |
|----------|-------|-------|----------------|
| Critical | 1 | Security vulnerabilities, data exposure | `fix/security` |
| High | 2 | Security hardening, auth gaps | `fix/security` |
| Medium | 3 | Code quality, performance, bugs | `fix/code-quality` |
| Low | 4 | Tests, documentation, cleanup | `fix/test-coverage` |

**Within each phase, order tasks by:**
1. Blockers first (tasks that unblock others)
2. Same-file tasks grouped together
3. Simpler fixes before complex ones

### Step 3: Estimate Token Usage

Use these heuristics based on task type:

| Task Type | Estimate | Examples |
|-----------|----------|----------|
| Single-line fix | 3-5K | Typo, wrong operator, missing null check |
| Add guard/validation | 5-8K | Add auth decorator, input validation |
| Fix error handling | 8-12K | Proper try/catch, error propagation |
| Refactor pattern | 10-15K | Replace KEYS with SCAN, fix memory leak |
| Add new functionality | 15-25K | New service method, new component |
| Write tests | 15-25K | Unit tests for untested service |
| Complex refactor | 25-40K | Architectural change, multi-file refactor |

**Adjust estimates based on:**
- Number of files affected (+5K per additional file)
- Test requirements (+5-10K if tests needed)
- Documentation needs (+2-3K if docs needed)

### Step 4: Determine Dependencies

**Automatic dependency rules:**
1. All tasks in Phase N depend on the Phase N-1 verification task
2. Tasks touching the same file should be sequential (earlier blocks later)
3. Auth/security foundation tasks block tasks that rely on them
4. Each phase ends with a verification task that depends on all phase tasks

**Create verification tasks:**
- `{PREFIX}-SEC-{LAST}`: Phase 1 verification (run security tests)
- `{PREFIX}-HIGH-{LAST}`: Phase 2 verification
- `{PREFIX}-CQ-{LAST}`: Phase 3 verification
- `{PREFIX}-TEST-{LAST}`: Phase 4 verification (final quality gates)
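These dependency rules reduce to a simple readiness check when picking work (a sketch; the field names mirror the tasks.md columns defined later in this guide):

```typescript
interface TaskRow {
  id: string;
  status: "not-started" | "in-progress" | "done" | "failed";
  dependsOn: string[]; // the depends_on column, split on commas
}

// A task is runnable when it has not started and every dependency is done.
function nextRunnableTask(tasks: TaskRow[]): TaskRow | undefined {
  const done = new Set(tasks.filter((t) => t.status === "done").map((t) => t.id));
  return tasks.find(
    (t) => t.status === "not-started" && t.dependsOn.every((d) => done.has(d)),
  );
}
```

Because tasks are already ordered within a phase, taking the first runnable row also respects the blockers-first ordering from Step 2.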
### Step 5: Create Gitea Issues (Phase-Level)
|
||||||
|
|
||||||
|
Create ONE issue per phase using git scripts:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
~/.mosaic/rails/git/issue-create.sh \
|
||||||
|
-t "Phase 1: Critical Security Fixes" \
|
||||||
|
-b "$(cat <<'EOF'
|
||||||
|
## Findings
|
||||||
|
|
||||||
|
- SEC-API-1: Description
|
||||||
|
- SEC-WEB-2: Description
|
||||||
|
- SEC-ORCH-1: Description
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
- [ ] All critical findings remediated
|
||||||
|
- [ ] Quality gates passing
|
||||||
|
- [ ] No new regressions
|
||||||
|
EOF
|
||||||
|
)" \
|
||||||
|
-l "security,critical" \
|
||||||
|
-m "{milestone-name}"
|
||||||
|
```
|
||||||
|
|
||||||
|
**Capture issue numbers** — you'll link tasks to these.
|
||||||
|
|
||||||
|
### Step 6: Create docs/tasks.md
|
||||||
|
|
||||||
|
Create the file with this exact schema:
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Tasks
|
||||||
|
|
||||||
|
| id | status | description | issue | repo | branch | depends_on | blocks | agent | started_at | completed_at | estimate | used |
|
||||||
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|
||||||
|
| {PREFIX}-SEC-001 | not-started | SEC-API-1: Brief description | #{N} | api | fix/security | | {PREFIX}-SEC-002 | | | | 8K | |
|
||||||
|
```
|
||||||
|
|
||||||
|
**Column definitions:**
|
||||||
|
|
||||||
|
| Column | Format | Purpose |
|
||||||
|
|--------|--------|---------|
|
||||||
|
| `id` | `{PREFIX}-{CAT}-{NNN}` | Unique task ID (e.g., MS-SEC-001) |
|
||||||
|
| `status` | `not-started` \| `in-progress` \| `done` \| `failed` | Current state |
|
||||||
|
| `description` | `{FindingID}: Brief summary` | What to fix |
|
||||||
|
| `issue` | `#NNN` | Gitea issue (phase-level, all tasks in phase share) |
|
||||||
|
| `repo` | Workspace name | `api`, `web`, `orchestrator`, etc. |
|
||||||
|
| `branch` | Branch name | `fix/security`, `fix/code-quality`, etc. |
|
||||||
|
| `depends_on` | Comma-separated IDs | Must complete first |
|
||||||
|
| `blocks` | Comma-separated IDs | Tasks waiting on this |
|
||||||
|
| `agent` | Agent identifier | Assigned worker (fill when claiming) |
|
||||||
|
| `started_at` | ISO 8601 | When work began |
|
||||||
|
| `completed_at` | ISO 8601 | When work finished |
|
||||||
|
| `estimate` | `5K`, `15K`, etc. | Predicted token usage |
|
||||||
|
| `used` | `4.2K`, `12.8K`, etc. | Actual usage (fill on completion) |
|
||||||
|
|
||||||
|
**Category prefixes:**
|
||||||
|
- `SEC` — Security (Phase 1-2)
|
||||||
|
- `HIGH` — High priority (Phase 2)
|
||||||
|
- `CQ` — Code quality (Phase 3)
|
||||||
|
- `TEST` — Test coverage (Phase 4)
|
||||||
|
- `PERF` — Performance (Phase 3)
|
||||||
|
|
||||||
|
### Step 7: Commit Bootstrap
|
||||||
|
|
||||||
|
```bash
|
||||||
|
git add docs/tasks.md
|
||||||
|
git commit -m "chore(orchestrator): Bootstrap tasks.md from review report
|
||||||
|
|
||||||
|
Parsed {N} findings into {M} tasks across {P} phases.
|
||||||
|
Estimated total: {X}K tokens."
|
||||||
|
git push
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 2: Execution Loop

```
1. git pull --rebase
2. Read docs/tasks.md
3. Find next task: status=not-started AND all depends_on are done
4. If no task available:
   - All done? → Report success, run final retrospective, STOP
   - Some blocked? → Report deadlock, STOP
5. Update tasks.md: status=in-progress, agent={identifier}, started_at={now}
6. Spawn worker agent (Task tool) with task details
7. Wait for worker completion
8. Parse worker result (JSON)
9. **Variance check**: Calculate (actual - estimate) / estimate × 100
   - If |variance| > 50%: Capture learning (see Learning & Retrospective)
   - If |variance| > 100%: Flag as CRITICAL — review task classification
10. **Post-Coding Review** (see Phase 2b below)
11. Update tasks.md: status=done/failed/needs-qa, completed_at={now}, used={actual}
12. **Cleanup reports**: Remove processed report files for the completed task:
      # Find and remove reports matching the finding ID
      find docs/reports/qa-automation/pending/ -name "*{finding_id}*" -delete 2>/dev/null || true
      # If the task failed, move reports to escalated/ instead
13. Commit + push: git add docs/tasks.md .gitignore && git commit && git push
14. If phase verification task: Run phase retrospective, clean up all phase reports
15. Check context usage
16. If >= 55%: Output COMPACTION REQUIRED checkpoint, STOP, wait for user
17. If < 55%: Go to step 1
18. After user runs /compact and says "continue": Go to step 1
```
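Steps 3-4 of the loop can be sketched as a simple eligibility check. This is an illustrative sketch only: it assumes tasks have been parsed from `docs/tasks.md` into dicts with `id`, `status`, and `depends_on` fields (the dict shape is an assumption for illustration, not a fixed schema).

```python
def next_task(tasks):
    """Return the first task that is not started and whose dependencies are all done.

    `tasks` is a list of dicts parsed from docs/tasks.md (hypothetical shape:
    {"id": ..., "status": ..., "depends_on": [...]}).
    """
    done = {t["id"] for t in tasks if t["status"] == "done"}
    for t in tasks:
        deps = t.get("depends_on", [])
        if t["status"] == "not-started" and all(d in done for d in deps):
            return t
    return None  # nothing eligible: either all done or deadlocked

tasks = [
    {"id": "SEC-001", "status": "done", "depends_on": []},
    {"id": "SEC-002", "status": "not-started", "depends_on": ["SEC-001"]},
    {"id": "CQ-001", "status": "not-started", "depends_on": ["SEC-002"]},
]
```

A `None` result distinguishes the two stop conditions in step 4: all tasks done, or remaining tasks deadlocked on unmet dependencies.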

---

## Phase 2b: Post-Coding Review (MANDATORY)

**CRITICAL:** After any worker completes a task that modifies source code, the orchestrator MUST run an independent review before marking the task as done. This catches bugs, security issues, and regressions that the worker missed.

### When to Review

Run review when the worker's result includes code changes (commits). Skip for tasks that only modify docs, config, or tracking files.

### Step 1: Run Codex Review (Primary)

```bash
# Navigate to the project directory
cd {project_path}

# Code quality review
~/.mosaic/rails/codex/codex-code-review.sh -b {base_branch} -o /tmp/review-{task_id}.json

# Security review
~/.mosaic/rails/codex/codex-security-review.sh -b {base_branch} -o /tmp/security-{task_id}.json
```

### Step 2: Parse Review Results

```bash
# Check code review
CODE_BLOCKERS=$(jq '.stats.blockers // 0' /tmp/review-{task_id}.json)
CODE_VERDICT=$(jq -r '.verdict // "comment"' /tmp/review-{task_id}.json)

# Check security review
SEC_CRITICAL=$(jq '.stats.critical // 0' /tmp/security-{task_id}.json)
SEC_HIGH=$(jq '.stats.high // 0' /tmp/security-{task_id}.json)
```

### Step 3: Decision Tree

```
IF Codex is unavailable (command not found, auth failure, API error):
  → Use fallback review (Step 4)

IF CODE_BLOCKERS > 0 OR SEC_CRITICAL > 0 OR SEC_HIGH > 0:
  → Mark task as "needs-qa" in tasks.md
  → Create a remediation task:
    - ID: {task_id}-QA
    - Description: Fix findings from review (list specific issues)
    - depends_on: (none — it's a follow-up, not a blocker)
    - Notes: Include finding titles and file locations
  → Continue to next task (remediation task will be picked up in order)

IF CODE_VERDICT == "request-changes" (but no blockers):
  → Log should-fix findings in task notes
  → Mark task as done (non-blocking suggestions)
  → Consider creating a tech-debt issue for significant suggestions

IF CODE_VERDICT == "approve" AND SEC_CRITICAL == 0 AND SEC_HIGH == 0:
  → Mark task as done
  → Log: "Review passed — no issues found"
```
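The decision tree above is a pure function of the parsed review values, so it can be sketched directly. The return strings here are illustrative labels, not part of the protocol:

```python
def review_decision(code_blockers, code_verdict, sec_critical, sec_high,
                    codex_available=True):
    """Map review results to the decision tree above (illustrative sketch)."""
    if not codex_available:
        return "fallback-review"      # Step 4
    if code_blockers > 0 or sec_critical > 0 or sec_high > 0:
        return "needs-qa"             # create a {task_id}-QA remediation task
    if code_verdict == "request-changes":
        return "done-with-notes"      # log should-fix findings, mark done
    return "done"                     # approve, no critical/high findings
```

Note the ordering matters: blocker and security counts are checked before the verdict, so a "request-changes" verdict with blockers still escalates to `needs-qa`.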

### Step 4: Fallback Review (When Codex Is Unavailable)

If the `codex` CLI is not installed or authentication fails, use Claude's built-in review capabilities:

````markdown
## Fallback: Spawn a Review Agent

Use the Task tool to spawn a review subagent:

Prompt:
---
## Independent Code Review

Review the code changes on branch {branch} against {base_branch}.

1. Run: `git diff {base_branch}...HEAD`
2. Review for:
   - Correctness (bugs, logic errors, edge cases)
   - Security (OWASP Top 10, secrets, injection)
   - Testing (coverage, quality)
   - Code quality (complexity, duplication)
3. Reference: ~/.mosaic/guides/code-review.md

Report findings as JSON:

```json
{
  "verdict": "approve|request-changes",
  "blockers": 0,
  "critical_security": 0,
  "findings": [
    {"severity": "blocker|should-fix|suggestion", "title": "...", "file": "...", "description": "..."}
  ]
}
```
---
````

### Review Timing Guidelines

| Task Type | Review Required? |
|-----------|------------------|
| Source code changes (*.ts, *.py, etc.) | **YES — always** |
| Configuration changes (*.yml, *.toml) | YES — security review only |
| Documentation changes (*.md) | No |
| Task tracking updates (tasks.md) | No |
| Test-only changes | YES — code review only |

### Logging Review Results

In the task notes column of tasks.md, append review results:

```
Review: approve (0 blockers, 0 critical) | Codex 0.98.0
```

or:

```
Review: needs-qa (1 blocker, 2 high) → QA task {task_id}-QA created
```

---

## Worker Prompt Template

Construct this from the task row and pass it to the worker via the Task tool:

````markdown
## Task Assignment: {id}

**Description:** {description}
**Repository:** {project_path}/apps/{repo}
**Branch:** {branch}

**Reference:** See `docs/reports/` for the detailed finding description. Search for the finding ID.

## Workflow

1. Checkout branch: `git checkout {branch} || (git checkout -b {branch} develop && git pull)`
2. Read the finding details from the report
3. Implement the fix following existing code patterns
4. Run quality gates (ALL must pass — zero lint errors, zero type errors, all tests green):

   ```bash
   {quality_gates_command}
   ```

   **MANDATORY:** This ALWAYS includes linting. If the project has a linter configured
   (ESLint, Biome, ruff, etc.), you MUST run it and fix ALL violations in files you touched.
   Do NOT leave lint warnings or errors for someone else to clean up.
5. If gates fail: Fix and retry. Do NOT report success with failures.
6. Commit: `git commit -m "fix({finding_id}): brief description"`
7. Push: `git push origin {branch}`
8. Report result as JSON (see format below)

## Git Scripts

For issue/PR/milestone operations, use scripts (NOT raw tea/gh):
- `~/.mosaic/rails/git/issue-view.sh -i {N}`
- `~/.mosaic/rails/git/pr-create.sh -t "Title" -b "Desc" -B develop`

Standard git commands (pull, commit, push, checkout) are fine.

## Result Format (MANDATORY)

End your response with this JSON block:

```json
{
  "task_id": "{id}",
  "status": "success|failed",
  "used": "5.2K",
  "commit_sha": "abc123",
  "notes": "Brief summary of what was done"
}
```

## Post-Coding Review

After you complete and push your changes, the orchestrator will independently
review your code using Codex (or a fallback review agent). If the review finds
blockers or critical security issues, a follow-up remediation task will be
created. You do NOT need to run the review yourself — the orchestrator handles it.

## Rules

- DO NOT modify docs/tasks.md
- DO NOT claim other tasks
- Complete this single task, report results, done
````

---

## Context Threshold Protocol (Orchestrator Replacement)

**Threshold:** 55-60% context usage

**Why replacement, not compaction?**
- Compaction causes **protocol drift** — agent "remembers" gist but loses specifics
- Post-compaction agents may violate core rules (e.g., letting workers modify tasks.md)
- Fresh orchestrator has **100% protocol fidelity**
- All state lives in `docs/tasks.md` — the orchestrator is **stateless and replaceable**

**At threshold (55-60%):**

1. Complete current task
2. Persist all state:
   - Update docs/tasks.md with all progress
   - Update docs/orchestrator-learnings.json with variances
   - Commit and push both files
3. Output **ORCHESTRATOR HANDOFF** message with ready-to-use takeover kickstart
4. **STOP COMPLETELY** — do not continue working

**Handoff message format:**

```
---
⚠️ ORCHESTRATOR HANDOFF REQUIRED

Context: {X}% — Replacement recommended to prevent drift

Progress: {completed}/{total} tasks ({percentage}%)
Current phase: Phase {N} ({phase_name})

State persisted:
- docs/tasks.md ✓
- docs/orchestrator-learnings.json ✓

## Takeover Kickstart

Copy and paste this to spawn a fresh orchestrator:

---
## Continuation Mission

Continue {mission_description} from existing state.

## Setup
- Project: {project_path}
- State: docs/tasks.md (already populated)
- Protocol: docs/claude/orchestrator.md
- Quality gates: {quality_gates_command}

## Resume Point
- Next task: {task_id}
- Phase: {current_phase}
- Progress: {completed}/{total} tasks ({percentage}%)

## Instructions
1. Read docs/claude/orchestrator.md for protocol
2. Read docs/tasks.md to understand current state
3. Continue execution from task {task_id}
4. Follow Two-Phase Completion Protocol
5. You are the SOLE writer of docs/tasks.md
---

STOP: Terminate this session and spawn fresh orchestrator with the kickstart above.
---
```

**Rules:**
- Do NOT attempt to compact yourself — compaction causes drift
- Do NOT continue past 60%
- Do NOT claim you can "just continue" — protocol drift is real
- STOP means STOP — the user (Coordinator) will spawn your replacement
- Include ALL context needed for the replacement in the takeover kickstart

---

## Two-Phase Completion Protocol

Each major phase uses a two-phase approach to maximize completion while managing diminishing returns.

### Bulk Phase (Target: 90%)

- Focus on tractable errors
- Parallelize where possible
- When 90% reached, transition to Polish (do NOT declare success)

### Polish Phase (Target: 100%)

1. **Inventory:** List all remaining errors with file:line
2. **Categorize:**

   | Category | Criteria | Action |
   |----------|----------|--------|
   | Quick-win | <5 min, straightforward | Fix immediately |
   | Medium | 5-30 min, clear path | Fix in order |
   | Hard | >30 min or uncertain | Attempt 15 min, then document |
   | Architectural | Requires design change | Document and defer |

3. **Work priority:** Quick-win → Medium → Hard
4. **Document deferrals** in `docs/deferred-errors.md`:

   ```markdown
   ## {PREFIX}-XXX: [Error description]
   - File: path/to/file.ts:123
   - Error: [exact error message]
   - Category: Hard | Architectural | Framework Limitation
   - Reason: [why this is non-trivial]
   - Suggested approach: [how to fix in future]
   - Risk: Low | Medium | High
   ```

5. **Phase complete when:**
   - All Quick-win/Medium fixed
   - All Hard attempted (fixed or documented)
   - Architectural items documented with justification
### Phase Boundary Rule

Do NOT proceed to the next major phase until the current phase reaches Polish completion:

```
✅ Phase 2 Bulk: 91%
✅ Phase 2 Polish: 118 errors triaged
   - 40 medium → fixed
   - 78 low → EACH documented with rationale
✅ Phase 2 Complete: Created docs/deferred-errors.md
→ NOW proceed to Phase 3

❌ WRONG: Phase 2 at 91%, "low priority acceptable", starting Phase 3
```

### Reporting

When transitioning from Bulk to Polish:
```
Phase X Bulk Complete: {N}% ({fixed}/{total})
Entering Polish Phase: {remaining} errors to triage
```

When Polish Phase complete:
```
Phase X Complete: {final_pct}% ({fixed}/{total})
- Quick-wins: {n} fixed
- Medium: {n} fixed
- Hard: {n} fixed, {n} documented
- Framework limitations: {n} documented
```

---

## Learning & Retrospective

Orchestrators capture learnings to improve future estimation accuracy.

### Variance Thresholds

| Variance | Action |
|----------|--------|
| 0-30% | Log only (acceptable) |
| 30-50% | Flag for review |
| 50-100% | Capture learning to `docs/orchestrator-learnings.json` |
| >100% | CRITICAL — review task classification, possible mismatch |
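The variance formula from the execution loop and the threshold table above combine into a small sketch (illustrative, with assumed label strings for each action):

```python
def variance_pct(estimate_k, actual_k):
    """Variance as defined in the execution loop: (actual - estimate) / estimate × 100."""
    return (actual_k - estimate_k) / estimate_k * 100

def variance_action(v):
    """Map |variance| to the threshold table above (labels are illustrative)."""
    v = abs(v)
    if v <= 30:
        return "log"
    if v <= 50:
        return "flag"
    if v <= 100:
        return "capture-learning"
    return "critical-review"
```

For example, the 30K estimate vs 112.8K actual used later in this section yields a variance of 276%, landing in the CRITICAL band.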

### Task Type Classification

Classify tasks by description keywords for pattern analysis:

| Type | Keywords | Base Estimate |
|------|----------|---------------|
| STYLE_FIX | "formatting", "prettier", "lint" | 3-5K |
| BULK_CLEANUP | "unused", "warnings", "~N files" | file_count × 550 |
| GUARD_ADD | "add guard", "decorator", "validation" | 5-8K |
| SECURITY_FIX | "sanitize", "injection", "XSS" | 8-12K × 2.5 |
| AUTH_ADD | "authentication", "auth" | 15-25K |
| REFACTOR | "refactor", "replace", "migrate" | 10-15K |
| TEST_ADD | "add tests", "coverage" | 15-25K |
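Keyword classification can be sketched as a first-match lookup over the table above. The match order below is an assumption (more specific categories before generic ones); the guide does not prescribe a tie-breaking rule:

```python
# Keyword → type pairs from the table above; first match wins, so more
# specific categories are listed before generic ones (ordering is assumed).
TASK_TYPES = [
    ("SECURITY_FIX", ["sanitize", "injection", "xss"]),
    ("AUTH_ADD", ["authentication", "auth"]),
    ("TEST_ADD", ["add tests", "coverage"]),
    ("BULK_CLEANUP", ["unused", "warnings"]),
    ("GUARD_ADD", ["add guard", "decorator", "validation"]),
    ("STYLE_FIX", ["formatting", "prettier", "lint"]),
    ("REFACTOR", ["refactor", "replace", "migrate"]),
]

def classify_task(description):
    d = description.lower()
    for task_type, keywords in TASK_TYPES:
        if any(k in d for k in keywords):
            return task_type
    return "UNCLASSIFIED"
```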

### Capture Learning

When |variance| > 50%, append to `docs/orchestrator-learnings.json`:

```json
{
  "task_id": "UC-CLEAN-003",
  "task_type": "BULK_CLEANUP",
  "estimate_k": 30,
  "actual_k": 112.8,
  "variance_pct": 276,
  "characteristics": {
    "file_count": 200,
    "keywords": ["object injection", "type guards"]
  },
  "analysis": "Multi-file type guards severely underestimated",
  "captured_at": "2026-02-05T19:45:00Z"
}
```

### Retrospective Triggers

| Trigger | Action |
|---------|--------|
| Phase verification task | Analyze phase variance, summarize patterns |
| 60% compaction | Persist learnings buffer, include in summary |
| Milestone complete | Full retrospective, generate heuristic proposals |

### Enhanced Compaction Summary

Include learnings in compaction output:

```
Session Summary (Compacting at 60%):

Completed: MS-SEC-001 (15K→0.3K, -98%), MS-SEC-002 (8K→12K, +50%)
Quality: All gates passing

Learnings Captured:
- MS-SEC-001: -98% variance — AUTH_ADD may need SKIP_IF_EXISTS category
- MS-SEC-002: +50% variance — XSS sanitization more complex than expected

Remaining: MS-SEC-004 (ready), MS-SEC-005 through MS-SEC-010
Next: MS-SEC-004
```

### Cross-Project Learnings

Universal heuristics are maintained in `~/.mosaic/guides/orchestrator-learnings.md`.
After completing a milestone, review variance patterns and propose updates to the universal guide.

---

## Report Cleanup

QA automation generates report files in `docs/reports/qa-automation/pending/`. These must be cleaned up to prevent accumulation.

**Directory structure:**
```
docs/reports/qa-automation/
├── pending/    # Reports awaiting processing
└── escalated/  # Reports for failed tasks (manual review needed)
```

**Gitignore:** Add this to the project `.gitignore`:
```
# Orchestrator reports (generated by QA automation, cleaned up after processing)
docs/reports/qa-automation/
```

**Cleanup timing:**

| Event | Action |
|-------|--------|
| Task success | Delete matching reports from `pending/` |
| Task failed | Move reports to `escalated/` for investigation |
| Phase verification | Clean up all `pending/` reports for that phase |
| Milestone complete | Archive or delete entire `escalated/` directory |

**Cleanup commands:**
```bash
# After successful task (finding ID pattern, e.g., SEC-API-1)
find docs/reports/qa-automation/pending/ -name "*relevant-file-pattern*" -delete

# After phase verification - clean all pending
rm -rf docs/reports/qa-automation/pending/*

# Move failed task reports to escalated
mv docs/reports/qa-automation/pending/*failing-file* docs/reports/qa-automation/escalated/
```

---

## Error Handling

**Quality gates fail:**
1. Worker should retry up to 2 times
2. If still failing, worker reports `failed` with error details
3. Orchestrator updates tasks.md: keep `in-progress`, add notes
4. Orchestrator may re-spawn with error context, or mark `failed` and continue
5. If failed task blocks others: Report deadlock, STOP

**Worker reports blocker:**
1. Update tasks.md with blocker notes
2. Skip to next unblocked task if possible
3. If all remaining tasks blocked: Report blockers, STOP

**Git push conflict:**
1. `git pull --rebase`
2. If auto-resolves: push again
3. If conflict on tasks.md: Report, STOP (human resolves)

---

## Stopping Criteria

**ONLY stop if:**
1. All tasks in docs/tasks.md are `done`
2. Critical blocker preventing progress (document and alert)
3. Context usage >= 55% — output COMPACTION REQUIRED checkpoint and wait
4. Absolute context limit reached AND cannot compact further

**DO NOT stop to ask "should I continue?"** — the answer is always YES.
**DO stop at 55-60%** — output the compaction checkpoint and wait for user to run `/compact`.

---

## Sprint Completion Protocol

When all tasks in `docs/tasks.md` are `done` (or triaged as `deferred`), archive the sprint artifacts before stopping. This preserves them for post-mortems, variance calibration, and historical reference.

### Archive Steps

1. **Create archive directory** (if it doesn't exist):
   ```bash
   mkdir -p docs/tasks/
   ```

2. **Move tasks.md to archive:**
   ```bash
   mv docs/tasks.md docs/tasks/{milestone-name}-tasks.md
   ```
   Example: `docs/tasks/M6-AgentOrchestration-Fixes-tasks.md`

3. **Move learnings to archive:**
   ```bash
   mv docs/orchestrator-learnings.json docs/tasks/{milestone-name}-learnings.json
   ```

4. **Commit the archive:**
   ```bash
   git add docs/tasks/
   git rm docs/tasks.md docs/orchestrator-learnings.json 2>/dev/null || true
   git commit -m "chore(orchestrator): Archive {milestone-name} sprint artifacts

   {completed}/{total} tasks completed, {deferred} deferred.
   Archived to docs/tasks/ for post-mortem reference."
   git push
   ```

5. **Run final retrospective** — review variance patterns and propose updates to estimation heuristics.
### Recovery

If an orchestrator starts and `docs/tasks.md` does not exist, check `docs/tasks/` for the most recent archive:

```bash
ls -t docs/tasks/*-tasks.md 2>/dev/null | head -1
```

If found, this may indicate another session archived the file. The orchestrator should:
1. Report what it found in `docs/tasks/`
2. Ask whether to resume from the archived file or bootstrap fresh
3. If resuming: copy the archive back to `docs/tasks.md` and continue
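The `ls -t ... | head -1` lookup above can be sketched in Python, sorting archives by modification time; the function name and directory default are illustrative:

```python
import pathlib

def latest_archive(tasks_dir="docs/tasks"):
    """Return the most recently modified *-tasks.md archive, or None.

    Equivalent to: ls -t docs/tasks/*-tasks.md | head -1
    """
    archives = sorted(
        pathlib.Path(tasks_dir).glob("*-tasks.md"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    return archives[0] if archives else None
```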

### Retention Policy

Keep all archived sprints indefinitely. They are small text files and valuable for:
- Post-mortem analysis
- Estimation variance calibration across milestones
- Understanding what was deferred and why
- Onboarding new orchestrators to project history

---

## Kickstart Message Format

The kickstart should be **minimal** — the orchestrator figures out the rest:

```markdown
## Mission
Remediate findings from the codebase review.

## Setup
- Project: /path/to/project
- Review: docs/reports/{report-name}/
- Quality gates: {command}
- Milestone: {milestone-name} (for issue creation)
- Task prefix: {PREFIX} (e.g., MS, UC)

## Protocol
Read ~/.mosaic/guides/orchestrator.md for full instructions.

## Start
Bootstrap from the review report, then execute until complete.
```

**The orchestrator will:**
1. Read this guide
2. Parse the review reports
3. Determine phases, estimates, dependencies
4. Create issues and tasks.md
5. Execute until done or blocked

---

## Quick Reference

| Phase | Action |
|-------|--------|
| Bootstrap | Parse reports → Categorize → Estimate → Create issues → Create tasks.md |
| Execute | Loop: claim → spawn worker → update → commit |
| Compact | At 60%: summarize, clear history, continue |
| Stop | Queue empty, blocker, or context limit |

**Orchestrator owns tasks.md. Workers execute and report. Single writer eliminates conflicts.**

202  guides/qa-testing.md  Normal file
@@ -0,0 +1,202 @@
# QA & Testing Guide

## Before Starting
1. Check assigned issue: `~/.mosaic/rails/git/issue-list.sh -a @me`
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Review existing test structure and patterns

## Test-Driven Development (TDD) Process

### The TDD Cycle
1. **Red**: Write a failing test first
2. **Green**: Write minimal code to pass
3. **Refactor**: Improve code while keeping tests green

### TDD Rules
- Never write production code without a failing test
- Write only enough test to fail
- Write only enough code to pass
- Refactor continuously
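A minimal red-to-green pass, sketched with a hypothetical `calculate_discount` function (this function is illustrative and also appears in the unit-test example later in this guide):

```python
# Red: this test is written first; at that point calculate_discount
# does not exist yet, so the test fails.
def test_calculate_discount_applies_percentage():
    assert calculate_discount(100, 0.20) == 80

# Green: the minimal implementation that makes the test pass.
def calculate_discount(price, rate):
    return price * (1 - rate)
```

The refactor step then improves the implementation (names, structure, duplication) while the test stays green.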

## Coverage Requirements

### Minimum Standards
- **Overall Coverage**: 85% minimum
- **Critical Paths**: 95% minimum (auth, payments, data mutations)
- **New Code**: 90% minimum

### What to Cover
- All public interfaces
- Error handling paths
- Edge cases and boundaries
- Integration points

### What NOT to Count
- Generated code
- Configuration files
- Third-party library wrappers (thin wrappers only)

## Test Categories

### Unit Tests
- Test single functions/methods in isolation
- Mock external dependencies
- Fast execution (< 100ms per test)
- No network, database, or filesystem access

```python
def test_calculate_discount_applies_percentage():
    result = calculate_discount(100, 0.20)
    assert result == 80
```

### Integration Tests
- Test multiple components together
- Use real databases (test containers)
- Test API contracts
- Slower execution acceptable

```python
def test_create_user_persists_to_database(db_session):
    user = create_user(db_session, "test@example.com")
    retrieved = get_user_by_email(db_session, "test@example.com")
    assert retrieved.id == user.id
```

### End-to-End Tests
- Test complete user workflows
- Use a real browser (Playwright, Cypress)
- Test critical paths only (expensive to maintain)

```javascript
test('user can complete checkout', async ({ page }) => {
  await page.goto('/products');
  await page.click('[data-testid="add-to-cart"]');
  await page.click('[data-testid="checkout"]');
  await page.fill('#email', 'test@example.com');
  await page.click('[data-testid="submit-order"]');
  await expect(page.locator('.order-confirmation')).toBeVisible();
});
```

## Test Structure

### Naming Convention
```
test_{what}_{condition}_{expected_result}

Examples:
- test_login_with_valid_credentials_returns_token
- test_login_with_invalid_password_returns_401
- test_get_user_when_not_found_returns_404
```

### Arrange-Act-Assert Pattern
```python
def test_add_item_to_cart_increases_count():
    # Arrange
    cart = Cart()
    item = Item(id=1, name="Widget", price=9.99)

    # Act
    cart.add(item)

    # Assert
    assert cart.item_count == 1
    assert cart.total == 9.99
```

### Test Isolation
- Each test should be independent
- Use setup/teardown for common state
- Clean up after tests
- Don't rely on test execution order
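The isolation rules can be demonstrated with stdlib `unittest` setup/teardown. The `Cart`/`Item` classes below are minimal stand-ins for the ones in the Arrange-Act-Assert example, written out so the sketch is self-contained:

```python
import unittest
from dataclasses import dataclass

# Minimal stand-ins for the Cart/Item used in the Arrange-Act-Assert example.
@dataclass
class Item:
    id: int
    name: str
    price: float

class Cart:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def clear(self):
        self.items.clear()
    @property
    def item_count(self):
        return len(self.items)

class CartTests(unittest.TestCase):
    def setUp(self):
        self.cart = Cart()    # fresh state before every test

    def tearDown(self):
        self.cart.clear()     # clean up so nothing leaks to the next test

    def test_add_increases_count(self):
        self.cart.add(Item(1, "Widget", 9.99))
        self.assertEqual(self.cart.item_count, 1)

    def test_cart_starts_empty(self):
        # Passes regardless of execution order because setUp rebuilds state.
        self.assertEqual(self.cart.item_count, 0)
```

Because `setUp` runs before each test method, neither test can observe state left behind by the other.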

## Mocking Guidelines

### When to Mock
- External APIs and services
- Time-dependent operations
- Random number generation
- Expensive operations

### When NOT to Mock
- The code under test
- Simple data structures
- Database in integration tests

### Mock Example
```python
def test_send_notification_calls_email_service(mocker):
    mock_email = mocker.patch('services.email.send')

    send_notification(user_id=1, message="Hello")

    mock_email.assert_called_once_with(
        to="user@example.com",
        subject="Notification",
        body="Hello"
    )
```

## Test Data Management

### Fixtures
- Use factories for complex objects
- Keep test data close to tests
- Use realistic but anonymized data

### Database Tests
- Use transactions with rollback
- Or use test containers
- Never test against production data

## Reporting

### Test Reports Should Include
- Total tests run
- Pass/fail counts
- Coverage percentage
- Execution time
- Flaky test identification

### QA Report Template
```markdown
# QA Report - Issue #{number}

## Summary
- Tests Added: X
- Tests Modified: Y
- Coverage: XX%

## Test Results
- Passed: X
- Failed: X
- Skipped: X

## Coverage Analysis
- Lines: XX%
- Branches: XX%
- Functions: XX%

## Notes
[Any observations or concerns]
```

## Commit Format
```
test(#34): Add user registration tests

- Unit tests for validation logic
- Integration tests for /api/users endpoint
- Coverage increased from 72% to 87%

Refs #34
```

## Before Completing
1. All tests pass locally
2. Coverage meets 85% threshold
3. No flaky tests introduced
4. CI pipeline passes
5. Update scratchpad with results

382  guides/typescript.md  Normal file
@@ -0,0 +1,382 @@
# TypeScript Style Guide

**Authority**: This guide is MANDATORY for all TypeScript code. No exceptions without explicit approval.

Based on the Google TypeScript Style Guide with stricter enforcement.

---

## Core Principles

1. **Explicit over implicit** — Always declare types, never rely on inference for public APIs
2. **Specific over generic** — Use the narrowest type that works
3. **Safe over convenient** — Type safety is not negotiable

---

## Forbidden Patterns (NEVER USE)

### `any` Type — FORBIDDEN
```typescript
// ❌ NEVER
function process(data: any) { }
const result: any = fetchData();
Record<string, any>

// ✅ ALWAYS define explicit types
interface UserData {
  id: string;
  name: string;
  email: string;
}
function process(data: UserData) { }
```
|
||||||
|
|
||||||
|
### `unknown` as Lazy Typing — FORBIDDEN
|
||||||
|
`unknown` is only acceptable in these specific cases:
|
||||||
|
1. Error catch blocks (then immediately narrow)
|
||||||
|
2. JSON.parse results (then validate with Zod/schema)
|
||||||
|
3. External API responses before validation
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// ❌ NEVER - using unknown to avoid typing
|
||||||
|
function getData(): unknown { }
|
||||||
|
const config: Record<string, unknown> = {};
|
||||||
|
|
||||||
|
// ✅ ACCEPTABLE - error handling with immediate narrowing
|
||||||
|
try {
|
||||||
|
riskyOperation();
|
||||||
|
} catch (error: unknown) {
|
||||||
|
if (error instanceof Error) {
|
||||||
|
logger.error(error.message);
|
||||||
|
} else {
|
||||||
|
logger.error('Unknown error', { error: String(error) });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ✅ ACCEPTABLE - external data with validation
|
||||||
|
const raw: unknown = JSON.parse(response);
|
||||||
|
const validated = UserSchema.parse(raw); // Zod validation
|
||||||
|
```
|
||||||
|
|
||||||
|
### Implicit `any` — FORBIDDEN
|
||||||
|
```typescript
|
||||||
|
// ❌ NEVER - implicit any from missing types
|
||||||
|
function process(data) { } // Parameter has implicit any
|
||||||
|
const handler = (e) => { } // Parameter has implicit any
|
||||||
|
|
||||||
|
// ✅ ALWAYS - explicit types
|
||||||
|
function process(data: RequestPayload): ProcessedResult { }
|
||||||
|
const handler = (e: React.MouseEvent<HTMLButtonElement>): void => { }
|
||||||
|
```
|
||||||
|
|
||||||
|
### Type Assertions to Bypass Safety — FORBIDDEN
|
||||||
|
```typescript
|
||||||
|
// ❌ NEVER - lying to the compiler
|
||||||
|
const user = data as User;
|
||||||
|
const element = document.getElementById('app') as HTMLDivElement;
|
||||||
|
|
||||||
|
// ✅ USE - type guards and narrowing
|
||||||
|
function isUser(data: unknown): data is User {
|
||||||
|
return typeof data === 'object' && data !== null && 'id' in data;
|
||||||
|
}
|
||||||
|
if (isUser(data)) {
|
||||||
|
console.log(data.id); // Safe
|
||||||
|
}
|
||||||
|
|
||||||
|
// ✅ USE - null checks
|
||||||
|
const element = document.getElementById('app');
|
||||||
|
if (element instanceof HTMLDivElement) {
|
||||||
|
element.style.display = 'none'; // Safe
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Non-null Assertion (`!`) — FORBIDDEN (except tests)
|
||||||
|
```typescript
|
||||||
|
// ❌ NEVER in production code
|
||||||
|
const name = user!.name;
|
||||||
|
const element = document.getElementById('app')!;
|
||||||
|
|
||||||
|
// ✅ USE - proper null handling
|
||||||
|
const name = user?.name ?? 'Anonymous';
|
||||||
|
const element = document.getElementById('app');
|
||||||
|
if (element) {
|
||||||
|
// Safe to use element
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Required Patterns
|
||||||
|
|
||||||
|
### Explicit Return Types — REQUIRED for all public functions
|
||||||
|
```typescript
|
||||||
|
// ❌ WRONG - missing return type
|
||||||
|
export function calculateTotal(items: Item[]) {
|
||||||
|
return items.reduce((sum, item) => sum + item.price, 0);
|
||||||
|
}
|
||||||
|
|
||||||
|
// ✅ CORRECT - explicit return type
|
||||||
|
export function calculateTotal(items: Item[]): number {
|
||||||
|
return items.reduce((sum, item) => sum + item.price, 0);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Explicit Parameter Types — REQUIRED always
|
||||||
|
```typescript
|
||||||
|
// ❌ WRONG
|
||||||
|
const multiply = (a, b) => a * b;
|
||||||
|
users.map(user => user.name); // If user type isn't inferred
|
||||||
|
|
||||||
|
// ✅ CORRECT
|
||||||
|
const multiply = (a: number, b: number): number => a * b;
|
||||||
|
users.map((user: User): string => user.name);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Interface Over Type Alias — PREFERRED for objects
|
||||||
|
```typescript
|
||||||
|
// ✅ PREFERRED - interface (extendable, better error messages)
|
||||||
|
interface User {
|
||||||
|
id: string;
|
||||||
|
name: string;
|
||||||
|
email: string;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ✅ ACCEPTABLE - type alias for unions, intersections, primitives
|
||||||
|
type Status = 'active' | 'inactive' | 'pending';
|
||||||
|
type ID = string | number;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Const Assertions for Literals — REQUIRED
|
||||||
|
```typescript
|
||||||
|
// ❌ WRONG - loses literal types
|
||||||
|
const config = {
|
||||||
|
endpoint: '/api/users',
|
||||||
|
method: 'GET',
|
||||||
|
};
|
||||||
|
// config.method is string, not 'GET'
|
||||||
|
|
||||||
|
// ✅ CORRECT - preserves literal types
|
||||||
|
const config = {
|
||||||
|
endpoint: '/api/users',
|
||||||
|
method: 'GET',
|
||||||
|
} as const;
|
||||||
|
// config.method is 'GET'
|
||||||
|
```
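`as const` also pairs well with `typeof` to derive a union type from literal data, keeping one source of truth for both the runtime value and the type. A minimal sketch (the `METHODS` tuple and `isMethod` guard are illustrative, not part of any shared library):

```typescript
// Deriving a union type from an `as const` tuple.
const METHODS = ['GET', 'POST', 'PUT', 'DELETE'] as const;
type Method = (typeof METHODS)[number]; // 'GET' | 'POST' | 'PUT' | 'DELETE'

// Runtime guard that narrows a plain string to the derived union.
function isMethod(value: string): value is Method {
  return (METHODS as readonly string[]).includes(value);
}
```

Adding a method to `METHODS` automatically widens `Method`; there is no separate type to keep in sync.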

### Discriminated Unions — REQUIRED for variants

```typescript
// ❌ WRONG - optional properties for variants
interface ApiResponse {
  success: boolean;
  data?: User;
  error?: string;
}

// ✅ CORRECT - discriminated union
interface SuccessResponse {
  success: true;
  data: User;
}
interface ErrorResponse {
  success: false;
  error: string;
}
type ApiResponse = SuccessResponse | ErrorResponse;
```

---

## Generic Constraints

### Meaningful Constraints — REQUIRED

```typescript
// ❌ WRONG - unconstrained generic
function merge<T>(a: T, b: T): T { }

// ✅ CORRECT - constrained generic
function merge<T extends object>(a: T, b: Partial<T>): T { }
```

### Default Generic Parameters — USE SPECIFIC TYPES

```typescript
// ❌ WRONG
interface Repository<T = unknown> { }

// ✅ CORRECT - no default if the type should be explicit
interface Repository<T extends Entity> { }

// ✅ ACCEPTABLE - meaningful default
interface Cache<T extends Serializable = JsonValue> { }
```

---

## React/JSX Specific

### Event Handlers — EXPLICIT TYPES REQUIRED

```typescript
// ❌ WRONG
const handleClick = (e) => { };
const handleChange = (e) => { };

// ✅ CORRECT
const handleClick = (e: React.MouseEvent<HTMLButtonElement>): void => { };
const handleChange = (e: React.ChangeEvent<HTMLInputElement>): void => { };
const handleSubmit = (e: React.FormEvent<HTMLFormElement>): void => { };
```

### Component Props — INTERFACE REQUIRED

```typescript
// ❌ WRONG - inline types
function Button({ label, onClick }: { label: string; onClick: () => void }) { }

// ✅ CORRECT - named interface
interface ButtonProps {
  label: string;
  onClick: () => void;
  disabled?: boolean;
}

function Button({ label, onClick, disabled = false }: ButtonProps): JSX.Element {
  return <button onClick={onClick} disabled={disabled}>{label}</button>;
}
```

### Children Prop — USE React.ReactNode

```typescript
interface LayoutProps {
  children: React.ReactNode;
  sidebar?: React.ReactNode;
}
```

---

## API Response Typing

### Define Explicit Response Types

```typescript
// ❌ WRONG
const response = await fetch('/api/users');
const data = await response.json(); // data is any

// ✅ CORRECT
interface UsersResponse {
  users: User[];
  pagination: PaginationInfo;
}

const response = await fetch('/api/users');
const data: UsersResponse = await response.json();

// ✅ BEST - with runtime validation
const response = await fetch('/api/users');
const raw = await response.json();
const data = UsersResponseSchema.parse(raw); // Zod validates at runtime
```
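When a schema library is not available, the same runtime guarantee can be approximated with a hand-written type guard over `unknown`. A minimal sketch (the `User` and `UsersResponse` shapes are illustrative, and pagination is omitted for brevity):

```typescript
interface User { id: string; name: string; }
interface UsersResponse { users: User[]; }

// Validates untrusted JSON before it is treated as UsersResponse.
// Every field is checked at runtime; the `raw is UsersResponse`
// predicate lets the compiler narrow after a successful check.
function isUsersResponse(raw: unknown): raw is UsersResponse {
  if (typeof raw !== 'object' || raw === null) return false;
  const users = (raw as { users?: unknown }).users;
  return Array.isArray(users) && users.every(
    (u) =>
      typeof u === 'object' && u !== null &&
      typeof (u as { id?: unknown }).id === 'string' &&
      typeof (u as { name?: unknown }).name === 'string'
  );
}
```

Hand-rolled guards get tedious for large shapes, which is why a schema library remains the preferred option above.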

---

## Error Handling

### Typed Error Classes — REQUIRED for domain errors

```typescript
class ValidationError extends Error {
  constructor(
    message: string,
    public readonly field: string,
    public readonly code: string
  ) {
    super(message);
    this.name = 'ValidationError';
  }
}

class NotFoundError extends Error {
  constructor(
    public readonly resource: string,
    public readonly id: string
  ) {
    super(`${resource} with id ${id} not found`);
    this.name = 'NotFoundError';
  }
}
```

### Error Narrowing — REQUIRED

```typescript
try {
  await saveUser(user);
} catch (error: unknown) {
  if (error instanceof ValidationError) {
    return { error: error.message, field: error.field };
  }
  if (error instanceof NotFoundError) {
    return { error: 'Not found', resource: error.resource };
  }
  if (error instanceof Error) {
    logger.error('Unexpected error', { message: error.message, stack: error.stack });
    return { error: 'Internal error' };
  }
  logger.error('Unknown error type', { error: String(error) });
  return { error: 'Internal error' };
}
```

---

## ESLint Rules — ENFORCE THESE

```javascript
{
  "@typescript-eslint/no-explicit-any": "error",
  "@typescript-eslint/explicit-function-return-type": ["error", {
    "allowExpressions": true,
    "allowTypedFunctionExpressions": true
  }],
  "@typescript-eslint/explicit-module-boundary-types": "error",
  "@typescript-eslint/no-inferrable-types": "off", // Allow explicit primitives
  "@typescript-eslint/no-non-null-assertion": "error",
  "@typescript-eslint/strict-boolean-expressions": "error",
  "@typescript-eslint/no-unsafe-assignment": "error",
  "@typescript-eslint/no-unsafe-member-access": "error",
  "@typescript-eslint/no-unsafe-call": "error",
  "@typescript-eslint/no-unsafe-return": "error"
}
```

---

## TSConfig Strict Mode — REQUIRED

```json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "useUnknownInCatchVariables": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noImplicitOverride": true
  }
}
```

---

## Summary: The Type Safety Hierarchy

From best to worst:

1. **Explicit specific type** (interface/type) — REQUIRED
2. **Generic with constraints** — ACCEPTABLE
3. **`unknown` with immediate validation** — ONLY for external data
4. **`any`** — FORBIDDEN

**When in doubt, define an interface.**
guides/vault-secrets.md (Normal file, 192 lines)
@@ -0,0 +1,192 @@
# Vault Secrets Management Guide

This guide applies when the project uses HashiCorp Vault for secrets management.

## Before Starting

1. Verify Vault access: `vault status`
2. Authenticate: `vault login` (method depends on environment)
3. Check your permissions for the required paths

## Canonical Structure

**ALL Vault secrets MUST follow this structure:**

```
{mount}/{service}/{component}/{secret-name}
```

### Components

- **mount**: Environment-specific mount point
- **service**: The service or application name
- **component**: Logical grouping (database, api, oauth, etc.)
- **secret-name**: Specific secret identifier

## Environment Mounts

| Mount | Environment | Usage |
|-------|-------------|-------|
| `secret-dev/` | Development | Local dev, CI |
| `secret-staging/` | Staging | Pre-production testing |
| `secret-prod/` | Production | Live systems |

## Examples

```bash
# Database credentials
secret-prod/postgres/database/app
secret-prod/mysql/database/readonly
secret-staging/redis/auth/default

# API tokens
secret-prod/authentik/admin/token
secret-prod/stripe/api/live-key
secret-dev/sendgrid/api/test-key

# JWT/Authentication
secret-prod/backend-api/jwt/signing-key
secret-prod/auth-service/session/secret

# OAuth providers
secret-prod/backend-api/oauth/google
secret-prod/backend-api/oauth/github

# Internal services
secret-prod/loki/read-auth/admin
secret-prod/grafana/admin/password
```

## Standard Field Names

Use consistent field names within secrets:

| Purpose | Fields |
|---------|--------|
| Credentials | `username`, `password` |
| Tokens | `token` |
| OAuth | `client_id`, `client_secret` |
| Connection | `url`, `host`, `port` |
| Keys | `public_key`, `private_key` |

### Example Secret Structure

```json
// secret-prod/postgres/database/app
{
  "username": "app_user",
  "password": "secure-password-here",
  "host": "db.example.com",
  "port": "5432",
  "database": "myapp"
}
```

## Rules

1. **DO NOT GUESS** secret paths - always verify the path exists
2. **Use helper scripts** in `scripts/vault/` when available
3. **All lowercase, hyphenated** (kebab-case) for all path segments
4. **Standard field names** - use the conventions above
5. **No sensitive data in path names** - the path itself should not reveal secrets
6. **Environment separation** - never reference prod secrets from dev
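The canonical structure and kebab-case rules can be checked mechanically before a path is used. A minimal sketch (the function name and regex are illustrative assumptions, not an existing helper in `scripts/vault/`):

```shell
#!/bin/bash
# Returns success when a path matches {mount}/{service}/{component}/{secret-name}
# with a known environment mount and all-lowercase kebab-case segments.
is_canonical_vault_path() {
  local seg='[a-z0-9]+(-[a-z0-9]+)*'
  local re="^secret-(dev|staging|prod)/${seg}/${seg}/${seg}\$"
  [[ "$1" =~ $re ]]
}

is_canonical_vault_path "secret-prod/postgres/database/app" && echo "ok"
is_canonical_vault_path "secret/oauth/google" || echo "deprecated or malformed"
```

A check like this could run in CI or a pre-commit hook to catch legacy path layouts early.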

## Deprecated Paths (DO NOT USE)

These legacy patterns are deprecated and should be migrated:

| Deprecated | Migrate To |
|------------|------------|
| `secret/infrastructure/*` | `secret-{env}/{service}/...` |
| `secret/oauth/*` | `secret-{env}/{service}/oauth/{provider}` |
| `secret/database/*` | `secret-{env}/{service}/database/{user}` |
| `secret/credentials/*` | `secret-{env}/{service}/{component}/{name}` |

## Reading Secrets

### CLI

```bash
# Read a secret
vault kv get secret-prod/postgres/database/app

# Get a specific field
vault kv get -field=password secret-prod/postgres/database/app

# JSON output
vault kv get -format=json secret-prod/postgres/database/app
```

### Application Code

**Python (hvac):**

```python
import hvac

client = hvac.Client(url='https://vault.example.com')
secret = client.secrets.kv.v2.read_secret_version(
    path='postgres/database/app',
    mount_point='secret-prod'
)
password = secret['data']['data']['password']
```

**Node.js (node-vault):**

```javascript
const vault = require('node-vault')({ endpoint: 'https://vault.example.com' });
const secret = await vault.read('secret-prod/data/postgres/database/app');
const password = secret.data.data.password;
```

**Go:**

```go
secret, err := client.Logical().Read("secret-prod/data/postgres/database/app")
password := secret.Data["data"].(map[string]interface{})["password"].(string)
```

## Writing Secrets

Only authorized personnel should write secrets. If you need a new secret:

1. Request it through proper channels (ticket, PR to the IaC repo)
2. Follow the canonical structure
3. Document the secret's purpose
4. Set appropriate access policies

```bash
# Example (requires write permissions)
vault kv put secret-dev/myapp/database/app \
  username="dev_user" \
  password="dev-password" \
  host="localhost" \
  port="5432"
```

## Troubleshooting

### Permission Denied

```
Error: permission denied
```

- Verify your token has read access to the path
- Check that you're using the correct mount point
- Confirm the secret path exists

### Secret Not Found

```
Error: no value found at secret-prod/data/service/component/name
```

- Verify the exact path (use `vault kv list` to explore)
- Check for typos in service/component names
- Confirm you're using the correct environment mount

### Token Expired

```
Error: token expired
```

- Re-authenticate: `vault login`
- Check the token TTL: `vault token lookup`

## Security Best Practices

1. **Least privilege** - request only the permissions you need
2. **Short-lived tokens** - use tokens with appropriate TTLs
3. **Audit logging** - all access is logged; act accordingly
4. **No local copies** - don't store secrets in files or env vars long-term
5. **Rotate on compromise** - immediately rotate any exposed secrets
@@ -18,6 +18,11 @@ chmod +x "$TARGET_DIR"/install.sh
echo "[mosaic-install] Installed framework to $TARGET_DIR"

echo "[mosaic-install] Linking runtime compatibility assets"
if ! "$TARGET_DIR/bin/mosaic-link-runtime-assets"; then
  echo "[mosaic-install] WARNING: runtime asset linking failed (framework install still complete)" >&2
fi

echo "[mosaic-install] Syncing universal skills"
if [[ "${MOSAIC_SKIP_SKILLS_SYNC:-0}" == "1" ]]; then
  echo "[mosaic-install] Skipping skills sync (MOSAIC_SKIP_SKILLS_SYNC=1)"
rails/bootstrap/agent-lint.sh (Executable file, 289 lines)
@@ -0,0 +1,289 @@
#!/bin/bash
# agent-lint.sh — Audit agent configuration across all coding projects
#
# Usage:
#   agent-lint.sh                    # Scan all projects in ~/src/
#   agent-lint.sh --project <path>   # Scan single project
#   agent-lint.sh --json             # Output JSON for jarvis-brain
#   agent-lint.sh --verbose          # Show per-check details
#   agent-lint.sh --fix-hint         # Show fix commands for failures
#
# Checks per project:
#   1. Has CLAUDE.md?
#   2. Has AGENTS.md?
#   3. CLAUDE.md references agent-guides (conditional loading)?
#   4. CLAUDE.md has quality gates?
#   5. For monorepos: sub-directories have AGENTS.md?

set -euo pipefail

# Defaults
SRC_DIR="$HOME/src"
SINGLE_PROJECT=""
JSON_OUTPUT=false
VERBOSE=false
FIX_HINT=false

# Exclusion patterns (not coding projects)
EXCLUDE_PATTERNS=(
  "_worktrees"
  ".backup"
  "_old"
  "_bak"
  "junk"
  "traefik"
  "infrastructure"
)

# Parse args
while [[ $# -gt 0 ]]; do
  case "$1" in
    --project) SINGLE_PROJECT="$2"; shift 2 ;;
    --json) JSON_OUTPUT=true; shift ;;
    --verbose) VERBOSE=true; shift ;;
    --fix-hint) FIX_HINT=true; shift ;;
    --src-dir) SRC_DIR="$2"; shift 2 ;;
    -h|--help)
      echo "Usage: agent-lint.sh [--project <path>] [--json] [--verbose] [--fix-hint] [--src-dir <dir>]"
      exit 0
      ;;
    *) echo "Unknown option: $1"; exit 1 ;;
  esac
done

# Colors (disabled for JSON mode)
if $JSON_OUTPUT; then
  GREEN="" RED="" YELLOW="" NC="" BOLD="" DIM=""
else
  GREEN='\033[0;32m' RED='\033[0;31m' YELLOW='\033[0;33m'
  NC='\033[0m' BOLD='\033[1m' DIM='\033[2m'
fi

# Determine if a directory is a coding project
is_coding_project() {
  local dir="$1"
  [[ -f "$dir/package.json" ]] || \
  [[ -f "$dir/pyproject.toml" ]] || \
  [[ -f "$dir/Cargo.toml" ]] || \
  [[ -f "$dir/go.mod" ]] || \
  [[ -f "$dir/Makefile" && -f "$dir/src/main.rs" ]] || \
  [[ -f "$dir/pom.xml" ]] || \
  [[ -f "$dir/build.gradle" ]]
}

# Check if directory should be excluded
is_excluded() {
  local dir_name
  dir_name=$(basename "$1")
  for pattern in "${EXCLUDE_PATTERNS[@]}"; do
    if [[ "$dir_name" == *"$pattern"* ]]; then
      return 0
    fi
  done
  return 1
}

# Detect if project is a monorepo
is_monorepo() {
  local dir="$1"
  [[ -f "$dir/pnpm-workspace.yaml" ]] || \
  [[ -f "$dir/turbo.json" ]] || \
  [[ -f "$dir/lerna.json" ]] || \
  (grep -q '"workspaces"' "$dir/package.json" 2>/dev/null)
}

# Check for CLAUDE.md
check_claude_md() {
  [[ -f "$1/CLAUDE.md" ]]
}

# Check for AGENTS.md
check_agents_md() {
  [[ -f "$1/AGENTS.md" ]]
}

# Check conditional loading (references agent-guides)
check_conditional_loading() {
  local claude_md="$1/CLAUDE.md"
  [[ -f "$claude_md" ]] && grep -qi "agent-guides\|conditional.*loading\|conditional.*documentation" "$claude_md" 2>/dev/null
}

# Check quality gates
check_quality_gates() {
  local claude_md="$1/CLAUDE.md"
  [[ -f "$claude_md" ]] && grep -qi "quality.gates\|must pass before\|lint\|typecheck\|test" "$claude_md" 2>/dev/null
}

# Check monorepo sub-AGENTS.md
check_monorepo_sub_agents() {
  local dir="$1"
  local missing=()

  if ! is_monorepo "$dir"; then
    echo "N/A"
    return
  fi

  # Check apps/, packages/, services/, plugins/ directories
  for subdir_type in apps packages services plugins; do
    if [[ -d "$dir/$subdir_type" ]]; then
      for subdir in "$dir/$subdir_type"/*/; do
        [[ -d "$subdir" ]] || continue
        # Only check if it has its own manifest
        if [[ -f "$subdir/package.json" ]] || [[ -f "$subdir/pyproject.toml" ]]; then
          if [[ ! -f "$subdir/AGENTS.md" ]]; then
            missing+=("$(basename "$subdir")")
          fi
        fi
      done
    fi
  done

  if [[ ${#missing[@]} -eq 0 ]]; then
    echo "OK"
  else
    echo "MISS:${missing[*]}"
  fi
}

# Lint a single project
lint_project() {
  local dir="$1"
  local name
  name=$(basename "$dir")

  local has_claude has_agents has_guides has_quality mono_status
  local score=0 max_score=4

  check_claude_md "$dir" && has_claude="OK" || has_claude="MISS"
  check_agents_md "$dir" && has_agents="OK" || has_agents="MISS"
  check_conditional_loading "$dir" && has_guides="OK" || has_guides="MISS"
  check_quality_gates "$dir" && has_quality="OK" || has_quality="MISS"
  mono_status=$(check_monorepo_sub_agents "$dir")

  [[ "$has_claude" == "OK" ]] && ((score++)) || true
  [[ "$has_agents" == "OK" ]] && ((score++)) || true
  [[ "$has_guides" == "OK" ]] && ((score++)) || true
  [[ "$has_quality" == "OK" ]] && ((score++)) || true

  if $JSON_OUTPUT; then
    cat <<JSONEOF
{
  "project": "$name",
  "path": "$dir",
  "claude_md": "$has_claude",
  "agents_md": "$has_agents",
  "conditional_loading": "$has_guides",
  "quality_gates": "$has_quality",
  "monorepo_sub_agents": "$mono_status",
  "score": $score,
  "max_score": $max_score
}
JSONEOF
  else
    # Color-code the status
    local c_claude c_agents c_guides c_quality
    [[ "$has_claude" == "OK" ]] && c_claude="${GREEN} OK ${NC}" || c_claude="${RED} MISS ${NC}"
    [[ "$has_agents" == "OK" ]] && c_agents="${GREEN} OK ${NC}" || c_agents="${RED} MISS ${NC}"
    [[ "$has_guides" == "OK" ]] && c_guides="${GREEN} OK ${NC}" || c_guides="${RED} MISS ${NC}"
    [[ "$has_quality" == "OK" ]] && c_quality="${GREEN} OK ${NC}" || c_quality="${RED} MISS ${NC}"

    local score_color="$RED"
    [[ $score -ge 3 ]] && score_color="$YELLOW"
    [[ $score -eq 4 ]] && score_color="$GREEN"

    printf " %-35s %b %b %b %b ${score_color}%d/%d${NC}" \
      "$name" "$c_claude" "$c_agents" "$c_guides" "$c_quality" "$score" "$max_score"

    # Show monorepo status if applicable
    if [[ "$mono_status" != "N/A" && "$mono_status" != "OK" ]]; then
      printf " ${YELLOW}(mono: %s)${NC}" "$mono_status"
    fi
    echo ""
  fi

  if $VERBOSE && ! $JSON_OUTPUT; then
    # echo -e so the color escape sequences are interpreted
    [[ "$has_claude" == "MISS" ]] && echo -e "   ${DIM}CLAUDE.md missing${NC}"
    [[ "$has_agents" == "MISS" ]] && echo -e "   ${DIM}AGENTS.md missing${NC}"
    [[ "$has_guides" == "MISS" ]] && echo -e "   ${DIM}No conditional loading (agent-guides not referenced)${NC}"
    [[ "$has_quality" == "MISS" ]] && echo -e "   ${DIM}No quality gates section${NC}"
    if [[ "$mono_status" == MISS:* ]]; then
      echo -e "   ${DIM}Monorepo sub-AGENTS.md missing: ${mono_status#MISS:}${NC}"
    fi
  fi

  if $FIX_HINT && ! $JSON_OUTPUT; then
    if [[ "$has_claude" == "MISS" || "$has_agents" == "MISS" ]]; then
      echo -e "   ${DIM}Fix: ~/.mosaic/rails/bootstrap/init-project.sh --name \"$name\" --type auto${NC}"
    elif [[ "$has_guides" == "MISS" ]]; then
      echo -e "   ${DIM}Fix: ~/.mosaic/rails/bootstrap/agent-upgrade.sh $dir --section conditional-loading${NC}"
    fi
  fi

  # Return score for summary
  echo "$score" > /tmp/agent-lint-score-$$
}

# Main
main() {
  local projects=()
  local total=0 passing=0 total_score=0

  if [[ -n "$SINGLE_PROJECT" ]]; then
    projects=("$SINGLE_PROJECT")
  else
    for dir in "$SRC_DIR"/*/; do
      [[ -d "$dir" ]] || continue
      is_excluded "$dir" && continue
      is_coding_project "$dir" && projects+=("${dir%/}")
    done
  fi

  if [[ ${#projects[@]} -eq 0 ]]; then
    echo "No coding projects found."
    exit 0
  fi

  if $JSON_OUTPUT; then
    echo '{ "audit_date": "'$(date -I)'", "projects": ['
    local first=true
    for dir in "${projects[@]}"; do
      $first || echo ","
      first=false
      lint_project "$dir"
    done
    echo '] }'
    rm -f /tmp/agent-lint-score-$$
  else
    echo ""
    echo -e "${BOLD}Agent Configuration Audit — $(date +%Y-%m-%d)${NC}"
    echo "========================================================"
    printf " %-35s %s %s %s %s %s\n" \
      "Project" "CLAUDE" "AGENTS" "Guides" "Quality" "Score"
    echo " -----------------------------------------------------------------------"

    for dir in "${projects[@]}"; do
      lint_project "$dir"
      local score
      score=$(cat /tmp/agent-lint-score-$$ 2>/dev/null || echo 0)
      ((total++)) || true
      ((total_score += score)) || true
      [[ $score -eq 4 ]] && ((passing++)) || true
    done

    rm -f /tmp/agent-lint-score-$$

    echo " -----------------------------------------------------------------------"
    local need_attention=$((total - passing))
    echo ""
    echo -e " ${BOLD}Summary:${NC} $total projects | ${GREEN}$passing pass${NC} | ${RED}$need_attention need attention${NC}"
    echo ""

    if [[ $need_attention -gt 0 ]] && ! $FIX_HINT; then
      echo -e " ${DIM}Run with --fix-hint for suggested fixes${NC}"
      echo -e " ${DIM}Run with --verbose for per-check details${NC}"
      echo ""
    fi
  fi
}

main
rails/bootstrap/agent-upgrade.sh (Executable file, 318 lines)
@@ -0,0 +1,318 @@
#!/bin/bash
# agent-upgrade.sh — Non-destructively upgrade agent configuration in projects
#
# Usage:
#   agent-upgrade.sh <project-path>                         # Upgrade one project
#   agent-upgrade.sh --all                                  # Upgrade all projects in ~/src/
#   agent-upgrade.sh --all --dry-run                        # Preview what would change
#   agent-upgrade.sh <path> --section conditional-loading   # Inject specific section
#   agent-upgrade.sh <path> --create-agents                 # Create AGENTS.md if missing
#   agent-upgrade.sh <path> --monorepo-scan                 # Create sub-AGENTS.md for monorepo dirs
#
# Safety:
#   - Creates a .bak backup before any modification
#   - Append-only — never modifies existing sections
#   - --dry-run shows what would change without writing

set -euo pipefail

# Defaults
SRC_DIR="$HOME/src"
FRAGMENTS_DIR="$HOME/.mosaic/templates/agent/fragments"
TEMPLATES_DIR="$HOME/.mosaic/templates/agent"
DRY_RUN=false
ALL_PROJECTS=false
TARGET_PATH=""
SECTION_ONLY=""
CREATE_AGENTS=false
MONOREPO_SCAN=false

# Exclusion patterns (same as agent-lint.sh)
EXCLUDE_PATTERNS=(
  "_worktrees"
  ".backup"
  "_old"
  "_bak"
  "junk"
  "traefik"
  "infrastructure"
)

# Colors
GREEN='\033[0;32m' RED='\033[0;31m' YELLOW='\033[0;33m'
NC='\033[0m' BOLD='\033[1m' DIM='\033[2m'

# Parse args
while [[ $# -gt 0 ]]; do
  case "$1" in
    --all) ALL_PROJECTS=true; shift ;;
    --dry-run) DRY_RUN=true; shift ;;
    --section) SECTION_ONLY="$2"; shift 2 ;;
    --create-agents) CREATE_AGENTS=true; shift ;;
    --monorepo-scan) MONOREPO_SCAN=true; shift ;;
    --src-dir) SRC_DIR="$2"; shift 2 ;;
    -h|--help)
      echo "Usage: agent-upgrade.sh [<project-path>|--all] [--dry-run] [--section <name>] [--create-agents] [--monorepo-scan]"
      echo ""
      echo "Options:"
      echo "  --all              Upgrade all projects in ~/src/"
      echo "  --dry-run          Preview changes without writing"
      echo "  --section <name>   Inject only a specific fragment (conditional-loading, commit-format, secrets, multi-agent, code-review, campsite-rule)"
|
echo " --create-agents Create AGENTS.md if missing"
|
||||||
|
echo " --monorepo-scan Create sub-AGENTS.md for monorepo directories"
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
if [[ -d "$1" ]]; then
|
||||||
|
TARGET_PATH="$1"
|
||||||
|
else
|
||||||
|
echo "Unknown option or invalid path: $1"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
shift
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
|
||||||
|
if ! $ALL_PROJECTS && [[ -z "$TARGET_PATH" ]]; then
|
||||||
|
echo "Error: Specify a project path or use --all"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Helpers
|
||||||
|
is_coding_project() {
|
||||||
|
local dir="$1"
|
||||||
|
[[ -f "$dir/package.json" ]] || \
|
||||||
|
[[ -f "$dir/pyproject.toml" ]] || \
|
||||||
|
[[ -f "$dir/Cargo.toml" ]] || \
|
||||||
|
[[ -f "$dir/go.mod" ]] || \
|
||||||
|
[[ -f "$dir/pom.xml" ]] || \
|
||||||
|
[[ -f "$dir/build.gradle" ]]
|
||||||
|
}
|
||||||
|
|
||||||
|
is_excluded() {
|
||||||
|
local dir_name
|
||||||
|
dir_name=$(basename "$1")
|
||||||
|
for pattern in "${EXCLUDE_PATTERNS[@]}"; do
|
||||||
|
[[ "$dir_name" == *"$pattern"* ]] && return 0
|
||||||
|
done
|
||||||
|
return 1
|
||||||
|
}
|
||||||
|
|
||||||
|
is_monorepo() {
|
||||||
|
local dir="$1"
|
||||||
|
[[ -f "$dir/pnpm-workspace.yaml" ]] || \
|
||||||
|
[[ -f "$dir/turbo.json" ]] || \
|
||||||
|
[[ -f "$dir/lerna.json" ]] || \
|
||||||
|
(grep -q '"workspaces"' "$dir/package.json" 2>/dev/null)
|
||||||
|
}
|
||||||
|
|
||||||
|
has_section() {
|
||||||
|
local file="$1"
|
||||||
|
local pattern="$2"
|
||||||
|
[[ -f "$file" ]] && grep -qi "$pattern" "$file" 2>/dev/null
|
||||||
|
}
|
||||||
|
|
||||||
|
backup_file() {
|
||||||
|
local file="$1"
|
||||||
|
if [[ -f "$file" ]] && ! $DRY_RUN; then
|
||||||
|
cp "$file" "${file}.bak"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Inject a fragment into CLAUDE.md if the section doesn't exist
|
||||||
|
inject_fragment() {
|
||||||
|
local project_dir="$1"
|
||||||
|
local fragment_name="$2"
|
||||||
|
local claude_md="$project_dir/CLAUDE.md"
|
||||||
|
local fragment_file="$FRAGMENTS_DIR/$fragment_name.md"
|
||||||
|
|
||||||
|
if [[ ! -f "$fragment_file" ]]; then
|
||||||
|
echo -e " ${RED}Fragment not found: $fragment_file${NC}"
|
||||||
|
return 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Determine detection pattern for this fragment
|
||||||
|
local detect_pattern
|
||||||
|
case "$fragment_name" in
|
||||||
|
conditional-loading) detect_pattern="agent-guides\|Conditional.*Loading\|Conditional.*Documentation" ;;
|
||||||
|
commit-format) detect_pattern="<type>.*#issue\|Types:.*feat.*fix" ;;
|
||||||
|
secrets) detect_pattern="NEVER hardcode secrets\|\.env.example.*committed" ;;
|
||||||
|
multi-agent) detect_pattern="Multi-Agent Coordination\|pull --rebase.*before" ;;
|
||||||
|
code-review) detect_pattern="codex-code-review\|codex-security-review\|Code Review" ;;
|
||||||
|
campsite-rule) detect_pattern="Campsite Rule\|Touching it makes it yours\|was already there.*NEVER" ;;
|
||||||
|
*) echo "Unknown fragment: $fragment_name"; return 1 ;;
|
||||||
|
esac
|
||||||
|
|
||||||
|
if [[ ! -f "$claude_md" ]]; then
|
||||||
|
echo -e " ${YELLOW}No CLAUDE.md — skipping fragment injection${NC}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
if has_section "$claude_md" "$detect_pattern"; then
|
||||||
|
echo -e " ${DIM}$fragment_name already present${NC}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
if $DRY_RUN; then
|
||||||
|
echo -e " ${GREEN}Would inject: $fragment_name${NC}"
|
||||||
|
else
|
||||||
|
backup_file "$claude_md"
|
||||||
|
echo "" >> "$claude_md"
|
||||||
|
cat "$fragment_file" >> "$claude_md"
|
||||||
|
echo "" >> "$claude_md"
|
||||||
|
echo -e " ${GREEN}Injected: $fragment_name${NC}"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Create AGENTS.md from template
|
||||||
|
create_agents_md() {
|
||||||
|
local project_dir="$1"
|
||||||
|
local agents_md="$project_dir/AGENTS.md"
|
||||||
|
|
||||||
|
if [[ -f "$agents_md" ]]; then
|
||||||
|
echo -e " ${DIM}AGENTS.md already exists${NC}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
local project_name
|
||||||
|
project_name=$(basename "$project_dir")
|
||||||
|
|
||||||
|
# Detect project type for quality gates
|
||||||
|
local quality_gates="# Add quality gate commands here"
|
||||||
|
if [[ -f "$project_dir/package.json" ]]; then
|
||||||
|
quality_gates="npm run lint && npm run typecheck && npm test"
|
||||||
|
if grep -q '"pnpm"' "$project_dir/package.json" 2>/dev/null || [[ -f "$project_dir/pnpm-lock.yaml" ]]; then
|
||||||
|
quality_gates="pnpm lint && pnpm typecheck && pnpm test"
|
||||||
|
fi
|
||||||
|
elif [[ -f "$project_dir/pyproject.toml" ]]; then
|
||||||
|
quality_gates="uv run ruff check src/ tests/ && uv run mypy src/ && uv run pytest --cov"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if $DRY_RUN; then
|
||||||
|
echo -e " ${GREEN}Would create: AGENTS.md${NC}"
|
||||||
|
else
|
||||||
|
# Use generic AGENTS.md template with substitutions
|
||||||
|
sed -e "s/\${PROJECT_NAME}/$project_name/g" \
|
||||||
|
-e "s/\${QUALITY_GATES}/$quality_gates/g" \
|
||||||
|
-e "s/\${TASK_PREFIX}/${project_name^^}/g" \
|
||||||
|
-e "s|\${SOURCE_DIR}|src|g" \
|
||||||
|
"$TEMPLATES_DIR/AGENTS.md.template" > "$agents_md"
|
||||||
|
echo -e " ${GREEN}Created: AGENTS.md${NC}"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Create sub-AGENTS.md for monorepo directories
|
||||||
|
create_sub_agents() {
|
||||||
|
local project_dir="$1"
|
||||||
|
|
||||||
|
if ! is_monorepo "$project_dir"; then
|
||||||
|
echo -e " ${DIM}Not a monorepo — skipping sub-AGENTS scan${NC}"
|
||||||
|
return 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
local created=0
|
||||||
|
for subdir_type in apps packages services plugins; do
|
||||||
|
if [[ -d "$project_dir/$subdir_type" ]]; then
|
||||||
|
for subdir in "$project_dir/$subdir_type"/*/; do
|
||||||
|
[[ -d "$subdir" ]] || continue
|
||||||
|
# Only if it has its own manifest
|
||||||
|
if [[ -f "$subdir/package.json" ]] || [[ -f "$subdir/pyproject.toml" ]]; then
|
||||||
|
if [[ ! -f "$subdir/AGENTS.md" ]]; then
|
||||||
|
local dir_name
|
||||||
|
dir_name=$(basename "$subdir")
|
||||||
|
if $DRY_RUN; then
|
||||||
|
echo -e " ${GREEN}Would create: $subdir_type/$dir_name/AGENTS.md${NC}"
|
||||||
|
else
|
||||||
|
sed -e "s/\${DIRECTORY_NAME}/$dir_name/g" \
|
||||||
|
-e "s/\${DIRECTORY_PURPOSE}/Part of the $subdir_type layer./g" \
|
||||||
|
"$TEMPLATES_DIR/sub-agents.md.template" > "${subdir}AGENTS.md"
|
||||||
|
echo -e " ${GREEN}Created: $subdir_type/$dir_name/AGENTS.md${NC}"
|
||||||
|
fi
|
||||||
|
((created++)) || true
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
fi
|
||||||
|
done
|
||||||
|
|
||||||
|
if [[ $created -eq 0 ]]; then
|
||||||
|
echo -e " ${DIM}All monorepo sub-AGENTS.md present${NC}"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Upgrade a single project
|
||||||
|
upgrade_project() {
|
||||||
|
local dir="$1"
|
||||||
|
local name
|
||||||
|
name=$(basename "$dir")
|
||||||
|
|
||||||
|
echo -e "\n${BOLD}$name${NC} ${DIM}($dir)${NC}"
|
||||||
|
|
||||||
|
if [[ -n "$SECTION_ONLY" ]]; then
|
||||||
|
inject_fragment "$dir" "$SECTION_ONLY"
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Always try conditional-loading (highest impact)
|
||||||
|
inject_fragment "$dir" "conditional-loading"
|
||||||
|
|
||||||
|
# Try other fragments if CLAUDE.md exists
|
||||||
|
if [[ -f "$dir/CLAUDE.md" ]]; then
|
||||||
|
inject_fragment "$dir" "commit-format"
|
||||||
|
inject_fragment "$dir" "secrets"
|
||||||
|
inject_fragment "$dir" "multi-agent"
|
||||||
|
inject_fragment "$dir" "code-review"
|
||||||
|
inject_fragment "$dir" "campsite-rule"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Create AGENTS.md if missing (always unless --section was used)
|
||||||
|
if $CREATE_AGENTS || [[ -z "$SECTION_ONLY" ]]; then
|
||||||
|
create_agents_md "$dir"
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Monorepo sub-AGENTS.md
|
||||||
|
if $MONOREPO_SCAN || [[ -z "$SECTION_ONLY" ]]; then
|
||||||
|
create_sub_agents "$dir"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Main
|
||||||
|
main() {
|
||||||
|
local projects=()
|
||||||
|
|
||||||
|
if $ALL_PROJECTS; then
|
||||||
|
for dir in "$SRC_DIR"/*/; do
|
||||||
|
[[ -d "$dir" ]] || continue
|
||||||
|
is_excluded "$dir" && continue
|
||||||
|
is_coding_project "$dir" && projects+=("${dir%/}")
|
||||||
|
done
|
||||||
|
else
|
||||||
|
projects=("$TARGET_PATH")
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ ${#projects[@]} -eq 0 ]]; then
|
||||||
|
echo "No coding projects found."
|
||||||
|
exit 0
|
||||||
|
fi
|
||||||
|
|
||||||
|
local mode="LIVE"
|
||||||
|
$DRY_RUN && mode="DRY RUN"
|
||||||
|
|
||||||
|
echo -e "${BOLD}Agent Configuration Upgrade — $(date +%Y-%m-%d) [$mode]${NC}"
|
||||||
|
echo "========================================================"
|
||||||
|
|
||||||
|
for dir in "${projects[@]}"; do
|
||||||
|
upgrade_project "$dir"
|
||||||
|
done
|
||||||
|
|
||||||
|
echo ""
|
||||||
|
echo -e "${BOLD}Done.${NC}"
|
||||||
|
if $DRY_RUN; then
|
||||||
|
echo -e "${DIM}Run without --dry-run to apply changes.${NC}"
|
||||||
|
else
|
||||||
|
echo -e "${DIM}Backups saved as .bak files. Run agent-lint.sh to verify.${NC}"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
main
|
||||||
463 lines: rails/bootstrap/init-project.sh (new executable file)
@@ -0,0 +1,463 @@
#!/bin/bash
# init-project.sh - Bootstrap a project for AI-assisted development
# Usage: init-project.sh [OPTIONS]
#
# Creates CLAUDE.md, AGENTS.md, and standard directories using templates.
# Optionally initializes git labels and milestones.

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TEMPLATE_DIR="$HOME/.mosaic/templates/agent"
GIT_SCRIPT_DIR="$HOME/.mosaic/rails/git"

# Defaults
PROJECT_NAME=""
PROJECT_TYPE=""
REPO_URL=""
TASK_PREFIX=""
PROJECT_DESCRIPTION=""
SKIP_LABELS=false
SKIP_CI=false
CICD_DOCKER=false
DRY_RUN=false
declare -a CICD_SERVICES=()
CICD_BRANCHES="main,develop"

show_help() {
    cat <<'EOF'
Usage: init-project.sh [OPTIONS]

Bootstrap a project for AI-assisted development.

Options:
  -n, --name <name>           Project name (required)
  -t, --type <type>           Project type: nestjs-nextjs, django, generic (default: auto-detect)
  -r, --repo <url>            Git remote URL
  -p, --prefix <prefix>       Orchestrator task prefix (e.g., MS, UC)
  -d, --description <desc>    One-line project description
  --skip-labels               Skip creating git labels and milestones
  --skip-ci                   Skip copying CI pipeline files
  --cicd-docker               Generate Docker build/push/link pipeline steps
  --cicd-service <name:path>  Service for Docker CI (repeatable, requires --cicd-docker)
  --cicd-branches <list>      Branches for Docker builds (default: main,develop)
  --dry-run                   Show what would be created without creating anything
  -h, --help                  Show this help

Examples:
  # Full bootstrap with auto-detection
  init-project.sh --name "My App" --description "A web application"

  # Specific type
  init-project.sh --name "My API" --type django --prefix MA

  # Dry run
  init-project.sh --name "Test" --type generic --dry-run

  # With Docker CI/CD pipeline
  init-project.sh --name "My App" --cicd-docker \
      --cicd-service "my-api:src/api/Dockerfile" \
      --cicd-service "my-web:src/web/Dockerfile"

Project Types:
  nestjs-nextjs   NestJS + Next.js monorepo (pnpm + TurboRepo)
  django          Django project (pytest + ruff + mypy)
  typescript      Standalone TypeScript/Next.js project
  python-fastapi  Python FastAPI project (pytest + ruff + mypy + uv)
  python-library  Python library/SDK (pytest + ruff + mypy + uv)
  generic         Generic project (uses base templates)
  auto            Auto-detect from project files (default)
EOF
    exit 0
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--name)
            PROJECT_NAME="$2"
            shift 2
            ;;
        -t|--type)
            PROJECT_TYPE="$2"
            shift 2
            ;;
        -r|--repo)
            REPO_URL="$2"
            shift 2
            ;;
        -p|--prefix)
            TASK_PREFIX="$2"
            shift 2
            ;;
        -d|--description)
            PROJECT_DESCRIPTION="$2"
            shift 2
            ;;
        --skip-labels)
            SKIP_LABELS=true
            shift
            ;;
        --skip-ci)
            SKIP_CI=true
            shift
            ;;
        --cicd-docker)
            CICD_DOCKER=true
            shift
            ;;
        --cicd-service)
            CICD_SERVICES+=("$2")
            shift 2
            ;;
        --cicd-branches)
            CICD_BRANCHES="$2"
            shift 2
            ;;
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        -h|--help)
            show_help
            ;;
        *)
            echo "Unknown option: $1" >&2
            echo "Run with --help for usage" >&2
            exit 1
            ;;
    esac
done

# Validate required args
if [[ -z "$PROJECT_NAME" ]]; then
    echo "Error: --name is required" >&2
    exit 1
fi

# Auto-detect project type if not specified
detect_project_type() {
    # Monorepo (pnpm + turbo or npm workspaces with NestJS)
    if [[ -f "pnpm-workspace.yaml" ]] || [[ -f "turbo.json" ]]; then
        echo "nestjs-nextjs"
        return
    fi
    if [[ -f "package.json" ]] && grep -q '"workspaces"' package.json 2>/dev/null; then
        echo "nestjs-nextjs"
        return
    fi
    # Django
    if [[ -f "manage.py" ]] && [[ -f "pyproject.toml" ]]; then
        echo "django"
        return
    fi
    # FastAPI
    if [[ -f "pyproject.toml" ]] && grep -q "fastapi" pyproject.toml 2>/dev/null; then
        echo "python-fastapi"
        return
    fi
    # Standalone TypeScript
    if [[ -f "tsconfig.json" ]] && [[ -f "package.json" ]]; then
        echo "typescript"
        return
    fi
    # Python library/tool
    if [[ -f "pyproject.toml" ]]; then
        echo "python-library"
        return
    fi
    echo "generic"
}

if [[ -z "$PROJECT_TYPE" || "$PROJECT_TYPE" == "auto" ]]; then
    PROJECT_TYPE=$(detect_project_type)
    echo "Auto-detected project type: $PROJECT_TYPE"
fi

# Derive defaults
if [[ -z "$REPO_URL" ]]; then
    REPO_URL=$(git remote get-url origin 2>/dev/null || echo "")
fi

if [[ -z "$TASK_PREFIX" ]]; then
    # Generate prefix from project name initials
    TASK_PREFIX=$(echo "$PROJECT_NAME" | sed 's/[^A-Za-z ]//g' | awk '{for(i=1;i<=NF;i++) printf toupper(substr($i,1,1))}')
    if [[ -z "$TASK_PREFIX" ]]; then
        TASK_PREFIX="PRJ"
    fi
fi

if [[ -z "$PROJECT_DESCRIPTION" ]]; then
    PROJECT_DESCRIPTION="$PROJECT_NAME"
fi

PROJECT_DIR=$(basename "$(pwd)")

# Detect quality gates, source dir, and stack info based on type
case "$PROJECT_TYPE" in
    nestjs-nextjs)
        export QUALITY_GATES="pnpm typecheck && pnpm lint && pnpm test"
        export SOURCE_DIR="apps"
        export BUILD_COMMAND="pnpm build"
        export TEST_COMMAND="pnpm test"
        export LINT_COMMAND="pnpm lint"
        export TYPECHECK_COMMAND="pnpm typecheck"
        export FRONTEND_STACK="Next.js + React + TailwindCSS + Shadcn/ui"
        export BACKEND_STACK="NestJS + Prisma ORM"
        export DATABASE_STACK="PostgreSQL"
        export TESTING_STACK="Vitest + Playwright"
        export DEPLOYMENT_STACK="Docker + docker-compose"
        export CONFIG_FILES="turbo.json, pnpm-workspace.yaml, tsconfig.json"
        ;;
    django)
        export QUALITY_GATES="ruff check . && mypy . && pytest tests/"
        export SOURCE_DIR="src"
        export BUILD_COMMAND="pip install -e ."
        export TEST_COMMAND="pytest tests/"
        export LINT_COMMAND="ruff check ."
        export TYPECHECK_COMMAND="mypy ."
        export FRONTEND_STACK="N/A"
        export BACKEND_STACK="Django / Django REST Framework"
        export DATABASE_STACK="PostgreSQL"
        export TESTING_STACK="pytest + pytest-django"
        export DEPLOYMENT_STACK="Docker + docker-compose"
        export CONFIG_FILES="pyproject.toml"
        export PROJECT_SLUG=$(echo "$PROJECT_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '_' | sed 's/[^a-z0-9_]//g')
        ;;
    typescript)
        PKG_MGR="npm"
        [[ -f "pnpm-lock.yaml" ]] && PKG_MGR="pnpm"
        [[ -f "yarn.lock" ]] && PKG_MGR="yarn"
        export QUALITY_GATES="$PKG_MGR run lint && $PKG_MGR run typecheck && $PKG_MGR test"
        export SOURCE_DIR="src"
        export BUILD_COMMAND="$PKG_MGR run build"
        export TEST_COMMAND="$PKG_MGR test"
        export LINT_COMMAND="$PKG_MGR run lint"
        export TYPECHECK_COMMAND="npx tsc --noEmit"
        export FRAMEWORK="TypeScript"
        export PACKAGE_MANAGER="$PKG_MGR"
        export FRONTEND_STACK="N/A"
        export BACKEND_STACK="N/A"
        export DATABASE_STACK="N/A"
        export TESTING_STACK="Vitest or Jest"
        export DEPLOYMENT_STACK="TBD"
        export CONFIG_FILES="tsconfig.json, package.json"
        # Detect Next.js
        if grep -q '"next"' package.json 2>/dev/null; then
            export FRAMEWORK="Next.js"
            export FRONTEND_STACK="Next.js + React"
        fi
        ;;
    python-fastapi)
        export PROJECT_SLUG=$(echo "$PROJECT_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '_' | sed 's/[^a-z0-9_]//g')
        export QUALITY_GATES="uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov"
        export SOURCE_DIR="src"
        export BUILD_COMMAND="uv sync --all-extras"
        export TEST_COMMAND="uv run pytest --cov"
        export LINT_COMMAND="uv run ruff check src/ tests/"
        export TYPECHECK_COMMAND="uv run mypy src/"
        export FRONTEND_STACK="N/A"
        export BACKEND_STACK="FastAPI"
        export DATABASE_STACK="TBD"
        export TESTING_STACK="pytest + httpx"
        export DEPLOYMENT_STACK="Docker"
        export CONFIG_FILES="pyproject.toml"
        ;;
    python-library)
        export PROJECT_SLUG=$(echo "$PROJECT_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '_' | sed 's/[^a-z0-9_]//g')
        export QUALITY_GATES="uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov"
        export SOURCE_DIR="src"
        export BUILD_COMMAND="uv sync --all-extras"
        export TEST_COMMAND="uv run pytest --cov"
        export LINT_COMMAND="uv run ruff check src/ tests/"
        export TYPECHECK_COMMAND="uv run mypy src/"
        export BUILD_SYSTEM="hatchling"
        export FRONTEND_STACK="N/A"
        export BACKEND_STACK="N/A"
        export DATABASE_STACK="N/A"
        export TESTING_STACK="pytest"
        export DEPLOYMENT_STACK="PyPI / Gitea Packages"
        export CONFIG_FILES="pyproject.toml"
        ;;
    *)
        export QUALITY_GATES="echo 'No quality gates configured — update CLAUDE.md'"
        export SOURCE_DIR="src"
        export BUILD_COMMAND="echo 'No build command configured'"
        export TEST_COMMAND="echo 'No test command configured'"
        export LINT_COMMAND="echo 'No lint command configured'"
        export TYPECHECK_COMMAND="echo 'No typecheck command configured'"
        export FRONTEND_STACK="TBD"
        export BACKEND_STACK="TBD"
        export DATABASE_STACK="TBD"
        export TESTING_STACK="TBD"
        export DEPLOYMENT_STACK="TBD"
        export CONFIG_FILES="TBD"
        ;;
esac

# Export common variables
export PROJECT_NAME
export PROJECT_DESCRIPTION
export PROJECT_DIR
export REPO_URL
export TASK_PREFIX

echo "=== Project Bootstrap ==="
echo "  Name:        $PROJECT_NAME"
echo "  Type:        $PROJECT_TYPE"
echo "  Prefix:      $TASK_PREFIX"
echo "  Description: $PROJECT_DESCRIPTION"
echo "  Repo:        ${REPO_URL:-'(not set)'}"
echo "  Directory:   $(pwd)"
echo ""

# Select template directory
STACK_TEMPLATE_DIR="$TEMPLATE_DIR/projects/$PROJECT_TYPE"
if [[ ! -d "$STACK_TEMPLATE_DIR" ]]; then
    STACK_TEMPLATE_DIR="$TEMPLATE_DIR"
    echo "No stack-specific templates found for '$PROJECT_TYPE', using generic templates."
fi

if [[ "$DRY_RUN" == true ]]; then
    echo "[DRY RUN] Would create:"
    echo "  - CLAUDE.md (from $STACK_TEMPLATE_DIR/CLAUDE.md.template)"
    echo "  - AGENTS.md (from $STACK_TEMPLATE_DIR/AGENTS.md.template)"
    echo "  - docs/scratchpads/"
    echo "  - docs/reports/"
    echo "  - docs/templates/"
    if [[ "$SKIP_CI" != true ]]; then
        echo "  - .woodpecker/codex-review.yml"
        echo "  - .woodpecker/schemas/*.json"
    fi
    if [[ "$SKIP_LABELS" != true ]]; then
        echo "  - Standard git labels (epic, feature, bug, task, documentation, security, breaking)"
        echo "  - Milestone: 0.1.0 - MVP"
    fi
    if [[ "$CICD_DOCKER" == true ]]; then
        echo "  - Docker build/push/link steps appended to .woodpecker.yml"
        for svc in "${CICD_SERVICES[@]}"; do
            echo "    - docker-build-${svc%%:*}"
        done
        echo "    - link-packages"
    fi
    exit 0
fi

# Create CLAUDE.md
if [[ -f "CLAUDE.md" ]]; then
    echo "CLAUDE.md already exists — skipping (rename or delete to recreate)"
else
    if [[ -f "$STACK_TEMPLATE_DIR/CLAUDE.md.template" ]]; then
        envsubst < "$STACK_TEMPLATE_DIR/CLAUDE.md.template" > CLAUDE.md
        echo "Created CLAUDE.md"
    else
        echo "Warning: No CLAUDE.md template found at $STACK_TEMPLATE_DIR" >&2
    fi
fi

# Create AGENTS.md
if [[ -f "AGENTS.md" ]]; then
    echo "AGENTS.md already exists — skipping (rename or delete to recreate)"
else
    if [[ -f "$STACK_TEMPLATE_DIR/AGENTS.md.template" ]]; then
        envsubst < "$STACK_TEMPLATE_DIR/AGENTS.md.template" > AGENTS.md
        echo "Created AGENTS.md"
    else
        echo "Warning: No AGENTS.md template found at $STACK_TEMPLATE_DIR" >&2
    fi
fi

# Create directories
mkdir -p docs/scratchpads docs/reports docs/templates
echo "Created docs/scratchpads/, docs/reports/, docs/templates/"

# Set up CI/CD pipeline
if [[ "$SKIP_CI" != true ]]; then
    CODEX_DIR="$HOME/.mosaic/rails/codex"
    if [[ -d "$CODEX_DIR/woodpecker" ]]; then
        mkdir -p .woodpecker/schemas
        cp "$CODEX_DIR/woodpecker/codex-review.yml" .woodpecker/
        cp "$CODEX_DIR/schemas/"*.json .woodpecker/schemas/
        echo "Created .woodpecker/ with Codex review pipeline"
    else
        echo "Codex pipeline templates not found — skipping CI setup"
    fi
fi

# Generate Docker build/push/link pipeline steps
if [[ "$CICD_DOCKER" == true ]]; then
    CICD_SCRIPT="$HOME/.mosaic/rails/cicd/generate-docker-steps.sh"
    if [[ -x "$CICD_SCRIPT" ]]; then
        # Parse org and repo from git remote
        CICD_REGISTRY=""
        CICD_ORG=""
        CICD_REPO_NAME=""
        if [[ -n "$REPO_URL" ]]; then
            # Extract host from https://host/org/repo.git or git@host:org/repo.git
            CICD_REGISTRY=$(echo "$REPO_URL" | sed -E 's|https?://([^/]+)/.*|\1|; s|git@([^:]+):.*|\1|')
            CICD_ORG=$(echo "$REPO_URL" | sed -E 's|https?://[^/]+/([^/]+)/.*|\1|; s|git@[^:]+:([^/]+)/.*|\1|')
            CICD_REPO_NAME=$(echo "$REPO_URL" | sed -E 's|.*/([^/]+?)(\.git)?$|\1|')
        fi

        if [[ -n "$CICD_REGISTRY" && -n "$CICD_ORG" && -n "$CICD_REPO_NAME" && ${#CICD_SERVICES[@]} -gt 0 ]]; then
            # Build service args
            SVC_ARGS=""
            for svc in "${CICD_SERVICES[@]}"; do
                SVC_ARGS="$SVC_ARGS --service $svc"
            done

            echo ""
            echo "Generating Docker CI/CD pipeline steps..."

            # Add kaniko_setup anchor to variables section if .woodpecker.yml exists
            if [[ -f ".woodpecker.yml" ]]; then
                # Append Docker steps to existing pipeline
                "$CICD_SCRIPT" \
                    --registry "$CICD_REGISTRY" \
                    --org "$CICD_ORG" \
                    --repo "$CICD_REPO_NAME" \
                    $SVC_ARGS \
                    --branches "$CICD_BRANCHES" >> .woodpecker.yml
                echo "Appended Docker build/push/link steps to .woodpecker.yml"
            else
                echo "Warning: No .woodpecker.yml found — generate quality gates first, then re-run with --cicd-docker" >&2
            fi
        else
            if [[ ${#CICD_SERVICES[@]} -eq 0 ]]; then
                echo "Warning: --cicd-docker requires at least one --cicd-service" >&2
            else
                echo "Warning: Could not parse registry/org/repo from git remote — specify --repo" >&2
            fi
        fi
    else
        echo "Docker CI/CD generator not found at $CICD_SCRIPT — skipping" >&2
    fi
fi

# Initialize labels and milestones
if [[ "$SKIP_LABELS" != true ]]; then
    LABEL_SCRIPT="$SCRIPT_DIR/init-repo-labels.sh"
    if [[ -x "$LABEL_SCRIPT" ]]; then
        echo ""
        echo "Initializing git labels and milestones..."
        "$LABEL_SCRIPT"
    else
        echo "Label init script not found — skipping label setup"
    fi
fi

echo ""
echo "=== Bootstrap Complete ==="
echo ""
echo "Next steps:"
echo "  1. Review and customize CLAUDE.md"
echo "  2. Review and customize AGENTS.md"
echo "  3. Update quality gate commands if needed"
echo "  4. Commit: git add CLAUDE.md AGENTS.md docs/ .woodpecker/ && git commit -m 'feat: Bootstrap project for AI development'"
if [[ "$SKIP_CI" != true ]]; then
    echo "  5. Add 'codex_api_key' secret to Woodpecker CI"
fi
if [[ "$CICD_DOCKER" == true ]]; then
    echo "  6. Add 'gitea_username' and 'gitea_token' secrets to Woodpecker CI"
    echo "     (token needs package:write scope)"
fi
121 lines: rails/bootstrap/init-repo-labels.sh (new executable file)
@@ -0,0 +1,121 @@
#!/bin/bash
# init-repo-labels.sh - Create standard labels and initial milestone for a repository
# Usage: init-repo-labels.sh [--skip-milestone]
#
# Works with both Gitea (tea) and GitHub (gh).

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
GIT_SCRIPT_DIR="$HOME/.mosaic/rails/git"
source "$GIT_SCRIPT_DIR/detect-platform.sh"

SKIP_MILESTONE=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --skip-milestone)
      SKIP_MILESTONE=true
      shift
      ;;
    -h|--help)
      echo "Usage: $(basename "$0") [--skip-milestone]"
      echo ""
      echo "Create standard labels and initial milestone for the current repository."
      echo ""
      echo "Options:"
      echo "  --skip-milestone  Skip creating the 0.1.0 MVP milestone"
      echo "  -h, --help        Show this help"
      exit 0
      ;;
    *)
      echo "Unknown option: $1" >&2
      exit 1
      ;;
  esac
done

PLATFORM=$(detect_platform)
OWNER=$(get_repo_owner)
REPO=$(get_repo_name)

echo "Platform: $PLATFORM"
echo "Repository: $OWNER/$REPO"
echo ""

# Standard labels with colors
# Format: "name|color|description"
LABELS=(
  "epic|3E4B9E|Large feature spanning multiple issues"
  "feature|0E8A16|New functionality"
  "bug|D73A4A|Defect fix"
  "task|0075CA|General work item"
  "documentation|0075CA|Documentation updates"
  "security|B60205|Security-related"
  "breaking|D93F0B|Breaking change"
)

create_label_github() {
  local name="$1" color="$2" description="$3"

  # Check if label already exists
  if gh label list --repo "$OWNER/$REPO" --json name -q ".[].name" 2>/dev/null | grep -qx "$name"; then
    echo "  [skip] '$name' already exists"
    return 0
  fi

  gh label create "$name" \
    --repo "$OWNER/$REPO" \
    --color "$color" \
    --description "$description" 2>/dev/null && \
    echo "  [created] '$name'" || \
    echo "  [error] Failed to create '$name'"
}

create_label_gitea() {
  local name="$1" color="$2" description="$3"

  # Check if label already exists
  if tea labels list 2>/dev/null | grep -q "$name"; then
    echo "  [skip] '$name' already exists"
    return 0
  fi

  tea labels create --name "$name" --color "#$color" --description "$description" 2>/dev/null && \
    echo "  [created] '$name'" || \
    echo "  [error] Failed to create '$name'"
}

echo "Creating labels..."

for label_def in "${LABELS[@]}"; do
  IFS='|' read -r name color description <<< "$label_def"

  case "$PLATFORM" in
    github)
      create_label_github "$name" "$color" "$description"
      ;;
    gitea)
      create_label_gitea "$name" "$color" "$description"
      ;;
    *)
      echo "Error: Unsupported platform '$PLATFORM'" >&2
      exit 1
      ;;
  esac
done

echo ""

# Create initial milestone
if [[ "$SKIP_MILESTONE" != true ]]; then
  echo "Creating initial milestone..."

  "$GIT_SCRIPT_DIR/milestone-create.sh" -t "0.1.0" -d "MVP - Minimum Viable Product" 2>/dev/null && \
    echo "  [created] Milestone '0.1.0 - MVP'" || \
    echo "  [skip] Milestone may already exist or creation failed"

  echo ""
fi

echo "Label initialization complete."
rails/codex/README.md (Normal file, 265 lines)
@@ -0,0 +1,265 @@
# Codex CLI Review Scripts

AI-powered code review and security review scripts using OpenAI's Codex CLI.

These scripts provide **independent** code analysis separate from Claude sessions, giving you a second AI perspective on code changes to catch issues that might be missed.

## Prerequisites

```bash
# Install Codex CLI
npm i -g @openai/codex

# Verify installation
codex --version

# Authenticate (first run)
codex  # Will prompt for ChatGPT account or API key

# Verify jq is installed (for JSON processing)
jq --version
```

## Scripts

### `codex-code-review.sh`

General code quality review focusing on:

- **Correctness** — logic errors, edge cases, error handling
- **Code Quality** — complexity, duplication, naming, dead code
- **Testing** — coverage, test quality
- **Performance** — N+1 queries, blocking operations, resource cleanup
- **Dependencies** — deprecated packages
- **Documentation** — comments, public API docs

**Output:** Structured JSON with findings categorized as `blocker`, `should-fix`, or `suggestion`.

### `codex-security-review.sh`

Security vulnerability review focusing on:

- **OWASP Top 10** — injection, broken auth, XSS, CSRF, SSRF, etc.
- **Secrets Detection** — hardcoded credentials, API keys, tokens
- **Injection Flaws** — SQL, NoSQL, OS command, LDAP
- **Auth/Authz Gaps** — missing checks, privilege escalation, IDOR
- **Data Exposure** — logging sensitive data, information disclosure
- **Supply Chain** — vulnerable dependencies, typosquatting

**Output:** Structured JSON with findings categorized as `critical`, `high`, `medium`, or `low` with CWE IDs and OWASP categories.

## Usage

### Review Uncommitted Changes

```bash
# Code review
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

### Review a Pull Request

```bash
# Review and post findings as a PR comment
~/.mosaic/rails/codex/codex-code-review.sh -n 42

# Security review and post to PR
~/.mosaic/rails/codex/codex-security-review.sh -n 42
```

### Review Against Base Branch

```bash
# Code review changes vs main
~/.mosaic/rails/codex/codex-code-review.sh -b main

# Security review changes vs develop
~/.mosaic/rails/codex/codex-security-review.sh -b develop
```

### Review a Specific Commit

```bash
~/.mosaic/rails/codex/codex-code-review.sh -c abc123f
~/.mosaic/rails/codex/codex-security-review.sh -c abc123f
```

### Save Results to File

```bash
# Save JSON output
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted -o review-results.json
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted -o security-results.json
```

## Options

Both scripts support the same options:

| Option | Description |
|--------|-------------|
| `-n, --pr <number>` | PR number (auto-enables posting to PR) |
| `-b, --base <branch>` | Base branch to diff against (default: main) |
| `-c, --commit <sha>` | Review a specific commit |
| `-o, --output <path>` | Write JSON results to file |
| `--post-to-pr` | Post findings as PR comment (requires -n) |
| `--uncommitted` | Review uncommitted changes (staged + unstaged + untracked) |
| `-h, --help` | Show help |

## Woodpecker CI Integration

Automated PR reviews in CI pipelines.

### Setup

1. **Copy the pipeline template to your repo:**
   ```bash
   cp ~/.mosaic/rails/codex/woodpecker/codex-review.yml your-repo/.woodpecker/
   ```

2. **Copy the schemas directory:**
   ```bash
   cp -r ~/.mosaic/rails/codex/schemas your-repo/.woodpecker/
   ```

3. **Add Codex API key to Woodpecker:**
   - Go to your repo in Woodpecker CI
   - Settings → Secrets
   - Add secret: `codex_api_key` with your OpenAI API key

4. **Commit and push:**
   ```bash
   cd your-repo
   git add .woodpecker/
   git commit -m "feat: Add Codex AI review pipeline"
   git push
   ```

### Pipeline Behavior

- **Triggers on:** Pull requests
- **Runs:** Code review + Security review in parallel
- **Fails if:**
  - Code review finds blockers
  - Security review finds critical or high severity issues
- **Outputs:** Structured JSON results in CI logs
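Those failure conditions amount to a threshold check on each review's JSON stats. A minimal sketch of such a gate step (the file names `review.json`/`security.json` are assumptions, matching the `-o` flags shown earlier; the shipped pipeline template may implement this differently):

```bash
# gate_review - return 1 when findings exceed the CI thresholds above.
# Assumes the review scripts were run with -o review.json / -o security.json.
gate_review() {
  local blockers critical high
  blockers=$(jq -r '.stats.blockers // 0' review.json)
  critical=$(jq -r '.stats.critical // 0' security.json)
  high=$(jq -r '.stats.high // 0' security.json)

  if [[ "$blockers" -gt 0 ]]; then
    echo "FAIL: code review found $blockers blocker(s)" >&2
    return 1
  fi
  if [[ "$critical" -gt 0 || "$high" -gt 0 ]]; then
    echo "FAIL: security review found $critical critical / $high high finding(s)" >&2
    return 1
  fi
  echo "Review gate passed"
}
```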
## Output Format

### Code Review JSON

```json
{
  "summary": "Overall assessment...",
  "verdict": "approve|request-changes|comment",
  "confidence": 0.85,
  "findings": [
    {
      "severity": "blocker",
      "title": "SQL injection vulnerability",
      "file": "src/api/users.ts",
      "line_start": 42,
      "line_end": 45,
      "description": "User input directly interpolated into SQL query",
      "suggestion": "Use parameterized queries"
    }
  ],
  "stats": {
    "files_reviewed": 5,
    "blockers": 1,
    "should_fix": 3,
    "suggestions": 8
  }
}
```
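To pull just the actionable findings out of that structure, a short jq filter is enough (a sketch against the schema shown above; the helper name is illustrative):

```bash
# list_blockers <results.json> - print blocker findings as "file:line  title".
list_blockers() {
  jq -r '.findings[]
         | select(.severity == "blocker")
         | "\(.file):\(.line_start)  \(.title)"' "$1"
}
```

For example, `list_blockers review-results.json` after a run saved with `-o review-results.json`.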
### Security Review JSON

```json
{
  "summary": "Security assessment...",
  "risk_level": "high",
  "confidence": 0.90,
  "findings": [
    {
      "severity": "high",
      "title": "Hardcoded API key",
      "file": "src/config.ts",
      "line_start": 10,
      "description": "API key hardcoded in source",
      "cwe_id": "CWE-798",
      "owasp_category": "A02:2021-Cryptographic Failures",
      "remediation": "Move to environment variables or secrets manager"
    }
  ],
  "stats": {
    "files_reviewed": 5,
    "critical": 0,
    "high": 1,
    "medium": 2,
    "low": 3
  }
}
```

## Platform Support

Works with both **GitHub** and **Gitea** via the shared `~/.mosaic/rails/git/` infrastructure:

- Auto-detects platform from git remote
- Posts PR comments using `gh` (GitHub) or `tea` (Gitea)
- Unified interface across both platforms

## Architecture

```
codex-code-review.sh
codex-security-review.sh
        ↓
    common.sh
        ↓ sources
../git/detect-platform.sh  (platform detection)
../git/pr-review.sh        (post PR comments)
        ↓ uses
gh (GitHub) or tea (Gitea)
```

## Troubleshooting

### "codex: command not found"

```bash
npm i -g @openai/codex
```

### "jq: command not found"

```bash
# Arch Linux
sudo pacman -S jq

# Debian/Ubuntu
sudo apt install jq
```

### "Error: Not inside a git repository"

Run the script from inside a git repository.

### "No changes found to review"

The specified mode (`--uncommitted`, `--base`, etc.) found no changes to review.

### "Codex produced no output"

Check your Codex API key and authentication:

```bash
codex  # Re-authenticate if needed
```

## Model Configuration

By default, the scripts use the model configured in `~/.codex/config.toml`:

- **Model:** `gpt-5.3-codex` (recommended for code review)
- **Reasoning effort:** `high`

For best results, use `gpt-5.2-codex` or newer for strongest review accuracy.
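An illustrative `~/.codex/config.toml` matching those defaults (the key names are assumptions; check the Codex CLI docs for the exact schema):

```toml
# ~/.codex/config.toml (illustrative sketch, not a verified schema)
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
```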
## See Also

- `~/.mosaic/guides/code-review.md` — Manual code review checklist
- `~/.mosaic/rails/git/` — Git helper scripts (issue/PR management)
- OpenAI Codex CLI docs: https://developers.openai.com/codex/cli/
rails/codex/codex-code-review.sh (Executable file, 238 lines)
@@ -0,0 +1,238 @@
#!/bin/bash
# codex-code-review.sh - Run an AI-powered code quality review using Codex CLI
# Usage: codex-code-review.sh [OPTIONS]
#
# Runs codex exec in read-only sandbox mode with a structured code review prompt.
# Outputs findings as JSON and optionally posts them to a PR.

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Defaults
PR_NUMBER=""
BASE_BRANCH="main"
COMMIT_SHA=""
OUTPUT_FILE=""
POST_TO_PR=false
UNCOMMITTED=false
REVIEW_MODE=""

show_help() {
  cat <<'EOF'
Usage: codex-code-review.sh [OPTIONS]

Run an AI-powered code quality review using OpenAI Codex CLI.

Options:
  -n, --pr <number>     PR number (auto-enables posting findings to PR)
  -b, --base <branch>   Base branch to diff against (default: main)
  -c, --commit <sha>    Review a specific commit
  -o, --output <path>   Write JSON results to file
  --post-to-pr          Post findings as PR comment (requires -n)
  --uncommitted         Review uncommitted changes (staged + unstaged + untracked)
  -h, --help            Show this help

Examples:
  # Review uncommitted changes
  codex-code-review.sh --uncommitted

  # Review a PR and post findings as a comment
  codex-code-review.sh -n 42

  # Review changes against main, save JSON
  codex-code-review.sh -b main -o review.json

  # Review a specific commit
  codex-code-review.sh -c abc123f
EOF
  exit 0
}

# Parse arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    -n|--pr)
      PR_NUMBER="$2"
      POST_TO_PR=true
      REVIEW_MODE="pr"
      shift 2
      ;;
    -b|--base)
      BASE_BRANCH="$2"
      REVIEW_MODE="base"
      shift 2
      ;;
    -c|--commit)
      COMMIT_SHA="$2"
      REVIEW_MODE="commit"
      shift 2
      ;;
    -o|--output)
      OUTPUT_FILE="$2"
      shift 2
      ;;
    --post-to-pr)
      POST_TO_PR=true
      shift
      ;;
    --uncommitted)
      UNCOMMITTED=true
      REVIEW_MODE="uncommitted"
      shift
      ;;
    -h|--help)
      show_help
      ;;
    *)
      echo "Unknown option: $1" >&2
      echo "Run with --help for usage" >&2
      exit 1
      ;;
  esac
done

# Validate
if [[ -z "$REVIEW_MODE" ]]; then
  echo "Error: Specify a review mode: --uncommitted, --base <branch>, --commit <sha>, or --pr <number>" >&2
  exit 1
fi

if [[ "$POST_TO_PR" == true && -z "$PR_NUMBER" ]]; then
  echo "Error: --post-to-pr requires -n <pr_number>" >&2
  exit 1
fi

check_codex
check_jq

# Verify we're in a git repo
if ! git rev-parse --is-inside-work-tree &>/dev/null; then
  echo "Error: Not inside a git repository" >&2
  exit 1
fi

# Get the diff context
echo "Gathering diff context..." >&2
case "$REVIEW_MODE" in
  uncommitted) DIFF_CONTEXT=$(build_diff_context "uncommitted" "") ;;
  base)        DIFF_CONTEXT=$(build_diff_context "base" "$BASE_BRANCH") ;;
  commit)      DIFF_CONTEXT=$(build_diff_context "commit" "$COMMIT_SHA") ;;
  pr)          DIFF_CONTEXT=$(build_diff_context "pr" "$PR_NUMBER") ;;
esac

if [[ -z "$DIFF_CONTEXT" ]]; then
  echo "No changes found to review." >&2
  exit 0
fi

# Build the review prompt
REVIEW_PROMPT=$(cat <<'PROMPT'
You are an expert code reviewer. Review the following code changes thoroughly.

Focus on issues that are ACTIONABLE and IMPORTANT. Do not flag trivial style issues.

## Review Checklist

### Correctness
- Code does what it claims to do
- Edge cases are handled
- Error conditions are managed properly
- No obvious bugs or logic errors

### Code Quality
- Functions are focused and reasonably sized
- No unnecessary complexity
- DRY - no significant duplication
- Clear naming for variables and functions
- No dead code or commented-out code

### Testing
- Tests exist for new functionality
- Tests cover happy path AND error cases
- No flaky tests introduced

### Performance
- No obvious N+1 queries
- No blocking operations in hot paths
- Resource cleanup (connections, file handles)

### Dependencies
- No deprecated packages
- No unnecessary new dependencies

### Documentation
- Complex logic has explanatory comments
- Public APIs are documented

## Severity Guide
- **blocker**: Must fix before merge (bugs, correctness issues, missing error handling)
- **should-fix**: Important but not blocking (code quality, minor issues)
- **suggestion**: Optional improvements (nice-to-haves)

Only report findings you are confident about (confidence > 0.7).
If the code looks good, say so — don't manufacture issues.
PROMPT
)

# Set up temp files for output and diff
TEMP_OUTPUT=$(mktemp /tmp/codex-review-XXXXXX.json)
TEMP_DIFF=$(mktemp /tmp/codex-diff-XXXXXX.txt)
trap 'rm -f "$TEMP_OUTPUT" "$TEMP_DIFF"' EXIT

SCHEMA_FILE="$SCRIPT_DIR/schemas/code-review-schema.json"

# Write diff to temp file
echo "$DIFF_CONTEXT" > "$TEMP_DIFF"

echo "Running Codex code review..." >&2
echo "  Diff size: $(wc -l < "$TEMP_DIFF") lines" >&2

# Build full prompt with diff reference
FULL_PROMPT="${REVIEW_PROMPT}

Here are the code changes to review:

\`\`\`diff
$(cat "$TEMP_DIFF")
\`\`\`"

# Run codex exec with the prompt on stdin ("-") to avoid arg length limits
echo "$FULL_PROMPT" | codex exec \
  --sandbox read-only \
  --output-schema "$SCHEMA_FILE" \
  -o "$TEMP_OUTPUT" \
  - 2>&1 | while IFS= read -r line; do
    echo "  [codex] $line" >&2
  done

# Check output was produced
if [[ ! -s "$TEMP_OUTPUT" ]]; then
  echo "Error: Codex produced no output" >&2
  exit 1
fi

# Validate JSON
if ! jq empty "$TEMP_OUTPUT" 2>/dev/null; then
  echo "Error: Codex output is not valid JSON" >&2
  cat "$TEMP_OUTPUT" >&2
  exit 1
fi

# Save output if requested
if [[ -n "$OUTPUT_FILE" ]]; then
  cp "$TEMP_OUTPUT" "$OUTPUT_FILE"
  echo "Results saved to: $OUTPUT_FILE" >&2
fi

# Post to PR if requested
if [[ "$POST_TO_PR" == true && -n "$PR_NUMBER" ]]; then
  echo "Posting findings to PR #$PR_NUMBER..." >&2
  post_to_pr "$PR_NUMBER" "$TEMP_OUTPUT" "code"
  echo "Posted review to PR #$PR_NUMBER" >&2
fi

# Always print results to stdout
print_results "$TEMP_OUTPUT" "code"
rails/codex/codex-security-review.sh (Executable file, 235 lines)
@@ -0,0 +1,235 @@
#!/bin/bash
# codex-security-review.sh - Run an AI-powered security vulnerability review using Codex CLI
# Usage: codex-security-review.sh [OPTIONS]
#
# Runs codex exec in read-only sandbox mode with a security-focused review prompt.
# Outputs findings as JSON and optionally posts them to a PR.

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Defaults
PR_NUMBER=""
BASE_BRANCH="main"
COMMIT_SHA=""
OUTPUT_FILE=""
POST_TO_PR=false
UNCOMMITTED=false
REVIEW_MODE=""

show_help() {
  cat <<'EOF'
Usage: codex-security-review.sh [OPTIONS]

Run an AI-powered security vulnerability review using OpenAI Codex CLI.

Options:
  -n, --pr <number>     PR number (auto-enables posting findings to PR)
  -b, --base <branch>   Base branch to diff against (default: main)
  -c, --commit <sha>    Review a specific commit
  -o, --output <path>   Write JSON results to file
  --post-to-pr          Post findings as PR comment (requires -n)
  --uncommitted         Review uncommitted changes (staged + unstaged + untracked)
  -h, --help            Show this help

Examples:
  # Security review uncommitted changes
  codex-security-review.sh --uncommitted

  # Security review a PR and post findings
  codex-security-review.sh -n 42

  # Security review against main, save JSON
  codex-security-review.sh -b main -o security.json

  # Security review a specific commit
  codex-security-review.sh -c abc123f
EOF
  exit 0
}

# Parse arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    -n|--pr)
      PR_NUMBER="$2"
      POST_TO_PR=true
      REVIEW_MODE="pr"
      shift 2
      ;;
    -b|--base)
      BASE_BRANCH="$2"
      REVIEW_MODE="base"
      shift 2
      ;;
    -c|--commit)
      COMMIT_SHA="$2"
      REVIEW_MODE="commit"
      shift 2
      ;;
    -o|--output)
      OUTPUT_FILE="$2"
      shift 2
      ;;
    --post-to-pr)
      POST_TO_PR=true
      shift
      ;;
    --uncommitted)
      UNCOMMITTED=true
      REVIEW_MODE="uncommitted"
      shift
      ;;
    -h|--help)
      show_help
      ;;
    *)
      echo "Unknown option: $1" >&2
      echo "Run with --help for usage" >&2
      exit 1
      ;;
  esac
done

# Validate
if [[ -z "$REVIEW_MODE" ]]; then
  echo "Error: Specify a review mode: --uncommitted, --base <branch>, --commit <sha>, or --pr <number>" >&2
  exit 1
fi

if [[ "$POST_TO_PR" == true && -z "$PR_NUMBER" ]]; then
  echo "Error: --post-to-pr requires -n <pr_number>" >&2
  exit 1
fi

check_codex
check_jq

# Verify we're in a git repo
if ! git rev-parse --is-inside-work-tree &>/dev/null; then
  echo "Error: Not inside a git repository" >&2
  exit 1
fi

# Get the diff context
echo "Gathering diff context..." >&2
case "$REVIEW_MODE" in
  uncommitted) DIFF_CONTEXT=$(build_diff_context "uncommitted" "") ;;
  base)        DIFF_CONTEXT=$(build_diff_context "base" "$BASE_BRANCH") ;;
  commit)      DIFF_CONTEXT=$(build_diff_context "commit" "$COMMIT_SHA") ;;
  pr)          DIFF_CONTEXT=$(build_diff_context "pr" "$PR_NUMBER") ;;
esac

if [[ -z "$DIFF_CONTEXT" ]]; then
  echo "No changes found to review." >&2
  exit 0
fi

# Build the security review prompt
REVIEW_PROMPT=$(cat <<'PROMPT'
You are an expert application security engineer performing a security-focused code review.
Your goal is to identify vulnerabilities, security anti-patterns, and data exposure risks.

## Security Review Scope

### OWASP Top 10 (2021)
- A01: Broken Access Control — missing authorization checks, IDOR, privilege escalation
- A02: Cryptographic Failures — weak algorithms, plaintext secrets, missing encryption
- A03: Injection — SQL, NoSQL, OS command, LDAP, XPath injection
- A04: Insecure Design — missing threat modeling, unsafe business logic
- A05: Security Misconfiguration — debug mode, default credentials, unnecessary features
- A06: Vulnerable Components — known CVEs in dependencies
- A07: Authentication Failures — weak auth, missing MFA, session issues
- A08: Data Integrity Failures — deserialization, unsigned updates
- A09: Logging Failures — sensitive data in logs, missing audit trails
- A10: SSRF — unvalidated URLs, internal service access

### Additional Checks
- Hardcoded secrets, API keys, tokens, passwords
- Insecure direct object references
- Missing input validation at trust boundaries
- Cross-Site Scripting (XSS) — reflected, stored, DOM-based
- Cross-Site Request Forgery (CSRF) protection
- Insecure file handling (path traversal, unrestricted upload)
- Race conditions and TOCTOU vulnerabilities
- Information disclosure (stack traces, verbose errors)
- Supply chain risks (typosquatting, dependency confusion)

## Severity Guide
- **critical**: Exploitable vulnerability with immediate impact (RCE, auth bypass, data breach)
- **high**: Significant vulnerability requiring prompt fix (injection, XSS, secrets exposure)
- **medium**: Vulnerability with limited exploitability or impact (missing headers, weak config)
- **low**: Minor security concern or hardening opportunity (informational, defense-in-depth)

## Rules
- Include CWE IDs when applicable
- Include OWASP category when applicable
- Provide specific remediation steps for every finding
- Only report findings you are confident about
- Do NOT flag non-security code quality issues
- If no security issues found, say so clearly
PROMPT
)

# Set up temp files for output and diff
TEMP_OUTPUT=$(mktemp /tmp/codex-security-XXXXXX.json)
TEMP_DIFF=$(mktemp /tmp/codex-diff-XXXXXX.txt)
trap 'rm -f "$TEMP_OUTPUT" "$TEMP_DIFF"' EXIT

SCHEMA_FILE="$SCRIPT_DIR/schemas/security-review-schema.json"

# Write diff to temp file
echo "$DIFF_CONTEXT" > "$TEMP_DIFF"

echo "Running Codex security review..." >&2
echo "  Diff size: $(wc -l < "$TEMP_DIFF") lines" >&2

# Build full prompt with diff reference
FULL_PROMPT="${REVIEW_PROMPT}

Here are the code changes to security review:

\`\`\`diff
$(cat "$TEMP_DIFF")
\`\`\`"

# Run codex exec with the prompt on stdin ("-") to avoid arg length limits
echo "$FULL_PROMPT" | codex exec \
  --sandbox read-only \
  --output-schema "$SCHEMA_FILE" \
  -o "$TEMP_OUTPUT" \
  - 2>&1 | while IFS= read -r line; do
    echo "  [codex] $line" >&2
  done

# Check output was produced
if [[ ! -s "$TEMP_OUTPUT" ]]; then
  echo "Error: Codex produced no output" >&2
  exit 1
fi

# Validate JSON
if ! jq empty "$TEMP_OUTPUT" 2>/dev/null; then
  echo "Error: Codex output is not valid JSON" >&2
  cat "$TEMP_OUTPUT" >&2
  exit 1
fi

# Save output if requested
if [[ -n "$OUTPUT_FILE" ]]; then
  cp "$TEMP_OUTPUT" "$OUTPUT_FILE"
  echo "Results saved to: $OUTPUT_FILE" >&2
fi

# Post to PR if requested
if [[ "$POST_TO_PR" == true && -n "$PR_NUMBER" ]]; then
  echo "Posting findings to PR #$PR_NUMBER..." >&2
  post_to_pr "$PR_NUMBER" "$TEMP_OUTPUT" "security"
  echo "Posted security review to PR #$PR_NUMBER" >&2
fi

# Always print results to stdout
print_results "$TEMP_OUTPUT" "security"
rails/codex/common.sh (Executable file, 191 lines)
@@ -0,0 +1,191 @@
#!/bin/bash
# common.sh - Shared utilities for Codex review scripts
# Source this file from review scripts: source "$(dirname "${BASH_SOURCE[0]}")/common.sh"

set -e

CODEX_SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
GIT_SCRIPT_DIR="$CODEX_SCRIPT_DIR/../git"

# Source platform detection
source "$GIT_SCRIPT_DIR/detect-platform.sh"

# Check codex is installed
check_codex() {
  if ! command -v codex &>/dev/null; then
    echo "Error: codex CLI not found. Install with: npm i -g @openai/codex" >&2
    exit 1
  fi
}

# Check jq is installed (needed for JSON processing)
check_jq() {
  if ! command -v jq &>/dev/null; then
    echo "Error: jq not found. Install with your package manager." >&2
    exit 1
  fi
}

# Build the diff text for the given review mode
# Arguments: $1=mode (uncommitted|base|commit|pr), $2=value (branch, sha, or PR number)
build_diff_context() {
  local mode="$1"
  local value="$2"
  local diff_text=""

  case "$mode" in
    uncommitted)
      diff_text=$(git diff HEAD 2>/dev/null; git diff --cached 2>/dev/null; git ls-files --others --exclude-standard 2>/dev/null | while read -r f; do echo "=== NEW FILE: $f ==="; cat "$f" 2>/dev/null; done)
      ;;
    base)
      diff_text=$(git diff "${value}...HEAD" 2>/dev/null)
      ;;
    commit)
      diff_text=$(git show "$value" 2>/dev/null)
      ;;
    pr)
      # For PRs, we need to fetch the PR diff
      detect_platform
      if [[ "$PLATFORM" == "github" ]]; then
        diff_text=$(gh pr diff "$value" 2>/dev/null)
      elif [[ "$PLATFORM" == "gitea" ]]; then
        # tea doesn't have a direct pr diff command, use git
        local pr_base
        pr_base=$(tea pr list --fields index,base --output simple 2>/dev/null | grep "^${value}" | awk '{print $2}')
        if [[ -n "$pr_base" ]]; then
          diff_text=$(git diff "${pr_base}...HEAD" 2>/dev/null)
        else
          # Fallback: fetch PR info via API
          local repo_info
          repo_info=$(get_repo_info)
          local remote_url
          remote_url=$(git remote get-url origin 2>/dev/null)
          local host
          host=$(echo "$remote_url" | sed -E 's|.*://([^/]+).*|\1|; s|.*@([^:]+).*|\1|')
          diff_text=$(curl -s "https://${host}/api/v1/repos/${repo_info}/pulls/${value}" \
            -H "Authorization: token $(tea login list --output simple 2>/dev/null | head -1 | awk '{print $2}')" \
            2>/dev/null | jq -r '.diff_url // empty')
          if [[ -n "$diff_text" && "$diff_text" != "null" ]]; then
            diff_text=$(curl -s "$diff_text" 2>/dev/null)
          else
            diff_text=$(git diff "main...HEAD" 2>/dev/null)
          fi
        fi
      fi
      ;;
  esac

  echo "$diff_text"
}

# Format JSON findings as markdown for PR comments
# Arguments: $1=json_file, $2=review_type (code|security)
format_findings_as_markdown() {
  local json_file="$1"
  local review_type="$2"

  if [[ ! -f "$json_file" ]]; then
    echo "Error: JSON file not found: $json_file" >&2
    return 1
  fi

  local summary verdict confidence
  summary=$(jq -r '.summary' "$json_file")
  confidence=$(jq -r '.confidence' "$json_file")

  if [[ "$review_type" == "code" ]]; then
    verdict=$(jq -r '.verdict' "$json_file")
    local blockers should_fix suggestions files_reviewed
    blockers=$(jq -r '.stats.blockers' "$json_file")
    should_fix=$(jq -r '.stats.should_fix' "$json_file")
    suggestions=$(jq -r '.stats.suggestions' "$json_file")
    files_reviewed=$(jq -r '.stats.files_reviewed' "$json_file")

    cat <<EOF
## Codex Code Review

**Verdict:** ${verdict} | **Confidence:** ${confidence} | **Files reviewed:** ${files_reviewed}
**Findings:** ${blockers} blockers, ${should_fix} should-fix, ${suggestions} suggestions

### Summary
${summary}

EOF
  else
    local risk_level critical high medium low files_reviewed
    risk_level=$(jq -r '.risk_level' "$json_file")
|
||||||
|
critical=$(jq -r '.stats.critical' "$json_file")
|
||||||
|
high=$(jq -r '.stats.high' "$json_file")
|
||||||
|
medium=$(jq -r '.stats.medium' "$json_file")
|
||||||
|
low=$(jq -r '.stats.low' "$json_file")
|
||||||
|
files_reviewed=$(jq -r '.stats.files_reviewed' "$json_file")
|
||||||
|
|
||||||
|
cat <<EOF
|
||||||
|
## Codex Security Review
|
||||||
|
|
||||||
|
**Risk Level:** ${risk_level} | **Confidence:** ${confidence} | **Files reviewed:** ${files_reviewed}
|
||||||
|
**Findings:** ${critical} critical, ${high} high, ${medium} medium, ${low} low
|
||||||
|
|
||||||
|
### Summary
|
||||||
|
${summary}
|
||||||
|
|
||||||
|
EOF
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Output findings
|
||||||
|
local finding_count
|
||||||
|
finding_count=$(jq '.findings | length' "$json_file")
|
||||||
|
|
||||||
|
if [[ "$finding_count" -gt 0 ]]; then
|
||||||
|
echo "### Findings"
|
||||||
|
echo ""
|
||||||
|
|
||||||
|
jq -r '.findings[] | "#### [\(.severity | ascii_upcase)] \(.title)\n- **File:** `\(.file)`\(if .line_start then " (L\(.line_start)\(if .line_end and .line_end != .line_start then "-L\(.line_end)" else "" end))" else "" end)\n- \(.description)\(if .suggestion then "\n- **Suggestion:** \(.suggestion)" else "" end)\(if .cwe_id then "\n- **CWE:** \(.cwe_id)" else "" end)\(if .owasp_category then "\n- **OWASP:** \(.owasp_category)" else "" end)\(if .remediation then "\n- **Remediation:** \(.remediation)" else "" end)\n"' "$json_file"
|
||||||
|
else
|
||||||
|
echo "*No issues found.*"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "---"
|
||||||
|
echo "*Reviewed by Codex ($(codex --version 2>/dev/null || echo "unknown"))*"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Post review findings to a PR
|
||||||
|
# Arguments: $1=pr_number, $2=json_file, $3=review_type (code|security)
|
||||||
|
post_to_pr() {
|
||||||
|
local pr_number="$1"
|
||||||
|
local json_file="$2"
|
||||||
|
local review_type="$3"
|
||||||
|
|
||||||
|
local markdown
|
||||||
|
markdown=$(format_findings_as_markdown "$json_file" "$review_type")
|
||||||
|
|
||||||
|
detect_platform
|
||||||
|
|
||||||
|
# Determine review action based on findings
|
||||||
|
local action="comment"
|
||||||
|
if [[ "$review_type" == "code" ]]; then
|
||||||
|
local verdict
|
||||||
|
verdict=$(jq -r '.verdict' "$json_file")
|
||||||
|
action="$verdict"
|
||||||
|
else
|
||||||
|
local risk_level
|
||||||
|
risk_level=$(jq -r '.risk_level' "$json_file")
|
||||||
|
case "$risk_level" in
|
||||||
|
critical|high) action="request-changes" ;;
|
||||||
|
medium) action="comment" ;;
|
||||||
|
low|none) action="comment" ;;
|
||||||
|
esac
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Post the review
|
||||||
|
"$GIT_SCRIPT_DIR/pr-review.sh" -n "$pr_number" -a "$action" -c "$markdown"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Print review results to stdout
|
||||||
|
# Arguments: $1=json_file, $2=review_type (code|security)
|
||||||
|
print_results() {
|
||||||
|
local json_file="$1"
|
||||||
|
local review_type="$2"
|
||||||
|
|
||||||
|
format_findings_as_markdown "$json_file" "$review_type"
|
||||||
|
}
|
||||||
84  rails/codex/schemas/code-review-schema.json  Normal file
@@ -0,0 +1,84 @@
{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "summary": {
      "type": "string",
      "description": "Brief overall assessment of the code changes"
    },
    "verdict": {
      "type": "string",
      "enum": ["approve", "request-changes", "comment"],
      "description": "Overall review verdict"
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Confidence score for the review (0-1)"
    },
    "findings": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "severity": {
            "type": "string",
            "enum": ["blocker", "should-fix", "suggestion"],
            "description": "Finding severity: blocker (must fix), should-fix (important), suggestion (optional)"
          },
          "title": {
            "type": "string",
            "description": "Short title describing the issue"
          },
          "file": {
            "type": "string",
            "description": "File path where the issue was found"
          },
          "line_start": {
            "type": "integer",
            "description": "Starting line number"
          },
          "line_end": {
            "type": "integer",
            "description": "Ending line number"
          },
          "description": {
            "type": "string",
            "description": "Detailed explanation of the issue"
          },
          "suggestion": {
            "type": "string",
            "description": "Suggested fix or improvement"
          }
        },
        "required": ["severity", "title", "file", "line_start", "line_end", "description", "suggestion"]
      }
    },
    "stats": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "files_reviewed": {
          "type": "integer",
          "description": "Number of files reviewed"
        },
        "blockers": {
          "type": "integer",
          "description": "Count of blocker findings"
        },
        "should_fix": {
          "type": "integer",
          "description": "Count of should-fix findings"
        },
        "suggestions": {
          "type": "integer",
          "description": "Count of suggestion findings"
        }
      },
      "required": ["files_reviewed", "blockers", "should_fix", "suggestions"]
    }
  },
  "required": ["summary", "verdict", "confidence", "findings", "stats"]
}
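As a quick sanity check, a document matching this schema can be spot-checked with jq. The sample below is invented for illustration (file paths, counts, and finding text are not real Codex output):

```shell
# Hypothetical sample matching the code-review schema above (all values invented).
cat > /tmp/sample-code-review.json <<'EOF'
{
  "summary": "Small refactor; one error-handling gap.",
  "verdict": "request-changes",
  "confidence": 0.8,
  "findings": [
    {
      "severity": "blocker",
      "title": "Unchecked return value",
      "file": "src/io.c",
      "line_start": 42,
      "line_end": 44,
      "description": "The read result is ignored, so short reads go unnoticed.",
      "suggestion": "Check the return value and handle partial reads."
    }
  ],
  "stats": { "files_reviewed": 3, "blockers": 1, "should_fix": 0, "suggestions": 0 }
}
EOF
# Verify the required top-level keys are present (jq -e sets the exit status)
jq -e 'has("summary") and has("verdict") and has("confidence") and has("findings") and has("stats")' /tmp/sample-code-review.json
```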
96  rails/codex/schemas/security-review-schema.json  Normal file
@@ -0,0 +1,96 @@
{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "summary": {
      "type": "string",
      "description": "Brief overall security assessment of the code changes"
    },
    "risk_level": {
      "type": "string",
      "enum": ["critical", "high", "medium", "low", "none"],
      "description": "Overall security risk level"
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Confidence score for the review (0-1)"
    },
    "findings": {
      "type": "array",
      "items": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "severity": {
            "type": "string",
            "enum": ["critical", "high", "medium", "low"],
            "description": "Vulnerability severity level"
          },
          "title": {
            "type": "string",
            "description": "Short title describing the vulnerability"
          },
          "file": {
            "type": "string",
            "description": "File path where the vulnerability was found"
          },
          "line_start": {
            "type": "integer",
            "description": "Starting line number"
          },
          "line_end": {
            "type": "integer",
            "description": "Ending line number"
          },
          "description": {
            "type": "string",
            "description": "Detailed explanation of the vulnerability"
          },
          "cwe_id": {
            "type": "string",
            "description": "CWE identifier if applicable (e.g., CWE-79)"
          },
          "owasp_category": {
            "type": "string",
            "description": "OWASP Top 10 category if applicable (e.g., A03:2021-Injection)"
          },
          "remediation": {
            "type": "string",
            "description": "Specific remediation steps to fix the vulnerability"
          }
        },
        "required": ["severity", "title", "file", "line_start", "line_end", "description", "cwe_id", "owasp_category", "remediation"]
      }
    },
    "stats": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "files_reviewed": {
          "type": "integer",
          "description": "Number of files reviewed"
        },
        "critical": {
          "type": "integer",
          "description": "Count of critical findings"
        },
        "high": {
          "type": "integer",
          "description": "Count of high findings"
        },
        "medium": {
          "type": "integer",
          "description": "Count of medium findings"
        },
        "low": {
          "type": "integer",
          "description": "Count of low findings"
        }
      },
      "required": ["files_reviewed", "critical", "high", "medium", "low"]
    }
  },
  "required": ["summary", "risk_level", "confidence", "findings", "stats"]
}
90  rails/codex/woodpecker/codex-review.yml  Normal file
@@ -0,0 +1,90 @@
# Codex AI Review Pipeline for Woodpecker CI
# Drop this into your repo's .woodpecker/ directory to enable automated
# code and security reviews on every pull request.
#
# Required secrets:
#   - codex_api_key: OpenAI API key or Codex-compatible key
#
# Optional secrets:
#   - gitea_token: Gitea API token for posting PR comments (if not using tea CLI auth)

when:
  event: pull_request

variables:
  - &node_image "node:22-slim"
  - &install_codex "npm i -g @openai/codex"

steps:
  # --- Code Quality Review ---
  code-review:
    image: *node_image
    environment:
      CODEX_API_KEY:
        from_secret: codex_api_key
    commands:
      - *install_codex
      - apt-get update -qq && apt-get install -y -qq jq git > /dev/null 2>&1

      # Generate the diff
      - git fetch origin ${CI_COMMIT_TARGET_BRANCH:-main}
      - DIFF=$(git diff origin/${CI_COMMIT_TARGET_BRANCH:-main}...HEAD)

      # Run code review with structured output
      - |
        codex exec \
          --sandbox read-only \
          --output-schema .woodpecker/schemas/code-review-schema.json \
          -o /tmp/code-review.json \
          "You are an expert code reviewer. Review the following code changes for correctness, code quality, testing, performance, and documentation issues. Only flag actionable, important issues. Categorize as blocker/should-fix/suggestion. If code looks good, say so.

        Changes:
        $DIFF"

      # Output summary
      - echo "=== Code Review Results ==="
      - jq '.' /tmp/code-review.json
      - |
        BLOCKERS=$(jq '.stats.blockers // 0' /tmp/code-review.json)
        if [ "$BLOCKERS" -gt 0 ]; then
          echo "FAIL: $BLOCKERS blocker(s) found"
          exit 1
        fi
        echo "PASS: No blockers found"

  # --- Security Review ---
  security-review:
    image: *node_image
    environment:
      CODEX_API_KEY:
        from_secret: codex_api_key
    commands:
      - *install_codex
      - apt-get update -qq && apt-get install -y -qq jq git > /dev/null 2>&1

      # Generate the diff
      - git fetch origin ${CI_COMMIT_TARGET_BRANCH:-main}
      - DIFF=$(git diff origin/${CI_COMMIT_TARGET_BRANCH:-main}...HEAD)

      # Run security review with structured output
      - |
        codex exec \
          --sandbox read-only \
          --output-schema .woodpecker/schemas/security-review-schema.json \
          -o /tmp/security-review.json \
          "You are an expert application security engineer. Review the following code changes for security vulnerabilities including OWASP Top 10, hardcoded secrets, injection flaws, auth/authz gaps, XSS, CSRF, SSRF, path traversal, and supply chain risks. Include CWE IDs and remediation steps. Only flag real security issues, not code quality.

        Changes:
        $DIFF"

      # Output summary
      - echo "=== Security Review Results ==="
      - jq '.' /tmp/security-review.json
      - |
        CRITICAL=$(jq '.stats.critical // 0' /tmp/security-review.json)
        HIGH=$(jq '.stats.high // 0' /tmp/security-review.json)
        if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
          echo "FAIL: $CRITICAL critical, $HIGH high severity finding(s)"
          exit 1
        fi
        echo "PASS: No critical or high severity findings"
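The gating step at the end of each pipeline job can be exercised locally against a stub review file. The stub below is invented; in CI the real `/tmp/code-review.json` is produced by `codex exec`:

```shell
# Stand-alone sketch of the pipeline's blocker gate (stub input, not real review output).
printf '{"stats":{"blockers":0,"should_fix":2,"suggestions":5}}' > /tmp/demo-review.json

# `// 0` defaults the count when .stats.blockers is missing or null
BLOCKERS=$(jq '.stats.blockers // 0' /tmp/demo-review.json)
if [ "$BLOCKERS" -gt 0 ]; then
  echo "FAIL: $BLOCKERS blocker(s) found"
  exit 1
fi
echo "PASS: No blockers found"
```

With blockers present the gate exits nonzero, which is what fails the Woodpecker step.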
83  rails/git/detect-platform.ps1  Normal file
@@ -0,0 +1,83 @@
# detect-platform.ps1 - Detect git platform (Gitea or GitHub) for current repo
# Usage: . .\detect-platform.ps1; Get-GitPlatform
#    or: .\detect-platform.ps1 (prints platform name)

function Get-GitPlatform {
    [CmdletBinding()]
    param()

    $remoteUrl = git remote get-url origin 2>$null

    if ([string]::IsNullOrEmpty($remoteUrl)) {
        Write-Error "Not a git repository or no origin remote"
        return $null
    }

    # Check for GitHub
    if ($remoteUrl -match "github\.com") {
        return "github"
    }

    # Check for common Gitea indicators
    # Gitea URLs typically don't contain github.com, gitlab.com, bitbucket.org
    if ($remoteUrl -notmatch "gitlab\.com" -and $remoteUrl -notmatch "bitbucket\.org") {
        # Assume Gitea for self-hosted repos
        return "gitea"
    }

    return "unknown"
}

function Get-GitRepoInfo {
    [CmdletBinding()]
    param()

    $remoteUrl = git remote get-url origin 2>$null

    if ([string]::IsNullOrEmpty($remoteUrl)) {
        Write-Error "Not a git repository or no origin remote"
        return $null
    }

    # Extract owner/repo from URL
    # Handles: git@host:owner/repo.git, https://host/owner/repo.git, https://host/owner/repo
    $repoPath = $remoteUrl
    if ($remoteUrl -match "^git@") {
        $repoPath = ($remoteUrl -split ":")[1]
    } else {
        # Remove protocol and host
        $repoPath = $remoteUrl -replace "^https?://[^/]+/", ""
    }

    # Remove .git suffix if present
    $repoPath = $repoPath -replace "\.git$", ""

    return $repoPath
}

function Get-GitRepoOwner {
    [CmdletBinding()]
    param()

    $repoInfo = Get-GitRepoInfo
    if ($repoInfo) {
        return ($repoInfo -split "/")[0]
    }
    return $null
}

function Get-GitRepoName {
    [CmdletBinding()]
    param()

    $repoInfo = Get-GitRepoInfo
    if ($repoInfo) {
        return ($repoInfo -split "/")[-1]
    }
    return $null
}

# If script is run directly (not dot-sourced), output the platform
if ($MyInvocation.InvocationName -ne ".") {
    Get-GitPlatform
}
80  rails/git/detect-platform.sh  Executable file
@@ -0,0 +1,80 @@
#!/bin/bash
# detect-platform.sh - Detect git platform (Gitea or GitHub) for current repo
# Usage: source detect-platform.sh && detect_platform
#    or: ./detect-platform.sh (prints platform name)

detect_platform() {
    local remote_url
    remote_url=$(git remote get-url origin 2>/dev/null)

    if [[ -z "$remote_url" ]]; then
        echo "error: not a git repository or no origin remote" >&2
        return 1
    fi

    # Check for GitHub
    if [[ "$remote_url" == *"github.com"* ]]; then
        PLATFORM="github"
        export PLATFORM
        echo "github"
        return 0
    fi

    # Check for common Gitea indicators
    # Gitea URLs typically don't contain github.com, gitlab.com, bitbucket.org
    if [[ "$remote_url" != *"gitlab.com"* ]] && \
       [[ "$remote_url" != *"bitbucket.org"* ]]; then
        # Assume Gitea for self-hosted repos
        PLATFORM="gitea"
        export PLATFORM
        echo "gitea"
        return 0
    fi

    PLATFORM="unknown"
    export PLATFORM
    echo "unknown"
    return 1
}

get_repo_info() {
    local remote_url
    remote_url=$(git remote get-url origin 2>/dev/null)

    if [[ -z "$remote_url" ]]; then
        echo "error: not a git repository or no origin remote" >&2
        return 1
    fi

    # Extract owner/repo from URL
    # Handles: git@host:owner/repo.git, https://host/owner/repo.git, https://host/owner/repo
    local repo_path
    if [[ "$remote_url" == git@* ]]; then
        repo_path="${remote_url#*:}"
    else
        repo_path="${remote_url#*://}"
        repo_path="${repo_path#*/}"
    fi

    # Remove .git suffix if present
    repo_path="${repo_path%.git}"

    echo "$repo_path"
}

get_repo_owner() {
    local repo_info
    repo_info=$(get_repo_info)
    echo "${repo_info%%/*}"
}

get_repo_name() {
    local repo_info
    repo_info=$(get_repo_info)
    echo "${repo_info##*/}"
}

# If script is run directly (not sourced), output the platform
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    detect_platform
fi
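The owner/repo extraction in `get_repo_info` is pure parameter expansion, so it can be exercised without a git checkout. The sketch below copies that logic into a standalone function (`parse_repo_path` and the example hosts are hypothetical; `get_repo_info` itself reads the origin remote):

```shell
# Stand-alone copy of get_repo_info's URL parsing for illustration.
parse_repo_path() {
    local remote_url="$1" repo_path
    if [[ "$remote_url" == git@* ]]; then
        # SSH form: strip everything up to the first colon
        repo_path="${remote_url#*:}"
    else
        # HTTPS form: strip the protocol, then the host
        repo_path="${remote_url#*://}"
        repo_path="${repo_path#*/}"
    fi
    # Drop a trailing .git if present
    echo "${repo_path%.git}"
}

parse_repo_path "git@git.example.com:owner/repo.git"   # owner/repo
parse_repo_path "https://git.example.com/owner/repo"   # owner/repo
```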
111  rails/git/issue-assign.ps1  Normal file
@@ -0,0 +1,111 @@
# issue-assign.ps1 - Assign issues on Gitea or GitHub
# Usage: .\issue-assign.ps1 -Issue ISSUE_NUMBER [-Assignee assignee] [-Labels labels] [-Milestone milestone]

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [Alias("i")]
    [int]$Issue,

    [Alias("a")]
    [string]$Assignee,

    [Alias("l")]
    [string]$Labels,

    [Alias("m")]
    [string]$Milestone,

    [Alias("r")]
    [switch]$RemoveAssignee,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: issue-assign.ps1 [OPTIONS]

Assign or update an issue on the current repository (Gitea or GitHub).

Options:
  -Issue, -i NUMBER      Issue number (required)
  -Assignee, -a USER     Assign to user (use @me for self)
  -Labels, -l LABELS     Add comma-separated labels
  -Milestone, -m NAME    Set milestone
  -RemoveAssignee, -r    Remove current assignee
  -Help, -h              Show this help message

Examples:
  .\issue-assign.ps1 -i 42 -a "username"
  .\issue-assign.ps1 -i 42 -l "in-progress" -m "0.2.0"
  .\issue-assign.ps1 -i 42 -a @me
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        if ($Assignee) {
            gh issue edit $Issue --add-assignee $Assignee
        }
        if ($RemoveAssignee) {
            $current = gh issue view $Issue --json assignees -q '.assignees[].login' 2>$null
            if ($current) {
                $assignees = ($current -split "`n") -join ","
                gh issue edit $Issue --remove-assignee $assignees
            }
        }
        if ($Labels) {
            gh issue edit $Issue --add-label $Labels
        }
        if ($Milestone) {
            gh issue edit $Issue --milestone $Milestone
        }
        Write-Host "Issue #$Issue updated successfully"
    }
    "gitea" {
        $needsEdit = $false
        $cmd = @("tea", "issue", "edit", $Issue)

        if ($Assignee) {
            $cmd += @("--assignees", $Assignee)
            $needsEdit = $true
        }
        if ($Labels) {
            $cmd += @("--labels", $Labels)
            $needsEdit = $true
        }
        if ($Milestone) {
            $milestoneList = tea milestones list 2>$null
            $milestoneId = ($milestoneList | Select-String "^\s*(\d+).*$Milestone" | ForEach-Object { $_.Matches.Groups[1].Value } | Select-Object -First 1)
            if ($milestoneId) {
                $cmd += @("--milestone", $milestoneId)
                $needsEdit = $true
            } else {
                Write-Warning "Could not find milestone '$Milestone'"
            }
        }

        if ($needsEdit) {
            & $cmd[0] $cmd[1..($cmd.Length-1)]
            Write-Host "Issue #$Issue updated successfully"
        } else {
            Write-Host "No changes specified"
        }
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
135  rails/git/issue-assign.sh  Executable file
@@ -0,0 +1,135 @@
#!/bin/bash
# issue-assign.sh - Assign issues on Gitea or GitHub
# Usage: issue-assign.sh -i ISSUE_NUMBER [-a assignee] [-l labels] [-m milestone]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
ISSUE=""
ASSIGNEE=""
LABELS=""
MILESTONE=""
REMOVE_ASSIGNEE=false

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Assign or update an issue on the current repository (Gitea or GitHub).

Options:
  -i, --issue NUMBER       Issue number (required)
  -a, --assignee USER      Assign to user (use @me for self)
  -l, --labels LABELS      Add comma-separated labels
  -m, --milestone NAME     Set milestone
  -r, --remove-assignee    Remove current assignee
  -h, --help               Show this help message

Examples:
  $(basename "$0") -i 42 -a "username"
  $(basename "$0") -i 42 -l "in-progress" -m "0.2.0"
  $(basename "$0") -i 42 -a @me
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE="$2"
            shift 2
            ;;
        -a|--assignee)
            ASSIGNEE="$2"
            shift 2
            ;;
        -l|--labels)
            LABELS="$2"
            shift 2
            ;;
        -m|--milestone)
            MILESTONE="$2"
            shift 2
            ;;
        -r|--remove-assignee)
            REMOVE_ASSIGNEE=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

if [[ -z "$ISSUE" ]]; then
    echo "Error: Issue number is required (-i)" >&2
    usage
fi

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        if [[ -n "$ASSIGNEE" ]]; then
            gh issue edit "$ISSUE" --add-assignee "$ASSIGNEE"
        fi
        if [[ "$REMOVE_ASSIGNEE" == true ]]; then
            # Get current assignees and remove them
            CURRENT=$(gh issue view "$ISSUE" --json assignees -q '.assignees[].login' 2>/dev/null | tr '\n' ',')
            if [[ -n "$CURRENT" ]]; then
                gh issue edit "$ISSUE" --remove-assignee "${CURRENT%,}"
            fi
        fi
        if [[ -n "$LABELS" ]]; then
            gh issue edit "$ISSUE" --add-label "$LABELS"
        fi
        if [[ -n "$MILESTONE" ]]; then
            gh issue edit "$ISSUE" --milestone "$MILESTONE"
        fi
        echo "Issue #$ISSUE updated successfully"
        ;;
    gitea)
        # tea issue edit syntax
        CMD="tea issue edit $ISSUE"
        NEEDS_EDIT=false

        if [[ -n "$ASSIGNEE" ]]; then
            # tea uses --assignees flag
            CMD="$CMD --assignees \"$ASSIGNEE\""
            NEEDS_EDIT=true
        fi
        if [[ -n "$LABELS" ]]; then
            # tea uses --labels flag (replaces existing)
            CMD="$CMD --labels \"$LABELS\""
            NEEDS_EDIT=true
        fi
        if [[ -n "$MILESTONE" ]]; then
            MILESTONE_ID=$(tea milestones list 2>/dev/null | grep -E "^\s*[0-9]+" | grep "$MILESTONE" | awk '{print $1}' | head -1)
            if [[ -n "$MILESTONE_ID" ]]; then
                CMD="$CMD --milestone $MILESTONE_ID"
                NEEDS_EDIT=true
            else
                echo "Warning: Could not find milestone '$MILESTONE'" >&2
            fi
        fi

        if [[ "$NEEDS_EDIT" == true ]]; then
            eval "$CMD"
            echo "Issue #$ISSUE updated successfully"
        else
            echo "No changes specified"
        fi
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
64  rails/git/issue-close.sh  Executable file
@@ -0,0 +1,64 @@
#!/bin/bash
# issue-close.sh - Close an issue on GitHub or Gitea
# Usage: issue-close.sh -i <issue_number> [-c <comment>]

set -e

# Source platform detection
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
ISSUE_NUMBER=""
COMMENT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE_NUMBER="$2"
            shift 2
            ;;
        -c|--comment)
            COMMENT="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: issue-close.sh -i <issue_number> [-c <comment>]"
            echo ""
            echo "Options:"
            echo "  -i, --issue    Issue number (required)"
            echo "  -c, --comment  Comment to add before closing (optional)"
            echo "  -h, --help     Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$ISSUE_NUMBER" ]]; then
    echo "Error: Issue number is required (-i)"
    exit 1
fi

# Detect platform and close issue
detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    if [[ -n "$COMMENT" ]]; then
        gh issue comment "$ISSUE_NUMBER" --body "$COMMENT"
    fi
    gh issue close "$ISSUE_NUMBER"
    echo "Closed GitHub issue #$ISSUE_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    if [[ -n "$COMMENT" ]]; then
        tea issue comment "$ISSUE_NUMBER" "$COMMENT"
    fi
    tea issue close "$ISSUE_NUMBER"
    echo "Closed Gitea issue #$ISSUE_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
61  rails/git/issue-comment.sh  Executable file
@@ -0,0 +1,61 @@
#!/bin/bash
# issue-comment.sh - Add a comment to an issue on GitHub or Gitea
# Usage: issue-comment.sh -i <issue_number> -c <comment>

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
ISSUE_NUMBER=""
COMMENT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE_NUMBER="$2"
            shift 2
            ;;
        -c|--comment)
            COMMENT="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: issue-comment.sh -i <issue_number> -c <comment>"
            echo ""
            echo "Options:"
            echo "  -i, --issue    Issue number (required)"
            echo "  -c, --comment  Comment text (required)"
            echo "  -h, --help     Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$ISSUE_NUMBER" ]]; then
    echo "Error: Issue number is required (-i)"
    exit 1
fi

if [[ -z "$COMMENT" ]]; then
    echo "Error: Comment is required (-c)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    gh issue comment "$ISSUE_NUMBER" --body "$COMMENT"
    echo "Added comment to GitHub issue #$ISSUE_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    tea issue comment "$ISSUE_NUMBER" "$COMMENT"
    echo "Added comment to Gitea issue #$ISSUE_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
80  rails/git/issue-create.ps1  Normal file
@@ -0,0 +1,80 @@
# issue-create.ps1 - Create issues on Gitea or GitHub
# Usage: .\issue-create.ps1 -Title "Title" [-Body "Body"] [-Labels "label1,label2"] [-Milestone "milestone"]

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [Alias("t")]
    [string]$Title,

    [Alias("b")]
    [string]$Body,

    [Alias("l")]
    [string]$Labels,

    [Alias("m")]
    [string]$Milestone,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: issue-create.ps1 [OPTIONS]

Create an issue on the current repository (Gitea or GitHub).

Options:
  -Title, -t TITLE       Issue title (required)
  -Body, -b BODY         Issue body/description
  -Labels, -l LABELS     Comma-separated labels (e.g., "bug,feature")
  -Milestone, -m NAME    Milestone name to assign
  -Help, -h              Show this help message

Examples:
  .\issue-create.ps1 -Title "Fix login bug" -Labels "bug,priority-high"
  .\issue-create.ps1 -t "Add dark mode" -b "Implement theme switching" -m "0.2.0"
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        $cmd = @("gh", "issue", "create", "--title", $Title)
        if ($Body) { $cmd += @("--body", $Body) }
        if ($Labels) { $cmd += @("--label", $Labels) }
        if ($Milestone) { $cmd += @("--milestone", $Milestone) }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    "gitea" {
        $cmd = @("tea", "issue", "create", "--title", $Title)
        if ($Body) { $cmd += @("--description", $Body) }
        if ($Labels) { $cmd += @("--labels", $Labels) }
        if ($Milestone) {
            # Try to get milestone ID by name
            $milestoneList = tea milestones list 2>$null
            $milestoneId = ($milestoneList | Select-String "^\s*(\d+).*$Milestone" | ForEach-Object { $_.Matches.Groups[1].Value } | Select-Object -First 1)
            if ($milestoneId) {
                $cmd += @("--milestone", $milestoneId)
            } else {
                Write-Warning "Could not find milestone '$Milestone', creating without milestone"
            }
        }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
92  rails/git/issue-create.sh  Executable file
@@ -0,0 +1,92 @@
#!/bin/bash
# issue-create.sh - Create issues on Gitea or GitHub
# Usage: issue-create.sh -t "Title" [-b "Body"] [-l "label1,label2"] [-m "milestone"]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
TITLE=""
BODY=""
LABELS=""
MILESTONE=""

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Create an issue on the current repository (Gitea or GitHub).

Options:
  -t, --title TITLE       Issue title (required)
  -b, --body BODY         Issue body/description
  -l, --labels LABELS     Comma-separated labels (e.g., "bug,feature")
  -m, --milestone NAME    Milestone name to assign
  -h, --help              Show this help message

Examples:
  $(basename "$0") -t "Fix login bug" -l "bug,priority-high"
  $(basename "$0") -t "Add dark mode" -b "Implement theme switching" -m "0.2.0"
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -t|--title)
            TITLE="$2"
            shift 2
            ;;
        -b|--body)
            BODY="$2"
            shift 2
            ;;
        -l|--labels)
            LABELS="$2"
            shift 2
            ;;
        -m|--milestone)
            MILESTONE="$2"
            shift 2
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

if [[ -z "$TITLE" ]]; then
    echo "Error: Title is required (-t)" >&2
    usage
fi

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        CMD="gh issue create --title \"$TITLE\""
        [[ -n "$BODY" ]] && CMD="$CMD --body \"$BODY\""
        [[ -n "$LABELS" ]] && CMD="$CMD --label \"$LABELS\""
        [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
        eval "$CMD"
        ;;
    gitea)
        CMD="tea issue create --title \"$TITLE\""
        [[ -n "$BODY" ]] && CMD="$CMD --description \"$BODY\""
        [[ -n "$LABELS" ]] && CMD="$CMD --labels \"$LABELS\""
        # tea accepts milestone by name directly (verified 2026-02-05)
        [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
        eval "$CMD"
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
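The string-building-plus-`eval` pattern used in these scripts is the committed approach; where titles may contain embedded quotes, building the argument list as a bash array avoids the quoting round-trip entirely. A minimal sketch (the `run` function is a stand-in printer, not the real `gh` CLI, so the example is self-contained):

```shell
#!/bin/bash
# Sketch: collect flags in a bash array instead of a quoted string, so
# titles with spaces or quotes survive without eval. The flags mirror the
# gh invocation above; 'run' stands in for gh here.
run() { printf '%s\n' "$@"; }

TITLE='Fix "login" bug'
BODY=""
LABELS="bug,priority-high"

args=(issue create --title "$TITLE")
[[ -n "$BODY" ]] && args+=(--body "$BODY")
[[ -n "$LABELS" ]] && args+=(--label "$LABELS")
run "${args[@]}"
```

Each element of `args` is passed to the command as one argv entry, so no re-quoting or escaping is needed when the title contains shell metacharacters.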
84  rails/git/issue-edit.sh  Executable file
@@ -0,0 +1,84 @@
#!/bin/bash
# issue-edit.sh - Edit an issue on GitHub or Gitea
# Usage: issue-edit.sh -i <issue_number> [-t <title>] [-b <body>] [-l <labels>] [-m <milestone>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
ISSUE_NUMBER=""
TITLE=""
BODY=""
LABELS=""
MILESTONE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE_NUMBER="$2"
            shift 2
            ;;
        -t|--title)
            TITLE="$2"
            shift 2
            ;;
        -b|--body)
            BODY="$2"
            shift 2
            ;;
        -l|--labels)
            LABELS="$2"
            shift 2
            ;;
        -m|--milestone)
            MILESTONE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: issue-edit.sh -i <issue_number> [-t <title>] [-b <body>] [-l <labels>] [-m <milestone>]"
            echo ""
            echo "Options:"
            echo "  -i, --issue      Issue number (required)"
            echo "  -t, --title      New title"
            echo "  -b, --body       New body/description"
            echo "  -l, --labels     Labels (comma-separated, replaces existing)"
            echo "  -m, --milestone  Milestone name"
            echo "  -h, --help       Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$ISSUE_NUMBER" ]]; then
    echo "Error: Issue number is required (-i)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    CMD="gh issue edit $ISSUE_NUMBER"
    [[ -n "$TITLE" ]] && CMD="$CMD --title \"$TITLE\""
    [[ -n "$BODY" ]] && CMD="$CMD --body \"$BODY\""
    [[ -n "$LABELS" ]] && CMD="$CMD --add-label \"$LABELS\""
    [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
    eval "$CMD"
    echo "Updated GitHub issue #$ISSUE_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    CMD="tea issue edit $ISSUE_NUMBER"
    [[ -n "$TITLE" ]] && CMD="$CMD --title \"$TITLE\""
    [[ -n "$BODY" ]] && CMD="$CMD --description \"$BODY\""
    [[ -n "$LABELS" ]] && CMD="$CMD --add-labels \"$LABELS\""
    [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
    eval "$CMD"
    echo "Updated Gitea issue #$ISSUE_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
78  rails/git/issue-list.ps1  Normal file
@@ -0,0 +1,78 @@
# issue-list.ps1 - List issues on Gitea or GitHub
# Usage: .\issue-list.ps1 [-State state] [-Label label] [-Milestone milestone] [-Assignee assignee]

[CmdletBinding()]
param(
    [Alias("s")]
    [ValidateSet("open", "closed", "all")]
    [string]$State = "open",

    [Alias("l")]
    [string]$Label,

    [Alias("m")]
    [string]$Milestone,

    [Alias("a")]
    [string]$Assignee,

    [Alias("n")]
    [int]$Limit = 100,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: issue-list.ps1 [OPTIONS]

List issues from the current repository (Gitea or GitHub).

Options:
  -State, -s STATE      Filter by state: open, closed, all (default: open)
  -Label, -l LABEL      Filter by label
  -Milestone, -m NAME   Filter by milestone name
  -Assignee, -a USER    Filter by assignee
  -Limit, -n N          Maximum issues to show (default: 100)
  -Help, -h             Show this help message

Examples:
  .\issue-list.ps1                    # List open issues
  .\issue-list.ps1 -s all -l bug      # All issues with 'bug' label
  .\issue-list.ps1 -m "0.2.0"         # Issues in milestone 0.2.0
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        $cmd = @("gh", "issue", "list", "--state", $State, "--limit", $Limit)
        if ($Label) { $cmd += @("--label", $Label) }
        if ($Milestone) { $cmd += @("--milestone", $Milestone) }
        if ($Assignee) { $cmd += @("--assignee", $Assignee) }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    "gitea" {
        $cmd = @("tea", "issues", "list", "--state", $State, "--limit", $Limit)
        if ($Label) { $cmd += @("--labels", $Label) }
        if ($Milestone) { $cmd += @("--milestones", $Milestone) }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
        if ($Assignee) {
            Write-Warning "Assignee filtering may require manual review for Gitea"
        }
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
96  rails/git/issue-list.sh  Executable file
@@ -0,0 +1,96 @@
#!/bin/bash
# issue-list.sh - List issues on Gitea or GitHub
# Usage: issue-list.sh [-s state] [-l label] [-m milestone] [-a assignee]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
STATE="open"
LABEL=""
MILESTONE=""
ASSIGNEE=""
LIMIT=100

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

List issues from the current repository (Gitea or GitHub).

Options:
  -s, --state STATE       Filter by state: open, closed, all (default: open)
  -l, --label LABEL       Filter by label
  -m, --milestone NAME    Filter by milestone name
  -a, --assignee USER     Filter by assignee
  -n, --limit N           Maximum issues to show (default: 100)
  -h, --help              Show this help message

Examples:
  $(basename "$0")                    # List open issues
  $(basename "$0") -s all -l bug      # All issues with 'bug' label
  $(basename "$0") -m "0.2.0"         # Issues in milestone 0.2.0
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -s|--state)
            STATE="$2"
            shift 2
            ;;
        -l|--label)
            LABEL="$2"
            shift 2
            ;;
        -m|--milestone)
            MILESTONE="$2"
            shift 2
            ;;
        -a|--assignee)
            ASSIGNEE="$2"
            shift 2
            ;;
        -n|--limit)
            LIMIT="$2"
            shift 2
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        CMD="gh issue list --state $STATE --limit $LIMIT"
        [[ -n "$LABEL" ]] && CMD="$CMD --label \"$LABEL\""
        [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
        [[ -n "$ASSIGNEE" ]] && CMD="$CMD --assignee \"$ASSIGNEE\""
        eval "$CMD"
        ;;
    gitea)
        CMD="tea issues list --state $STATE --limit $LIMIT"
        [[ -n "$LABEL" ]] && CMD="$CMD --labels \"$LABEL\""
        [[ -n "$MILESTONE" ]] && CMD="$CMD --milestones \"$MILESTONE\""
        # Note: tea may not support assignee filter directly
        eval "$CMD"
        if [[ -n "$ASSIGNEE" ]]; then
            echo "Note: Assignee filtering may require manual review for Gitea" >&2
        fi
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
62  rails/git/issue-reopen.sh  Executable file
@@ -0,0 +1,62 @@
#!/bin/bash
# issue-reopen.sh - Reopen a closed issue on GitHub or Gitea
# Usage: issue-reopen.sh -i <issue_number> [-c <comment>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
ISSUE_NUMBER=""
COMMENT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE_NUMBER="$2"
            shift 2
            ;;
        -c|--comment)
            COMMENT="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: issue-reopen.sh -i <issue_number> [-c <comment>]"
            echo ""
            echo "Options:"
            echo "  -i, --issue    Issue number (required)"
            echo "  -c, --comment  Comment to add when reopening (optional)"
            echo "  -h, --help     Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$ISSUE_NUMBER" ]]; then
    echo "Error: Issue number is required (-i)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    if [[ -n "$COMMENT" ]]; then
        gh issue comment "$ISSUE_NUMBER" --body "$COMMENT"
    fi
    gh issue reopen "$ISSUE_NUMBER"
    echo "Reopened GitHub issue #$ISSUE_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    if [[ -n "$COMMENT" ]]; then
        tea issue comment "$ISSUE_NUMBER" "$COMMENT"
    fi
    tea issue reopen "$ISSUE_NUMBER"
    echo "Reopened Gitea issue #$ISSUE_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
48  rails/git/issue-view.sh  Executable file
@@ -0,0 +1,48 @@
#!/bin/bash
# issue-view.sh - View issue details on GitHub or Gitea
# Usage: issue-view.sh -i <issue_number>

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
ISSUE_NUMBER=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -i|--issue)
            ISSUE_NUMBER="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: issue-view.sh -i <issue_number>"
            echo ""
            echo "Options:"
            echo "  -i, --issue  Issue number (required)"
            echo "  -h, --help   Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$ISSUE_NUMBER" ]]; then
    echo "Error: Issue number is required (-i)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    gh issue view "$ISSUE_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    tea issue "$ISSUE_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
50  rails/git/milestone-close.sh  Executable file
@@ -0,0 +1,50 @@
#!/bin/bash
# milestone-close.sh - Close a milestone on GitHub or Gitea
# Usage: milestone-close.sh -t <title>

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
TITLE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -t|--title)
            TITLE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: milestone-close.sh -t <title>"
            echo ""
            echo "Options:"
            echo "  -t, --title  Milestone title (required)"
            echo "  -h, --help   Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$TITLE" ]]; then
    echo "Error: Milestone title is required (-t)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    gh api -X PATCH "/repos/{owner}/{repo}/milestones/$(gh api "/repos/{owner}/{repo}/milestones" --jq ".[] | select(.title==\"$TITLE\") | .number")" -f state=closed
    echo "Closed GitHub milestone: $TITLE"
elif [[ "$PLATFORM" == "gitea" ]]; then
    tea milestone close "$TITLE"
    echo "Closed Gitea milestone: $TITLE"
else
    echo "Error: Unknown platform"
    exit 1
fi
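The GitHub branch of milestone-close.sh nests one `gh api` call inside another to resolve the milestone title to its number before patching it closed. Split into two steps, the flow is easier to follow and the "not found" case can be reported; a sketch in which `gh` is stubbed to a fixed answer so the example is self-contained (the real call is the `--jq 'select(.title==...) | .number'` lookup above):

```shell
#!/bin/bash
# Sketch: resolve milestone title -> number, then act on the number.
# 'gh' is a stub that returns a fixed milestone number here.
gh() { printf '7\n'; }

TITLE="0.2.0"
MILESTONE_NUMBER="$(gh api "/repos/{owner}/{repo}/milestones" --jq ".[] | select(.title==\"$TITLE\") | .number")"
if [[ -z "$MILESTONE_NUMBER" ]]; then
    echo "Error: milestone '$TITLE' not found" >&2
    exit 1
fi
echo "would close milestone #$MILESTONE_NUMBER"
```

With the lookup in its own variable, an empty result can abort with a clear error instead of issuing a PATCH against a malformed URL.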
98  rails/git/milestone-create.ps1  Normal file
@@ -0,0 +1,98 @@
# milestone-create.ps1 - Create milestones on Gitea or GitHub
# Usage: .\milestone-create.ps1 -Title "Title" [-Description "Description"] [-Due "YYYY-MM-DD"]

[CmdletBinding()]
param(
    [Alias("t")]
    [string]$Title,

    [Alias("d")]
    [string]$Description,

    [string]$Due,

    [switch]$List,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: milestone-create.ps1 [OPTIONS]

Create or list milestones on the current repository (Gitea or GitHub).

Versioning Convention:
  - Features get dedicated milestones
  - Pre-release: 0.X.0 for breaking changes, 0.X.Y for patches
  - Post-release: X.0.0 for breaking changes
  - MVP starts at 0.1.0

Options:
  -Title, -t TITLE         Milestone title/version (e.g., "0.2.0")
  -Description, -d DESC    Milestone description
  -Due DATE                Due date (YYYY-MM-DD format)
  -List                    List existing milestones
  -Help, -h                Show this help message

Examples:
  .\milestone-create.ps1 -List
  .\milestone-create.ps1 -t "0.1.0" -d "MVP Release"
  .\milestone-create.ps1 -t "0.2.0" -d "User Authentication Feature" -Due "2025-03-01"
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

if ($List) {
    switch ($platform) {
        "github" {
            gh api repos/:owner/:repo/milestones --jq '.[] | "\(.number)`t\(.title)`t\(.state)`t\(.open_issues)/\(.closed_issues) issues"'
        }
        "gitea" {
            tea milestones list
        }
        default {
            Write-Error "Could not detect git platform"
            exit 1
        }
    }
    exit 0
}

if (-not $Title) {
    Write-Error "Title is required (-t) for creating milestones"
    Show-Usage
}

switch ($platform) {
    "github" {
        $payload = @{ title = $Title }
        if ($Description) { $payload.description = $Description }
        if ($Due) { $payload.due_on = "${Due}T00:00:00Z" }

        $json = $payload | ConvertTo-Json -Compress
        $json | gh api repos/:owner/:repo/milestones --method POST --input -
        Write-Host "Milestone '$Title' created successfully"
    }
    "gitea" {
        $cmd = @("tea", "milestones", "create", "--title", $Title)
        if ($Description) { $cmd += @("--description", $Description) }
        if ($Due) { $cmd += @("--deadline", $Due) }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
        Write-Host "Milestone '$Title' created successfully"
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
117  rails/git/milestone-create.sh  Executable file
@@ -0,0 +1,117 @@
#!/bin/bash
# milestone-create.sh - Create milestones on Gitea or GitHub
# Usage: milestone-create.sh -t "Title" [-d "Description"] [--due "YYYY-MM-DD"]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
TITLE=""
DESCRIPTION=""
DUE_DATE=""
LIST_ONLY=false

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Create or list milestones on the current repository (Gitea or GitHub).

Versioning Convention:
  - Features get dedicated milestones
  - Pre-release: 0.X.0 for breaking changes, 0.X.Y for patches
  - Post-release: X.0.0 for breaking changes
  - MVP starts at 0.1.0

Options:
  -t, --title TITLE         Milestone title/version (e.g., "0.2.0")
  -d, --desc DESCRIPTION    Milestone description
  --due DATE                Due date (YYYY-MM-DD format)
  --list                    List existing milestones
  -h, --help                Show this help message

Examples:
  $(basename "$0") --list
  $(basename "$0") -t "0.1.0" -d "MVP Release"
  $(basename "$0") -t "0.2.0" -d "User Authentication Feature" --due "2025-03-01"
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -t|--title)
            TITLE="$2"
            shift 2
            ;;
        -d|--desc)
            DESCRIPTION="$2"
            shift 2
            ;;
        --due)
            DUE_DATE="$2"
            shift 2
            ;;
        --list)
            LIST_ONLY=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

PLATFORM=$(detect_platform)

if [[ "$LIST_ONLY" == true ]]; then
    case "$PLATFORM" in
        github)
            gh api repos/:owner/:repo/milestones --jq '.[] | "\(.number)\t\(.title)\t\(.state)\t\(.open_issues)/\(.closed_issues) issues"'
            ;;
        gitea)
            tea milestones list
            ;;
        *)
            echo "Error: Could not detect git platform" >&2
            exit 1
            ;;
    esac
    exit 0
fi

if [[ -z "$TITLE" ]]; then
    echo "Error: Title is required (-t) for creating milestones" >&2
    usage
fi

case "$PLATFORM" in
    github)
        # GitHub uses the API for milestone creation
        JSON_PAYLOAD="{\"title\":\"$TITLE\""
        [[ -n "$DESCRIPTION" ]] && JSON_PAYLOAD="$JSON_PAYLOAD,\"description\":\"$DESCRIPTION\""
        [[ -n "$DUE_DATE" ]] && JSON_PAYLOAD="$JSON_PAYLOAD,\"due_on\":\"${DUE_DATE}T00:00:00Z\""
        JSON_PAYLOAD="$JSON_PAYLOAD}"

        gh api repos/:owner/:repo/milestones --method POST --input - <<< "$JSON_PAYLOAD"
        echo "Milestone '$TITLE' created successfully"
        ;;
    gitea)
        CMD="tea milestones create --title \"$TITLE\""
        [[ -n "$DESCRIPTION" ]] && CMD="$CMD --description \"$DESCRIPTION\""
        [[ -n "$DUE_DATE" ]] && CMD="$CMD --deadline \"$DUE_DATE\""
        eval "$CMD"
        echo "Milestone '$TITLE' created successfully"
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
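Both milestone-create variants forward the due date to the API unvalidated, so a malformed date only fails at the server. A cheap local pre-check could catch the common case first; a sketch with a hypothetical helper that is not part of the committed scripts:

```shell
#!/bin/bash
# Sketch: validate a YYYY-MM-DD due date before building the API payload.
# is_iso_date is a hypothetical helper, not part of the scripts above.
is_iso_date() {
    [[ "$1" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}$ ]]
}

is_iso_date "2025-03-01" && echo "ok: 2025-03-01"
is_iso_date "03/01/2025" || echo "rejected: 03/01/2025"
```

This only checks the shape, not calendar validity, but it turns a server-side API error into an immediate local one.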
43  rails/git/milestone-list.sh  Executable file
@@ -0,0 +1,43 @@
#!/bin/bash
# milestone-list.sh - List milestones on GitHub or Gitea
# Usage: milestone-list.sh [-s <state>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
STATE="open"

while [[ $# -gt 0 ]]; do
    case $1 in
        -s|--state)
            STATE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: milestone-list.sh [-s <state>]"
            echo ""
            echo "Options:"
            echo "  -s, --state  Filter by state: open, closed, all (default: open)"
            echo "  -h, --help   Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    gh api "/repos/{owner}/{repo}/milestones?state=$STATE" --jq '.[] | "\(.title) (\(.state)) - \(.open_issues) open, \(.closed_issues) closed"'
elif [[ "$PLATFORM" == "gitea" ]]; then
    tea milestone list
else
    echo "Error: Unknown platform"
    exit 1
fi
62  rails/git/pr-close.sh  Executable file
@@ -0,0 +1,62 @@
#!/bin/bash
# pr-close.sh - Close a pull request without merging on GitHub or Gitea
# Usage: pr-close.sh -n <pr_number> [-c <comment>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
PR_NUMBER=""
COMMENT=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--number)
            PR_NUMBER="$2"
            shift 2
            ;;
        -c|--comment)
            COMMENT="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: pr-close.sh -n <pr_number> [-c <comment>]"
            echo ""
            echo "Options:"
            echo "  -n, --number   PR number (required)"
            echo "  -c, --comment  Comment before closing (optional)"
            echo "  -h, --help     Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$PR_NUMBER" ]]; then
    echo "Error: PR number is required (-n)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    if [[ -n "$COMMENT" ]]; then
        gh pr comment "$PR_NUMBER" --body "$COMMENT"
    fi
    gh pr close "$PR_NUMBER"
    echo "Closed GitHub PR #$PR_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    if [[ -n "$COMMENT" ]]; then
        tea pr comment "$PR_NUMBER" "$COMMENT"
    fi
    tea pr close "$PR_NUMBER"
    echo "Closed Gitea PR #$PR_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
rails/git/pr-create.ps1 (new file, 130 lines)

```powershell
# pr-create.ps1 - Create pull requests on Gitea or GitHub
# Usage: .\pr-create.ps1 -Title "Title" [-Body "Body"] [-Base base] [-Head head] [-Labels "labels"] [-Milestone "milestone"]

[CmdletBinding()]
param(
    [Alias("t")]
    [string]$Title,

    [Alias("b")]
    [string]$Body,

    [Alias("B")]
    [string]$Base,

    [Alias("H")]
    [string]$Head,

    [Alias("l")]
    [string]$Labels,

    [Alias("m")]
    [string]$Milestone,

    [Alias("i")]
    [int]$Issue,

    [Alias("d")]
    [switch]$Draft,

    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: pr-create.ps1 [OPTIONS]

Create a pull request on the current repository (Gitea or GitHub).

Options:
  -Title, -t TITLE      PR title (required, or use -Issue)
  -Body, -b BODY        PR description/body
  -Base, -B BRANCH      Base branch to merge into (default: main/master)
  -Head, -H BRANCH      Head branch with changes (default: current branch)
  -Labels, -l LABELS    Comma-separated labels
  -Milestone, -m NAME   Milestone name
  -Issue, -i NUMBER     Link to issue (auto-generates title if not provided)
  -Draft, -d            Create as draft PR
  -Help                 Show this help message

Examples:
  .\pr-create.ps1 -Title "Add login feature" -Body "Implements user authentication"
  .\pr-create.ps1 -t "Fix bug" -B main -H feature/fix-123
  .\pr-create.ps1 -i 42 -b "Implements the feature described in #42"
  .\pr-create.ps1 -t "WIP: New feature" -Draft
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

# If no title but issue provided, generate title
if (-not $Title -and $Issue) {
    $Title = "Fixes #$Issue"
}

if (-not $Title) {
    Write-Error "Title is required (-t) or provide an issue (-i)"
    Show-Usage
}

# Default head branch to current branch
if (-not $Head) {
    $Head = git branch --show-current
}

# Add issue reference to body if provided
if ($Issue) {
    if ($Body) {
        $Body = "$Body`n`nFixes #$Issue"
    } else {
        $Body = "Fixes #$Issue"
    }
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        $cmd = @("gh", "pr", "create", "--title", $Title)
        if ($Body) { $cmd += @("--body", $Body) }
        if ($Base) { $cmd += @("--base", $Base) }
        if ($Head) { $cmd += @("--head", $Head) }
        if ($Labels) { $cmd += @("--label", $Labels) }
        if ($Milestone) { $cmd += @("--milestone", $Milestone) }
        if ($Draft) { $cmd += "--draft" }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    "gitea" {
        $cmd = @("tea", "pr", "create", "--title", $Title)
        if ($Body) { $cmd += @("--description", $Body) }
        if ($Base) { $cmd += @("--base", $Base) }
        if ($Head) { $cmd += @("--head", $Head) }
        if ($Labels) { $cmd += @("--labels", $Labels) }

        if ($Milestone) {
            $milestoneList = tea milestones list 2>$null
            $milestoneId = ($milestoneList | Select-String "^\s*(\d+).*$Milestone" | ForEach-Object { $_.Matches.Groups[1].Value } | Select-Object -First 1)
            if ($milestoneId) {
                $cmd += @("--milestone", $milestoneId)
            } else {
                Write-Warning "Could not find milestone '$Milestone', creating without milestone"
            }
        }

        if ($Draft) {
            Write-Warning "Draft PR may not be supported by your tea version"
        }

        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
```
rails/git/pr-create.sh (new executable file, 164 lines)

```bash
#!/bin/bash
# pr-create.sh - Create pull requests on Gitea or GitHub
# Usage: pr-create.sh -t "Title" [-b "Body"] [-B base] [-H head] [-l "labels"] [-m "milestone"]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
TITLE=""
BODY=""
BASE_BRANCH=""
HEAD_BRANCH=""
LABELS=""
MILESTONE=""
DRAFT=false
ISSUE=""

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Create a pull request on the current repository (Gitea or GitHub).

Options:
  -t, --title TITLE      PR title (required, or use --issue)
  -b, --body BODY        PR description/body
  -B, --base BRANCH      Base branch to merge into (default: main/master)
  -H, --head BRANCH      Head branch with changes (default: current branch)
  -l, --labels LABELS    Comma-separated labels
  -m, --milestone NAME   Milestone name
  -i, --issue NUMBER     Link to issue (auto-generates title if not provided)
  -d, --draft            Create as draft PR
  -h, --help             Show this help message

Examples:
  $(basename "$0") -t "Add login feature" -b "Implements user authentication"
  $(basename "$0") -t "Fix bug" -B main -H feature/fix-123
  $(basename "$0") -i 42 -b "Implements the feature described in #42"
  $(basename "$0") -t "WIP: New feature" --draft
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -t|--title)
            TITLE="$2"
            shift 2
            ;;
        -b|--body)
            BODY="$2"
            shift 2
            ;;
        -B|--base)
            BASE_BRANCH="$2"
            shift 2
            ;;
        -H|--head)
            HEAD_BRANCH="$2"
            shift 2
            ;;
        -l|--labels)
            LABELS="$2"
            shift 2
            ;;
        -m|--milestone)
            MILESTONE="$2"
            shift 2
            ;;
        -i|--issue)
            ISSUE="$2"
            shift 2
            ;;
        -d|--draft)
            DRAFT=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

# If no title but issue provided, generate title
if [[ -z "$TITLE" ]] && [[ -n "$ISSUE" ]]; then
    TITLE="Fixes #$ISSUE"
fi

if [[ -z "$TITLE" ]]; then
    echo "Error: Title is required (-t) or provide an issue (-i)" >&2
    usage
fi

# Default head branch to current branch
if [[ -z "$HEAD_BRANCH" ]]; then
    HEAD_BRANCH=$(git branch --show-current)
fi

# Add issue reference to body if provided
if [[ -n "$ISSUE" ]]; then
    if [[ -n "$BODY" ]]; then
        BODY="$BODY

Fixes #$ISSUE"
    else
        BODY="Fixes #$ISSUE"
    fi
fi

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        CMD="gh pr create --title \"$TITLE\""
        [[ -n "$BODY" ]] && CMD="$CMD --body \"$BODY\""
        [[ -n "$BASE_BRANCH" ]] && CMD="$CMD --base \"$BASE_BRANCH\""
        [[ -n "$HEAD_BRANCH" ]] && CMD="$CMD --head \"$HEAD_BRANCH\""
        [[ -n "$LABELS" ]] && CMD="$CMD --label \"$LABELS\""
        [[ -n "$MILESTONE" ]] && CMD="$CMD --milestone \"$MILESTONE\""
        [[ "$DRAFT" == true ]] && CMD="$CMD --draft"
        eval "$CMD"
        ;;
    gitea)
        # tea pull create syntax
        CMD="tea pr create --title \"$TITLE\""
        [[ -n "$BODY" ]] && CMD="$CMD --description \"$BODY\""
        [[ -n "$BASE_BRANCH" ]] && CMD="$CMD --base \"$BASE_BRANCH\""
        [[ -n "$HEAD_BRANCH" ]] && CMD="$CMD --head \"$HEAD_BRANCH\""

        # Handle labels for tea
        if [[ -n "$LABELS" ]]; then
            # tea may use --labels flag
            CMD="$CMD --labels \"$LABELS\""
        fi

        # Handle milestone for tea
        if [[ -n "$MILESTONE" ]]; then
            MILESTONE_ID=$(tea milestones list 2>/dev/null | grep -E "^\s*[0-9]+" | grep "$MILESTONE" | awk '{print $1}' | head -1)
            if [[ -n "$MILESTONE_ID" ]]; then
                CMD="$CMD --milestone $MILESTONE_ID"
            else
                echo "Warning: Could not find milestone '$MILESTONE', creating without milestone" >&2
            fi
        fi

        # Note: tea may not support --draft flag in all versions
        if [[ "$DRAFT" == true ]]; then
            echo "Note: Draft PR may not be supported by your tea version" >&2
        fi

        eval "$CMD"
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
```
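The bash scripts in this diff assemble the final command as a string and run it with `eval`, while the PowerShell ports build an argument array and splat it. The array pattern works in bash too and avoids re-parsing quoted values; a sketch, with `echo` standing in for `gh`/`tea`:

```shell
# Build the argument list as a bash array instead of a string + eval.
# Quoted values (titles with spaces, comma-separated labels) survive
# intact because the array is expanded once, not re-parsed by eval.
cmd=(echo pr create --title "Add login feature")
cmd+=(--label "bug,ui")
"${cmd[@]}"   # prints: pr create --title Add login feature --label bug,ui
```

With `eval`, a title containing a double quote or `$` would be re-interpreted by the shell; the array expansion passes each element through verbatim.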
rails/git/pr-diff.sh (new executable file, 88 lines)

```bash
#!/bin/bash
# pr-diff.sh - Get the diff for a pull request on GitHub or Gitea
# Usage: pr-diff.sh -n <pr_number> [-o <output_file>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
PR_NUMBER=""
OUTPUT_FILE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--number)
            PR_NUMBER="$2"
            shift 2
            ;;
        -o|--output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: pr-diff.sh -n <pr_number> [-o <output_file>]"
            echo ""
            echo "Options:"
            echo "  -n, --number    PR number (required)"
            echo "  -o, --output    Output file (optional, prints to stdout if omitted)"
            echo "  -h, --help      Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$PR_NUMBER" ]]; then
    echo "Error: PR number is required (-n)" >&2
    exit 1
fi

detect_platform > /dev/null

if [[ "$PLATFORM" == "github" ]]; then
    if [[ -n "$OUTPUT_FILE" ]]; then
        gh pr diff "$PR_NUMBER" > "$OUTPUT_FILE"
    else
        gh pr diff "$PR_NUMBER"
    fi
elif [[ "$PLATFORM" == "gitea" ]]; then
    # tea doesn't have a direct diff command; use the API
    OWNER=$(get_repo_owner)
    REPO=$(get_repo_name)
    REMOTE_URL=$(git remote get-url origin 2>/dev/null)

    # Extract host from remote URL
    if [[ "$REMOTE_URL" == https://* ]]; then
        HOST=$(echo "$REMOTE_URL" | sed -E 's|https://([^/]+)/.*|\1|')
    elif [[ "$REMOTE_URL" == git@* ]]; then
        HOST=$(echo "$REMOTE_URL" | sed -E 's|git@([^:]+):.*|\1|')
    else
        echo "Error: Cannot determine host from remote URL" >&2
        exit 1
    fi

    DIFF_URL="https://${HOST}/api/v1/repos/${OWNER}/${REPO}/pulls/${PR_NUMBER}.diff"

    # Use tea's auth token if available
    TEA_TOKEN=$(tea login list 2>/dev/null | grep "$HOST" | awk '{print $NF}' || true)

    if [[ -n "$TEA_TOKEN" ]]; then
        DIFF_CONTENT=$(curl -sS -H "Authorization: token $TEA_TOKEN" "$DIFF_URL")
    else
        DIFF_CONTENT=$(curl -sS "$DIFF_URL")
    fi

    if [[ -n "$OUTPUT_FILE" ]]; then
        echo "$DIFF_CONTENT" > "$OUTPUT_FILE"
    else
        echo "$DIFF_CONTENT"
    fi
else
    echo "Error: Unknown platform" >&2
    exit 1
fi
```
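The host-extraction logic above (shared with `pr-metadata.sh`) handles the two remote URL styles git commonly produces. Pulled out into a function, it can be exercised directly (the example URLs are made up):

```shell
# Extract the host part from an HTTPS or SSH git remote URL, exactly
# as pr-diff.sh / pr-metadata.sh do it.
host_from_remote() {
    local url="$1"
    if [[ "$url" == https://* ]]; then
        # https://HOST/owner/repo.git -> HOST
        echo "$url" | sed -E 's|https://([^/]+)/.*|\1|'
    elif [[ "$url" == git@* ]]; then
        # git@HOST:owner/repo.git -> HOST
        echo "$url" | sed -E 's|git@([^:]+):.*|\1|'
    else
        return 1
    fi
}

host_from_remote "https://gitea.example.com/owner/repo.git"   # gitea.example.com
host_from_remote "git@github.com:owner/repo.git"              # github.com
```

Note that neither branch handles `ssh://` URLs; remotes in that form hit the error path.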
rails/git/pr-list.ps1 (new file, 76 lines)

```powershell
# pr-list.ps1 - List pull requests on Gitea or GitHub
# Usage: .\pr-list.ps1 [-State state] [-Label label] [-Author author]

[CmdletBinding()]
param(
    [Alias("s")]
    [ValidateSet("open", "closed", "merged", "all")]
    [string]$State = "open",

    [Alias("l")]
    [string]$Label,

    [Alias("a")]
    [string]$Author,

    [Alias("n")]
    [int]$Limit = 100,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: pr-list.ps1 [OPTIONS]

List pull requests from the current repository (Gitea or GitHub).

Options:
  -State, -s STATE    Filter by state: open, closed, merged, all (default: open)
  -Label, -l LABEL    Filter by label
  -Author, -a USER    Filter by author
  -Limit, -n N        Maximum PRs to show (default: 100)
  -Help, -h           Show this help message

Examples:
  .\pr-list.ps1                         # List open PRs
  .\pr-list.ps1 -s all                  # All PRs
  .\pr-list.ps1 -s merged -a username   # Merged PRs by user
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        $cmd = @("gh", "pr", "list", "--state", $State, "--limit", $Limit)
        if ($Label) { $cmd += @("--label", $Label) }
        if ($Author) { $cmd += @("--author", $Author) }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    "gitea" {
        $cmd = @("tea", "pr", "list", "--state", $State, "--limit", $Limit)

        if ($Label) {
            Write-Warning "Label filtering may require manual review for Gitea"
        }
        if ($Author) {
            Write-Warning "Author filtering may require manual review for Gitea"
        }

        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}
```
rails/git/pr-list.sh (new executable file, 93 lines)

```bash
#!/bin/bash
# pr-list.sh - List pull requests on Gitea or GitHub
# Usage: pr-list.sh [-s state] [-l label] [-a author]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
STATE="open"
LABEL=""
AUTHOR=""
LIMIT=100

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

List pull requests from the current repository (Gitea or GitHub).

Options:
  -s, --state STATE    Filter by state: open, closed, merged, all (default: open)
  -l, --label LABEL    Filter by label
  -a, --author USER    Filter by author
  -n, --limit N        Maximum PRs to show (default: 100)
  -h, --help           Show this help message

Examples:
  $(basename "$0")                          # List open PRs
  $(basename "$0") -s all                   # All PRs
  $(basename "$0") -s merged -a username    # Merged PRs by user
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -s|--state)
            STATE="$2"
            shift 2
            ;;
        -l|--label)
            LABEL="$2"
            shift 2
            ;;
        -a|--author)
            AUTHOR="$2"
            shift 2
            ;;
        -n|--limit)
            LIMIT="$2"
            shift 2
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        CMD="gh pr list --state $STATE --limit $LIMIT"
        [[ -n "$LABEL" ]] && CMD="$CMD --label \"$LABEL\""
        [[ -n "$AUTHOR" ]] && CMD="$CMD --author \"$AUTHOR\""
        eval "$CMD"
        ;;
    gitea)
        # tea pr list - note: tea uses 'pulls' subcommand in some versions
        CMD="tea pr list --state $STATE --limit $LIMIT"

        # tea filtering may be limited
        if [[ -n "$LABEL" ]]; then
            echo "Note: Label filtering may require manual review for Gitea" >&2
        fi
        if [[ -n "$AUTHOR" ]]; then
            echo "Note: Author filtering may require manual review for Gitea" >&2
        fi

        eval "$CMD"
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac
```
rails/git/pr-merge.ps1 (new file, 81 lines)

```powershell
# pr-merge.ps1 - Merge pull requests on Gitea or GitHub
# Usage: .\pr-merge.ps1 -Number PR_NUMBER [-Method method] [-DeleteBranch]

[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [Alias("n")]
    [int]$Number,

    [Alias("m")]
    [ValidateSet("merge", "squash", "rebase")]
    [string]$Method = "merge",

    [Alias("d")]
    [switch]$DeleteBranch,

    [Alias("h")]
    [switch]$Help
)

$ScriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
. "$ScriptDir\detect-platform.ps1"

function Show-Usage {
    @"
Usage: pr-merge.ps1 [OPTIONS]

Merge a pull request on the current repository (Gitea or GitHub).

Options:
  -Number, -n NUMBER   PR number to merge (required)
  -Method, -m METHOD   Merge method: merge, squash, rebase (default: merge)
  -DeleteBranch, -d    Delete the head branch after merge
  -Help, -h            Show this help message

Examples:
  .\pr-merge.ps1 -n 42                 # Merge PR #42
  .\pr-merge.ps1 -n 42 -m squash       # Squash merge
  .\pr-merge.ps1 -n 42 -m rebase -d    # Rebase and delete branch
"@
    exit 1
}

if ($Help) {
    Show-Usage
}

$platform = Get-GitPlatform

switch ($platform) {
    "github" {
        $cmd = @("gh", "pr", "merge", $Number)
        switch ($Method) {
            "merge" { $cmd += "--merge" }
            "squash" { $cmd += "--squash" }
            "rebase" { $cmd += "--rebase" }
        }
        if ($DeleteBranch) { $cmd += "--delete-branch" }
        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    "gitea" {
        $cmd = @("tea", "pr", "merge", $Number)
        switch ($Method) {
            "merge" { $cmd += @("--style", "merge") }
            "squash" { $cmd += @("--style", "squash") }
            "rebase" { $cmd += @("--style", "rebase") }
        }

        if ($DeleteBranch) {
            Write-Warning "Branch deletion after merge may need to be done separately with tea"
        }

        & $cmd[0] $cmd[1..($cmd.Length-1)]
    }
    default {
        Write-Error "Could not detect git platform"
        exit 1
    }
}

Write-Host "PR #$Number merged successfully"
```
rails/git/pr-merge.sh (new executable file, 110 lines)

```bash
#!/bin/bash
# pr-merge.sh - Merge pull requests on Gitea or GitHub
# Usage: pr-merge.sh -n PR_NUMBER [-m method] [-d]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Default values
PR_NUMBER=""
MERGE_METHOD="merge"  # merge, squash, rebase
DELETE_BRANCH=false

usage() {
    cat <<EOF
Usage: $(basename "$0") [OPTIONS]

Merge a pull request on the current repository (Gitea or GitHub).

Options:
  -n, --number NUMBER    PR number to merge (required)
  -m, --method METHOD    Merge method: merge, squash, rebase (default: merge)
  -d, --delete-branch    Delete the head branch after merge
  -h, --help             Show this help message

Examples:
  $(basename "$0") -n 42                  # Merge PR #42
  $(basename "$0") -n 42 -m squash        # Squash merge
  $(basename "$0") -n 42 -m rebase -d     # Rebase and delete branch
EOF
    exit 1
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--number)
            PR_NUMBER="$2"
            shift 2
            ;;
        -m|--method)
            MERGE_METHOD="$2"
            shift 2
            ;;
        -d|--delete-branch)
            DELETE_BRANCH=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo "Unknown option: $1" >&2
            usage
            ;;
    esac
done

if [[ -z "$PR_NUMBER" ]]; then
    echo "Error: PR number is required (-n)" >&2
    usage
fi

PLATFORM=$(detect_platform)

case "$PLATFORM" in
    github)
        CMD="gh pr merge $PR_NUMBER"
        case "$MERGE_METHOD" in
            merge) CMD="$CMD --merge" ;;
            squash) CMD="$CMD --squash" ;;
            rebase) CMD="$CMD --rebase" ;;
            *)
                echo "Error: Invalid merge method '$MERGE_METHOD'" >&2
                exit 1
                ;;
        esac
        [[ "$DELETE_BRANCH" == true ]] && CMD="$CMD --delete-branch"
        eval "$CMD"
        ;;
    gitea)
        # tea pr merge syntax
        CMD="tea pr merge $PR_NUMBER"

        # tea merge style flags
        case "$MERGE_METHOD" in
            merge) CMD="$CMD --style merge" ;;
            squash) CMD="$CMD --style squash" ;;
            rebase) CMD="$CMD --style rebase" ;;
            *)
                echo "Error: Invalid merge method '$MERGE_METHOD'" >&2
                exit 1
                ;;
        esac

        # Delete branch after merge if requested
        if [[ "$DELETE_BRANCH" == true ]]; then
            echo "Note: Branch deletion after merge may need to be done separately with tea" >&2
        fi

        eval "$CMD"
        ;;
    *)
        echo "Error: Could not detect git platform" >&2
        exit 1
        ;;
esac

echo "PR #$PR_NUMBER merged successfully"
```
rails/git/pr-metadata.sh (new executable file, 114 lines)

```bash
#!/bin/bash
# pr-metadata.sh - Get PR metadata as JSON on GitHub or Gitea
# Usage: pr-metadata.sh -n <pr_number> [-o <output_file>]

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
PR_NUMBER=""
OUTPUT_FILE=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--number)
            PR_NUMBER="$2"
            shift 2
            ;;
        -o|--output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: pr-metadata.sh -n <pr_number> [-o <output_file>]"
            echo ""
            echo "Options:"
            echo "  -n, --number    PR number (required)"
            echo "  -o, --output    Output file (optional, prints to stdout if omitted)"
            echo "  -h, --help      Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$PR_NUMBER" ]]; then
    echo "Error: PR number is required (-n)" >&2
    exit 1
fi

detect_platform > /dev/null

if [[ "$PLATFORM" == "github" ]]; then
    METADATA=$(gh pr view "$PR_NUMBER" --json number,title,body,state,author,headRefName,baseRefName,files,labels,assignees,milestone,createdAt,updatedAt,url,isDraft)

    if [[ -n "$OUTPUT_FILE" ]]; then
        echo "$METADATA" > "$OUTPUT_FILE"
    else
        echo "$METADATA"
    fi
elif [[ "$PLATFORM" == "gitea" ]]; then
    OWNER=$(get_repo_owner)
    REPO=$(get_repo_name)
    REMOTE_URL=$(git remote get-url origin 2>/dev/null)

    # Extract host from remote URL
    if [[ "$REMOTE_URL" == https://* ]]; then
        HOST=$(echo "$REMOTE_URL" | sed -E 's|https://([^/]+)/.*|\1|')
    elif [[ "$REMOTE_URL" == git@* ]]; then
        HOST=$(echo "$REMOTE_URL" | sed -E 's|git@([^:]+):.*|\1|')
    else
        echo "Error: Cannot determine host from remote URL" >&2
        exit 1
    fi

    API_URL="https://${HOST}/api/v1/repos/${OWNER}/${REPO}/pulls/${PR_NUMBER}"

    # Use tea's auth token if available
    TEA_TOKEN=$(tea login list 2>/dev/null | grep "$HOST" | awk '{print $NF}' || true)

    if [[ -n "$TEA_TOKEN" ]]; then
        RAW=$(curl -sS -H "Authorization: token $TEA_TOKEN" "$API_URL")
    else
        RAW=$(curl -sS "$API_URL")
    fi

    # Normalize Gitea response to match our expected schema
    METADATA=$(echo "$RAW" | python3 -c "
import json, sys
data = json.load(sys.stdin)
normalized = {
    'number': data.get('number'),
    'title': data.get('title'),
    'body': data.get('body', ''),
    'state': data.get('state'),
    'author': data.get('user', {}).get('login', ''),
    'headRefName': data.get('head', {}).get('ref', ''),
    'baseRefName': data.get('base', {}).get('ref', ''),
    'labels': [l.get('name', '') for l in data.get('labels', [])],
    'assignees': [a.get('login', '') for a in data.get('assignees', [])],
    'milestone': data.get('milestone', {}).get('title', '') if data.get('milestone') else '',
    'createdAt': data.get('created_at', ''),
    'updatedAt': data.get('updated_at', ''),
    'url': data.get('html_url', ''),
    'isDraft': data.get('draft', False),
    'mergeable': data.get('mergeable'),
    'diffUrl': data.get('diff_url', ''),
}
json.dump(normalized, sys.stdout, indent=2)
")

    if [[ -n "$OUTPUT_FILE" ]]; then
        echo "$METADATA" > "$OUTPUT_FILE"
    else
        echo "$METADATA"
    fi
else
    echo "Error: Unknown platform" >&2
    exit 1
fi
```
115
rails/git/pr-review.sh
Executable file
115
rails/git/pr-review.sh
Executable file
@@ -0,0 +1,115 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
# pr-review.sh - Review a pull request on GitHub or Gitea
|
||||||
|
# Usage: pr-review.sh -n <pr_number> -a <action> [-c <comment>]
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
|
source "$SCRIPT_DIR/detect-platform.sh"
|
||||||
|
|
||||||
|
# Parse arguments
|
||||||
|
PR_NUMBER=""
|
||||||
|
ACTION=""
|
||||||
|
COMMENT=""
|
||||||
|
|
||||||
|
while [[ $# -gt 0 ]]; do
|
||||||
|
case $1 in
|
||||||
|
-n|--number)
|
||||||
|
PR_NUMBER="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
-a|--action)
|
||||||
|
ACTION="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
-c|--comment)
|
||||||
|
COMMENT="$2"
|
||||||
|
shift 2
|
||||||
|
;;
|
||||||
|
-h|--help)
|
||||||
|
echo "Usage: pr-review.sh -n <pr_number> -a <action> [-c <comment>]"
|
||||||
|
echo ""
|
||||||
|
echo "Options:"
|
||||||
|
echo " -n, --number PR number (required)"
|
||||||
|
echo " -a, --action Review action: approve, request-changes, comment (required)"
|
||||||
|
echo " -c, --comment Review comment (required for request-changes)"
|
||||||
|
echo " -h, --help Show this help"
|
||||||
|
exit 0
|
||||||
|
;;
|
||||||
|
*)
|
||||||
|
echo "Unknown option: $1"
|
||||||
|
exit 1
|
||||||
|
;;
|
||||||
|
esac
|
||||||
|
done
|
||||||
|
|
||||||
|
if [[ -z "$PR_NUMBER" ]]; then
|
||||||
|
echo "Error: PR number is required (-n)"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -z "$ACTION" ]]; then
|
||||||
|
    echo "Error: Action is required (-a): approve, request-changes, comment"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    case $ACTION in
        approve)
            gh pr review "$PR_NUMBER" --approve ${COMMENT:+--body "$COMMENT"}
            echo "Approved GitHub PR #$PR_NUMBER"
            ;;
        request-changes)
            if [[ -z "$COMMENT" ]]; then
                echo "Error: Comment required for request-changes"
                exit 1
            fi
            gh pr review "$PR_NUMBER" --request-changes --body "$COMMENT"
            echo "Requested changes on GitHub PR #$PR_NUMBER"
            ;;
        comment)
            if [[ -z "$COMMENT" ]]; then
                echo "Error: Comment required"
                exit 1
            fi
            gh pr review "$PR_NUMBER" --comment --body "$COMMENT"
            echo "Added review comment to GitHub PR #$PR_NUMBER"
            ;;
        *)
            echo "Error: Unknown action: $ACTION"
            exit 1
            ;;
    esac
elif [[ "$PLATFORM" == "gitea" ]]; then
    case $ACTION in
        approve)
            tea pr approve "$PR_NUMBER" ${COMMENT:+--comment "$COMMENT"}
            echo "Approved Gitea PR #$PR_NUMBER"
            ;;
        request-changes)
            if [[ -z "$COMMENT" ]]; then
                echo "Error: Comment required for request-changes"
                exit 1
            fi
            tea pr reject "$PR_NUMBER" --comment "$COMMENT"
            echo "Requested changes on Gitea PR #$PR_NUMBER"
            ;;
        comment)
            if [[ -z "$COMMENT" ]]; then
                echo "Error: Comment required"
                exit 1
            fi
            tea pr comment "$PR_NUMBER" "$COMMENT"
            echo "Added comment to Gitea PR #$PR_NUMBER"
            ;;
        *)
            echo "Error: Unknown action: $ACTION"
            exit 1
            ;;
    esac
else
    echo "Error: Unknown platform"
    exit 1
fi
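The `${COMMENT:+--body "$COMMENT"}` expansions above pass the body flag only when a comment was actually supplied. A minimal sketch of that idiom, outside the script:

```shell
# ${VAR:+words} expands to the words only when VAR is set and non-empty,
# so an optional flag and its argument vanish together when unset.
COMMENT=""
set -- --approve ${COMMENT:+--body "$COMMENT"}
echo "$#"    # 1: only --approve survives

COMMENT="LGTM"
set -- --approve ${COMMENT:+--body "$COMMENT"}
echo "$#"    # 3: --approve --body LGTM
```

This keeps a single `gh pr review` invocation instead of branching on whether `$COMMENT` is empty.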
rails/git/pr-view.sh  (Executable file, 48 lines)
@@ -0,0 +1,48 @@
#!/bin/bash
# pr-view.sh - View pull request details on GitHub or Gitea
# Usage: pr-view.sh -n <pr_number>

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/detect-platform.sh"

# Parse arguments
PR_NUMBER=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -n|--number)
            PR_NUMBER="$2"
            shift 2
            ;;
        -h|--help)
            echo "Usage: pr-view.sh -n <pr_number>"
            echo ""
            echo "Options:"
            echo "  -n, --number   PR number (required)"
            echo "  -h, --help     Show this help"
            exit 0
            ;;
        *)
            echo "Unknown option: $1"
            exit 1
            ;;
    esac
done

if [[ -z "$PR_NUMBER" ]]; then
    echo "Error: PR number is required (-n)"
    exit 1
fi

detect_platform

if [[ "$PLATFORM" == "github" ]]; then
    gh pr view "$PR_NUMBER"
elif [[ "$PLATFORM" == "gitea" ]]; then
    tea pr "$PR_NUMBER"
else
    echo "Error: Unknown platform"
    exit 1
fi
rails/qa/debug-hook.sh  (Executable file, 15 lines)
@@ -0,0 +1,15 @@
#!/bin/bash
# Debug hook to identify available variables

echo "=== Hook Debug ===" >> /tmp/hook-debug.log
echo "Date: $(date)" >> /tmp/hook-debug.log
echo "All args: $@" >> /tmp/hook-debug.log
echo "Arg count: $#" >> /tmp/hook-debug.log
echo "Arg 1: ${1:-EMPTY}" >> /tmp/hook-debug.log
echo "Arg 2: ${2:-EMPTY}" >> /tmp/hook-debug.log
echo "Arg 3: ${3:-EMPTY}" >> /tmp/hook-debug.log
echo "Environment:" >> /tmp/hook-debug.log
env | grep -i file >> /tmp/hook-debug.log 2>/dev/null || true
env | grep -i path >> /tmp/hook-debug.log 2>/dev/null || true
env | grep -i tool >> /tmp/hook-debug.log 2>/dev/null || true
echo "==================" >> /tmp/hook-debug.log
rails/qa/qa-hook-handler.sh  (Executable file, 197 lines)
@@ -0,0 +1,197 @@
#!/bin/bash
# Universal QA hook handler with robust error handling
# Location: ~/.mosaic/rails/qa-hook-handler.sh

# Don't exit on unset variables initially to handle missing params gracefully
set -eo pipefail

PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
TOOL_NAME="${1:-}"
FILE_PATH="${2:-}"

# Debug logging
echo "[DEBUG] Script called with args: \$1='$1' \$2='$2'" >> "$PROJECT_ROOT/logs/qa-automation.log" 2>/dev/null || true

# Validate inputs
if [ -z "$FILE_PATH" ] || [ -z "$TOOL_NAME" ]; then
    echo "[ERROR] Missing required parameters: tool='$TOOL_NAME' file='$FILE_PATH'" >&2
    echo "[ERROR] Usage: $0 <tool> <file_path>" >&2
    # Log to file if possible
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [ERROR] Missing parameters - tool='$TOOL_NAME' file='$FILE_PATH'" >> "$PROJECT_ROOT/logs/qa-automation.log" 2>/dev/null || true
    exit 1
fi

# Now enable strict mode after parameter handling
set -u

# Skip non-JS/TS files
if ! [[ "$FILE_PATH" =~ \.(ts|tsx|js|jsx|mjs|cjs)$ ]]; then
    echo "[INFO] Skipping non-JS/TS file: $FILE_PATH"
    exit 0
fi

# Generate naming components
TIMESTAMP=$(date '+%Y%m%d-%H%M')
SANITIZED_NAME=$(echo "$FILE_PATH" | sed 's/\//-/g' | sed 's/^-//' | sed 's/\.\./\./g')
ITERATION=1

# Log file for debugging
LOG_FILE="$PROJECT_ROOT/logs/qa-automation.log"
mkdir -p "$(dirname "$LOG_FILE")"

# Function to detect Epic with fallback
detect_epic() {
    local file_path="$1"
    local epic=""

    # Try to detect Epic from path patterns
    case "$file_path" in
        */apps/frontend/src/components/*adapter*|*/apps/frontend/src/views/*adapter*)
            epic="E.3001-ADAPTER-CONFIG-SYSTEM"
            ;;
        */services/backend/src/adapters/*)
            epic="E.3001-ADAPTER-CONFIG-SYSTEM"
            ;;
        */services/backend/src/*)
            epic="E.2004-enterprise-data-synchronization-engine"
            ;;
        */services/syncagent-debezium/*|*/services/syncagent-n8n/*)
            epic="E.2004-enterprise-data-synchronization-engine"
            ;;
        *)
            epic=""  # General QA
            ;;
    esac

    echo "$epic"
}

# Detect Epic association
EPIC_FOLDER=$(detect_epic "$FILE_PATH")

# Function to setup report directory with creation if needed
setup_report_dir() {
    local epic="$1"
    local project_root="$2"
    local report_dir=""

    if [ -n "$epic" ]; then
        # Check if Epic directory exists
        local epic_dir="$project_root/docs/task-management/epics/active/$epic"

        if [ -d "$epic_dir" ]; then
            # Epic exists, use it
            report_dir="$epic_dir/reports/qa-automation/pending"
            echo "[INFO] Using existing Epic: $epic" | tee -a "$LOG_FILE"
        else
            # Epic doesn't exist, check if we should create it
            local epic_parent="$project_root/docs/task-management/epics/active"

            if [ -d "$epic_parent" ]; then
                # Parent exists, create Epic structure
                echo "[WARN] Epic $epic not found, creating structure..." | tee -a "$LOG_FILE"
                mkdir -p "$epic_dir/reports/qa-automation/pending"
                mkdir -p "$epic_dir/reports/qa-automation/in-progress"
                mkdir -p "$epic_dir/reports/qa-automation/done"
                mkdir -p "$epic_dir/reports/qa-automation/escalated"

                # Create Epic README
                cat > "$epic_dir/README.md" << EOF
# Epic: $epic

**Status**: Active
**Created**: $(date '+%Y-%m-%d')
**Purpose**: Auto-created by QA automation system

## Description
This Epic was automatically created to organize QA remediation reports.

## QA Automation
- Reports are stored in \`reports/qa-automation/\`
- Pending issues: \`reports/qa-automation/pending/\`
- Escalated issues: \`reports/qa-automation/escalated/\`
EOF
                report_dir="$epic_dir/reports/qa-automation/pending"
                echo "[INFO] Created Epic structure: $epic" | tee -a "$LOG_FILE"
            else
                # Epic structure doesn't exist, fall back to general
                echo "[WARN] Epic structure not found, using general QA" | tee -a "$LOG_FILE"
                report_dir="$project_root/docs/reports/qa-automation/pending"
            fi
        fi
    else
        # No Epic association, use general
        report_dir="$project_root/docs/reports/qa-automation/pending"
        echo "[INFO] No Epic association, using general QA" | tee -a "$LOG_FILE"
    fi

    # Ensure directory exists
    mkdir -p "$report_dir"
    echo "$report_dir"
}

# Setup report directory (capture only the last line which is the path)
REPORT_DIR=$(setup_report_dir "$EPIC_FOLDER" "$PROJECT_ROOT" | tail -1)

# Check for existing reports from same timestamp
check_existing_iteration() {
    local dir="$1"
    local name="$2"
    local timestamp="$3"
    local max_iter=0

    for file in "$dir"/${name}_${timestamp}_*_remediation_needed.md; do
        if [ -f "$file" ]; then
            # Extract iteration number
            local iter=$(echo "$file" | sed 's/.*_\([0-9]\+\)_remediation_needed\.md$/\1/')
            if [ "$iter" -gt "$max_iter" ]; then
                max_iter=$iter
            fi
        fi
    done

    echo $((max_iter + 1))
}

ITERATION=$(check_existing_iteration "$REPORT_DIR" "$SANITIZED_NAME" "$TIMESTAMP")

# Check if we're at max iterations
if [ "$ITERATION" -gt 5 ]; then
    echo "[ERROR] Max iterations (5) reached for $FILE_PATH" | tee -a "$LOG_FILE"
    # Move to escalated immediately
    REPORT_DIR="${REPORT_DIR/pending/escalated}"
    mkdir -p "$REPORT_DIR"
    ITERATION=5  # Cap at 5
fi

# Create report filename
REPORT_FILE="${SANITIZED_NAME}_${TIMESTAMP}_${ITERATION}_remediation_needed.md"
REPORT_PATH="$REPORT_DIR/$REPORT_FILE"

# Log the action
echo "[$(date '+%Y-%m-%d %H:%M:%S')] QA Hook: $TOOL_NAME on $FILE_PATH" | tee -a "$LOG_FILE"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Creating report: $REPORT_PATH" | tee -a "$LOG_FILE"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Epic: ${EPIC_FOLDER:-general}, Iteration: $ITERATION" | tee -a "$LOG_FILE"

# Create a task file for the QA agent instead of calling Claude directly
cat > "$REPORT_PATH" << EOF
# QA Remediation Report

**File:** $FILE_PATH
**Tool Used:** $TOOL_NAME
**Epic:** ${EPIC_FOLDER:-general}
**Iteration:** $ITERATION
**Generated:** $(date '+%Y-%m-%d %H:%M:%S')

## Status
Pending QA validation

## Next Steps
This report was created by the QA automation hook.
To process this report, run:
\`\`\`bash
claude -p "Use Task tool to launch universal-qa-agent for report: $REPORT_PATH"
\`\`\`
EOF

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Created report template at: $REPORT_PATH" | tee -a "$LOG_FILE"
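The handler's `SANITIZED_NAME` pipeline flattens a file path into a filename-safe token before it becomes part of the report name. A self-contained run of the same sed chain, on a hypothetical input path:

```shell
# Hypothetical edited-file path; the sed chain mirrors the handler's sanitization:
# replace every / with -, strip a leading -, collapse ".." into ".".
FILE_PATH="/apps/frontend/src/app.ts"
SANITIZED_NAME=$(echo "$FILE_PATH" | sed 's/\//-/g' | sed 's/^-//' | sed 's/\.\./\./g')
echo "$SANITIZED_NAME"    # apps-frontend-src-app.ts
```

The result is safe to combine with the timestamp and iteration suffix, e.g. `apps-frontend-src-app.ts_20240101-1200_1_remediation_needed.md`.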
rails/qa/qa-hook-stdin.sh  (Executable file, 59 lines)
@@ -0,0 +1,59 @@
#!/bin/bash
# QA Hook handler that reads from stdin
# Location: ~/.mosaic/rails/qa-hook-stdin.sh

set -eo pipefail

PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
LOG_FILE="$PROJECT_ROOT/logs/qa-automation.log"
mkdir -p "$(dirname "$LOG_FILE")"

# Read JSON from stdin
JSON_INPUT=$(cat)

# Log raw input for debugging
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Hook triggered with JSON:" >> "$LOG_FILE"
echo "$JSON_INPUT" >> "$LOG_FILE"

# Extract file path using jq if available, otherwise use grep/sed
if command -v jq &> /dev/null; then
    # Try multiple paths - tool_input.file_path is the actual structure from Claude Code
    FILE_PATH=$(echo "$JSON_INPUT" | jq -r '.tool_input.file_path // .tool_response.filePath // .file_path // .path // .file // empty' 2>/dev/null || echo "")
    TOOL_NAME=$(echo "$JSON_INPUT" | jq -r '.tool_name // .tool // .matcher // "Edit"' 2>/dev/null || echo "Edit")
else
    # Fallback parsing without jq - search in tool_input first
    FILE_PATH=$(echo "$JSON_INPUT" | grep -o '"tool_input"[^}]*}' | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
    if [ -z "$FILE_PATH" ]; then
        FILE_PATH=$(echo "$JSON_INPUT" | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
    fi
    if [ -z "$FILE_PATH" ]; then
        FILE_PATH=$(echo "$JSON_INPUT" | grep -o '"filePath"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"filePath"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
    fi
    TOOL_NAME=$(echo "$JSON_INPUT" | grep -o '"tool_name"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"tool_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
    if [ -z "$TOOL_NAME" ]; then
        TOOL_NAME=$(echo "$JSON_INPUT" | grep -o '"tool"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*"tool"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' | head -1)
    fi
    [ -z "$TOOL_NAME" ] && TOOL_NAME="Edit"
fi

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Extracted: tool=$TOOL_NAME file=$FILE_PATH" >> "$LOG_FILE"

# Validate we got a file path
if [ -z "$FILE_PATH" ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [ERROR] Could not extract file path from JSON" >> "$LOG_FILE"
    exit 0  # Exit successfully to not block Claude
fi

# Skip non-JS/TS files
if ! [[ "$FILE_PATH" =~ \.(ts|tsx|js|jsx|mjs|cjs)$ ]]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [INFO] Skipping non-JS/TS file: $FILE_PATH" >> "$LOG_FILE"
    exit 0
fi

# Call the main QA handler with extracted parameters
if [ -f ~/.mosaic/rails/qa-hook-handler.sh ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Calling QA handler for $FILE_PATH" >> "$LOG_FILE"
    ~/.mosaic/rails/qa-hook-handler.sh "$TOOL_NAME" "$FILE_PATH" 2>&1 | tee -a "$LOG_FILE"
else
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [ERROR] QA handler script not found" >> "$LOG_FILE"
fi
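The jq-less fallback branch above recovers `file_path` with plain grep/sed. A self-contained sketch against a hypothetical hook payload (the JSON shape is the one the script assumes, not taken from a spec):

```shell
# Hypothetical PostToolUse payload with the structure the hook expects.
JSON_INPUT='{"tool_name":"Edit","tool_input":{"file_path":"src/views/adapter.ts"}}'

# Same extraction as the script's fallback: isolate the "file_path":"..." pair,
# then strip everything but the quoted value.
FILE_PATH=$(echo "$JSON_INPUT" \
  | grep -o '"file_path"[[:space:]]*:[[:space:]]*"[^"]*"' \
  | sed 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/' \
  | head -1)
echo "$FILE_PATH"    # src/views/adapter.ts
```

This is brittle for escaped quotes or nested objects, which is why the script prefers jq when it is installed.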
rails/qa/qa-hook-wrapper.sh  (Executable file, 19 lines)
@@ -0,0 +1,19 @@
#!/bin/bash
# Wrapper script that handles hook invocation more robustly

# Get the most recently modified JS/TS file as a fallback
RECENT_FILE=$(find . -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) -mmin -1 2>/dev/null | head -1)

# Use provided file path or fallback to recent file
FILE_PATH="${2:-$RECENT_FILE}"
TOOL_NAME="${1:-Edit}"

# Log the attempt
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Hook wrapper called: tool=$TOOL_NAME file=$FILE_PATH" >> logs/qa-automation.log 2>/dev/null || true

# Call the actual QA handler if we have a file
if [ -n "$FILE_PATH" ]; then
    ~/.mosaic/rails/qa-hook-handler.sh "$TOOL_NAME" "$FILE_PATH"
else
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] No file path available for QA check" >> logs/qa-automation.log 2>/dev/null || true
fi
rails/qa/qa-queue-monitor.sh  (Executable file, 91 lines)
@@ -0,0 +1,91 @@
#!/bin/bash
# Monitor QA queues with graceful handling of missing directories
# Location: ~/.mosaic/rails/qa-queue-monitor.sh

PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

echo "=== QA Automation Queue Status ==="
echo "Project: $(basename "$PROJECT_ROOT")"
echo "Time: $(date '+%Y-%m-%d %H:%M:%S')"
echo

# Function to count files safely
count_files() {
    local dir="$1"
    if [ -d "$dir" ]; then
        ls "$dir" 2>/dev/null | wc -l
    else
        echo "0"
    fi
}

# Check Epic-specific queues
EPIC_BASE="$PROJECT_ROOT/docs/task-management/epics/active"
if [ -d "$EPIC_BASE" ]; then
    for EPIC_DIR in "$EPIC_BASE"/*/; do
        if [ -d "$EPIC_DIR" ]; then
            EPIC_NAME=$(basename "$EPIC_DIR")
            QA_DIR="$EPIC_DIR/reports/qa-automation"

            if [ -d "$QA_DIR" ]; then
                echo "Epic: $EPIC_NAME"
                echo "  Pending:     $(count_files "$QA_DIR/pending")"
                echo "  In Progress: $(count_files "$QA_DIR/in-progress")"
                echo "  Done:        $(count_files "$QA_DIR/done")"
                echo "  Escalated:   $(count_files "$QA_DIR/escalated")"

                # Show escalated files if any
                if [ -d "$QA_DIR/escalated" ] && [ "$(ls "$QA_DIR/escalated" 2>/dev/null | wc -l)" -gt 0 ]; then
                    echo "  ⚠️  Escalated Issues:"
                    for file in "$QA_DIR/escalated"/*_remediation_needed.md; do
                        if [ -f "$file" ]; then
                            echo "    - $(basename "$file")"
                        fi
                    done
                fi
                echo
            fi
        fi
    done
else
    echo "[WARN] No Epic structure found at: $EPIC_BASE"
    echo
fi

# Check general queue
GENERAL_DIR="$PROJECT_ROOT/docs/reports/qa-automation"
if [ -d "$GENERAL_DIR" ]; then
    echo "General (Non-Epic):"
    echo "  Pending:     $(count_files "$GENERAL_DIR/pending")"
    echo "  In Progress: $(count_files "$GENERAL_DIR/in-progress")"
    echo "  Done:        $(count_files "$GENERAL_DIR/done")"
    echo "  Escalated:   $(count_files "$GENERAL_DIR/escalated")"

    # Show escalated files
    if [ -d "$GENERAL_DIR/escalated" ] && [ "$(ls "$GENERAL_DIR/escalated" 2>/dev/null | wc -l)" -gt 0 ]; then
        echo "  ⚠️  Escalated Issues:"
        for file in "$GENERAL_DIR/escalated"/*_remediation_needed.md; do
            if [ -f "$file" ]; then
                echo "    - $(basename "$file")"
            fi
        done
    fi
else
    echo "[INFO] No general QA directory found (will be created on first use)"
fi

echo
echo "=== Recent Activity ==="
# Show last 5 log entries
if [ -f "$PROJECT_ROOT/logs/qa-automation.log" ]; then
    tail -5 "$PROJECT_ROOT/logs/qa-automation.log"
else
    echo "No activity log found"
fi

echo
echo "=== Queue Processing Tips ==="
echo "• View pending reports: ls -la $PROJECT_ROOT/docs/reports/qa-automation/pending/"
echo "• Check stale reports: find $PROJECT_ROOT -path '*/in-progress/*' -mmin +60"
echo "• Manual escalation: mv {report} {path}/escalated/"
echo "• View full log: tail -f $PROJECT_ROOT/logs/qa-automation.log"
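`count_files` above degrades to `0` for queue directories that do not exist yet instead of erroring. A quick standalone check of that behavior (temp directory and file names are illustrative):

```shell
# Same helper as in the monitor: count entries, or 0 if the directory is missing.
count_files() {
    local dir="$1"
    if [ -d "$dir" ]; then
        ls "$dir" 2>/dev/null | wc -l
    else
        echo "0"
    fi
}

tmp=$(mktemp -d)
touch "$tmp/a_remediation_needed.md" "$tmp/b_remediation_needed.md"
count_files "$tmp"            # 2 (wc -l may left-pad with spaces on some systems)
count_files "$tmp/missing"    # 0
```

The graceful fallback is what lets the monitor run before any QA report has ever been generated.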
rails/qa/remediation-hook-handler.sh  (Executable file, 66 lines)
@@ -0,0 +1,66 @@
#!/bin/bash
# Universal remediation hook handler with error recovery
# Location: ~/.mosaic/rails/remediation-hook-handler.sh

set -euo pipefail

PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
REPORT_FILE="${1:-}"

# Validate input
if [ -z "$REPORT_FILE" ] || [ ! -f "$REPORT_FILE" ]; then
    echo "[ERROR] Invalid or missing report file: $REPORT_FILE" >&2
    exit 1
fi

LOG_FILE="$PROJECT_ROOT/logs/qa-automation.log"
mkdir -p "$(dirname "$LOG_FILE")"

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Remediation triggered for: $REPORT_FILE" | tee -a "$LOG_FILE"

# Extract components from path and filename
BASE_NAME=$(basename "$REPORT_FILE" _remediation_needed.md)
DIR_PATH=$(dirname "$REPORT_FILE")

# Validate directory structure
if [[ ! "$DIR_PATH" =~ /pending$ ]]; then
    echo "[ERROR] Report not in pending directory: $DIR_PATH" | tee -a "$LOG_FILE"
    exit 1
fi

# Setup in-progress directory
IN_PROGRESS_DIR="${DIR_PATH/pending/in-progress}"

# Handle missing in-progress directory
if [ ! -d "$IN_PROGRESS_DIR" ]; then
    echo "[WARN] Creating missing in-progress directory: $IN_PROGRESS_DIR" | tee -a "$LOG_FILE"
    mkdir -p "$IN_PROGRESS_DIR"

    # Also ensure done and escalated exist
    mkdir -p "${DIR_PATH/pending/done}"
    mkdir -p "${DIR_PATH/pending/escalated}"
fi

# Move from pending to in-progress (with error handling)
if ! mv "$REPORT_FILE" "$IN_PROGRESS_DIR/" 2>/dev/null; then
    echo "[ERROR] Failed to move report to in-progress" | tee -a "$LOG_FILE"
    # Check if already in progress
    if [ -f "$IN_PROGRESS_DIR/$(basename "$REPORT_FILE")" ]; then
        echo "[WARN] Report already in progress, skipping" | tee -a "$LOG_FILE"
        exit 0
    fi
    exit 1
fi

# Create actions file
ACTIONS_FILE="${BASE_NAME}_remediation_actions.md"
ACTIONS_PATH="$IN_PROGRESS_DIR/$ACTIONS_FILE"

echo "[$(date '+%Y-%m-%d %H:%M:%S')] Starting remediation: $ACTIONS_PATH" | tee -a "$LOG_FILE"

# Trigger remediation agent
claude -p "Use Task tool to launch auto-remediation-agent for:
- Remediation Report: $IN_PROGRESS_DIR/$(basename "$REPORT_FILE")
- Actions File: $ACTIONS_PATH
- Max Iterations: 5
Process the report, create action plan using Sequential Thinking, research with Context7, and execute fixes systematically." 2>&1 | tee -a "$LOG_FILE"
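The pending-to-in-progress move relies on bash pattern substitution rather than string concatenation, so the same expression works for both Epic-scoped and general queue paths. A minimal sketch (the path is a hypothetical example):

```shell
# Bash ${var/pattern/replacement} rewrites the first match in the value.
DIR_PATH="docs/reports/qa-automation/pending"
IN_PROGRESS_DIR="${DIR_PATH/pending/in-progress}"
echo "$IN_PROGRESS_DIR"    # docs/reports/qa-automation/in-progress
```

Note this substitution is a bashism; it would fail under a strict POSIX `sh`, which is fine here since the scripts declare `#!/bin/bash`.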
templates/agent/AGENTS.md.template  (Normal file, 72 lines)
@@ -0,0 +1,72 @@
# ${PROJECT_NAME} — Agent Context

> Patterns, gotchas, and orchestrator integration for AI agents working on this project.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

<!-- Add project-specific patterns as you discover them -->
<!-- Examples: -->
<!-- - Use `httpx.AsyncClient` for external HTTP calls -->
<!-- - All routes require authentication via `Depends(get_current_user)` -->
<!-- - Config is loaded from environment variables via `settings.py` -->

## Common Gotchas

<!-- Add things that trip up agents -->
<!-- Examples: -->
<!-- - Remember to run migrations after schema changes -->
<!-- - Frontend env vars need NEXT_PUBLIC_ prefix -->
<!-- - Tests require a running PostgreSQL instance -->

## Quality Gates

**All must pass before any commit:**

```bash
${QUALITY_GATES}
```

## Orchestrator Integration

### Task Prefix
Use `${TASK_PREFIX}` as the prefix for orchestrated tasks (e.g., `${TASK_PREFIX}-SEC-001`).

### Package/Directory Names
<!-- List key directories the orchestrator needs to know about -->

| Directory | Purpose |
|-----------|---------|
| `${SOURCE_DIR}/` | Main source code |
| `tests/` | Test files |
| `docs/scratchpads/` | Working documents |

### Worker Checklist
When completing an orchestrated task:
1. Read the finding details from the report
2. Implement the fix following existing code patterns
3. Run quality gates (ALL must pass)
4. Commit with: `git commit -m "fix({finding_id}): brief description"`
5. Report result as JSON to orchestrator

### Post-Coding Review
After implementing changes, the orchestrator will run:
1. **Codex code review** — `~/.mosaic/rails/codex/codex-code-review.sh --uncommitted`
2. **Codex security review** — `~/.mosaic/rails/codex/codex-security-review.sh --uncommitted`
3. If blockers/critical findings: remediation task created
4. If clean: task marked done

## Directory-Specific Context

<!-- Add sub-AGENTS.md files in subdirectories if needed -->
<!-- Example: -->
<!-- - `src/api/AGENTS.md` — API-specific patterns -->
<!-- - `src/components/AGENTS.md` — Component conventions -->

## Testing Approaches

<!-- Document how tests should be written for this project -->
<!-- Examples: -->
<!-- - Unit tests use pytest with fixtures in conftest.py -->
<!-- - Integration tests require DATABASE_URL env var -->
<!-- - E2E tests use Playwright -->
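The `${...}` placeholders in the template are filled in at project bootstrap; the renderer itself is not part of this diff, but a minimal sed-based sketch of the substitution (with a hypothetical value) looks like:

```shell
# One line of the template, with a hypothetical TASK_PREFIX value substituted.
# In BRE, \$ matches a literal dollar sign and {} are literal braces.
TEMPLATE='Use ${TASK_PREFIX} as the prefix for orchestrated tasks.'
RENDERED=$(echo "$TEMPLATE" | sed 's/\${TASK_PREFIX}/DEMO/')
echo "$RENDERED"    # Use DEMO as the prefix for orchestrated tasks.
```

Tools like `envsubst` would do the same job across the whole file; the sed form is shown only because it has no dependencies.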
templates/agent/CLAUDE.md.template  (Normal file, 151 lines)
@@ -0,0 +1,151 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **Project:** ${PROJECT_DESCRIPTION}
> **Repository:** ${REPO_URL}

## Session Protocol

### Starting a Session
```bash
git pull --rebase
```

### Ending a Session
```bash
git pull --rebase
git add -A
git commit -m "feat: <what changed>"
git push
```

## Conditional Documentation Loading

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| Frontend development | `~/.mosaic/guides/frontend.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Authentication/Authorization | `~/.mosaic/guides/authentication.md` |
| Infrastructure/DevOps | `~/.mosaic/guides/infrastructure.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Frontend** | ${FRONTEND_STACK} |
| **Backend** | ${BACKEND_STACK} |
| **Database** | ${DATABASE_STACK} |
| **Testing** | ${TESTING_STACK} |
| **Deployment** | ${DEPLOYMENT_STACK} |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md            # This file
├── AGENTS.md            # Agent-specific patterns and gotchas
├── docs/
│   ├── scratchpads/     # Per-issue working documents
│   └── templates/       # Project templates (if any)
├── ${SOURCE_DIR}/       # Application source code
├── tests/               # Test files
└── ${CONFIG_FILES}      # Configuration files
```

## Development Workflow

### Branch Strategy
- `main` — Production-ready code
- `develop` — Integration branch (if applicable)
- `feat/<name>` — Feature branches
- `fix/<name>` — Bug fix branches

### Building
```bash
${BUILD_COMMAND}
```

### Testing
```bash
${TEST_COMMAND}
```

### Linting
```bash
${LINT_COMMAND}
```

### Type Checking
```bash
${TYPECHECK_COMMAND}
```

## Quality Gates

**All must pass before committing:**

```bash
${QUALITY_GATES}
```

## Issue Tracking

All work is tracked as issues in the project's git repository.

### Workflow
1. Check for assigned issues before starting work
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues only after successful testing

### Labels
Use consistent labels: `epic`, `feature`, `bug`, `task`, `documentation`, `security`, `breaking`

### Commits
```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```
Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Code Review

### Independent Review (Automated)
After completing code changes, run independent reviews:

```bash
# Code quality review (Codex)
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review (Codex)
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

**Fallback:** If Codex is unavailable, use Claude's built-in review skills.

### Review Checklist
See `~/.mosaic/guides/code-review.md` for the full review checklist.

## Secrets Management

**NEVER hardcode secrets.** Use `.env` files (gitignored) or a secrets manager.

```bash
# .env.example is committed (with placeholders)
# .env is NOT committed (contains real values)
```

## Multi-Agent Coordination

When multiple agents work on this project:
1. `git pull --rebase` before editing
2. `git pull --rebase` before pushing
3. If conflicts, **alert the user** — don't auto-resolve data conflicts
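The commit convention in the template can be checked mechanically before pushing; a sketch with an illustrative regex (not part of the template itself):

```shell
# Hypothetical commit subject following the <type>(#issue): convention.
MSG='feat(#123): Add login flow'

# The pattern covers the listed types and requires an issue reference.
if echo "$MSG" | grep -qE '^(feat|fix|docs|test|refactor|chore)\(#[0-9]+\): '; then
    echo "ok"
fi
```

A check like this could run as a `commit-msg` hook, though the template does not mandate one.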
74
templates/agent/SPEC.md
Normal file
74
templates/agent/SPEC.md
Normal file
@@ -0,0 +1,74 @@
# Agent Configuration Specification v1.0

> Defines what "well-configured" means for AI agent development across all coding projects.

## CLAUDE.md — Required Sections

### Tier 1 (Required — blocks audit pass)

1. **Conditional Documentation Loading** — Table linking to `~/.mosaic/guides/`
2. **Quality Gates** — Bash commands that must pass before commit (build, test, lint, typecheck)
3. **Build/Test/Lint commands** — How to build, test, and lint the project

### Tier 2 (Recommended — logged as warning)

4. Technology Stack table
5. Repository Structure tree
6. Commit format reference
7. Secrets management note
8. Multi-agent coordination note
9. **Campsite Rule** — "Touching it makes it yours" policy for code violations

### Tier 3 (Optional — nice to have)

10. Code Review section (Codex commands)
11. Issue Tracking workflow
12. Session Protocol (start/end)

## AGENTS.md — Required Sections

### Tier 1 (Required)

1. **Codebase Patterns** — At least one entry or a placeholder with instructive comments
2. **Common Gotchas** — At least one entry or a placeholder with instructive comments
3. **Quality Gates** — Duplicated for quick agent reference

### Tier 2 (Recommended)

4. Key Files table
5. Testing Approaches section

## Monorepo Sub-AGENTS.md

Required in any directory under `apps/`, `packages/`, `services/`, or `plugins/` that contains its own `package.json` or `pyproject.toml`.

Minimum content:

1. Purpose (one line)
2. Patterns (at least a placeholder)
3. Gotchas (at least a placeholder)

## Detection Markers

The `agent-lint.sh` tool checks for these markers:

| Check | Pass Criteria |
|-------|---------------|
| CLAUDE.md exists | File present at project root |
| AGENTS.md exists | File present at project root |
| Conditional loading | CLAUDE.md contains `agent-guides` or `Conditional` + `Loading` |
| Quality gates | CLAUDE.md contains `Quality Gates` or quality commands (test, lint, typecheck) |
| Monorepo sub-agents | Each app/package dir with its own manifest has an AGENTS.md |
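The pass criteria in the table amount to file-existence and grep checks. A rough sketch of that logic (illustrative only, not the real `agent-lint.sh`; `check_project` is a name invented here):

```bash
# Rough sketch of the marker checks (not the real agent-lint.sh):
# grep the project files for the pass criteria from the table above.
check_project() {
  dir=$1
  [ -f "$dir/CLAUDE.md" ] || { echo "FAIL: CLAUDE.md missing"; return 1; }
  [ -f "$dir/AGENTS.md" ] || { echo "FAIL: AGENTS.md missing"; return 1; }
  grep -Eq 'agent-guides|Conditional.*Loading' "$dir/CLAUDE.md" \
    || { echo "FAIL: no conditional loading"; return 1; }
  grep -Eq 'Quality Gates|typecheck|lint|test' "$dir/CLAUDE.md" \
    || { echo "FAIL: no quality gates"; return 1; }
  echo "PASS"
}

# Demo against a throwaway project directory
proj=$(mktemp -d)
printf '## Conditional Documentation Loading\n## Quality Gates\n' > "$proj/CLAUDE.md"
printf '## Codebase Patterns\n' > "$proj/AGENTS.md"
check_project "$proj"
```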

## Fragment Sources

Shared sections are maintained in `~/.mosaic/templates/agent/fragments/`:

| Fragment | Injects Section |
|----------|----------------|
| `conditional-loading.md` | Conditional Documentation Loading table |
| `commit-format.md` | Commit format convention |
| `secrets.md` | Secrets management note |
| `multi-agent.md` | Multi-agent coordination protocol |
| `code-review.md` | Code review commands |
| `campsite-rule.md` | Campsite Rule — fix violations you touch |
templates/agent/fragments/campsite-rule.md (new file, 18 lines)
@@ -0,0 +1,18 @@
## Campsite Rule (MANDATORY)

If you modify a line containing a policy violation, you MUST either:

1. **Fix the violation properly** in the same change, OR
2. **Flag it as a deferred item** with documented rationale

**"It was already there" is NEVER an acceptable justification** for perpetuating a violation in code you touched. Touching it makes it yours.

Examples of violations you must fix when you touch the line:

- `as unknown as Type` double assertions — use type guards instead
- `any` types — narrow to `unknown` with validation or define a proper interface
- Missing error handling — add it if you're modifying the surrounding code
- Suppressed linting rules (`// eslint-disable`) — fix the underlying issue

If the proper fix is too large for the current scope, you MUST:

- Create a TODO comment with an issue reference: `// TODO(#123): Replace double assertion with type guard`
- Document the deferral in your PR/commit description
- Never silently carry the violation forward
templates/agent/fragments/code-review.md (new file, 14 lines)
@@ -0,0 +1,14 @@
## Code Review

After completing code changes, run independent reviews:

```bash
# Code quality review (Codex)
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review (Codex)
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

**Fallback:** If Codex is unavailable, use Claude's built-in review skills.
See `~/.mosaic/guides/code-review.md` for the full review checklist.
templates/agent/fragments/commit-format.md (new file, 11 lines)
@@ -0,0 +1,11 @@
## Commits

```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`
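The convention above is regular enough to check mechanically. A minimal sketch, assuming a helper named `valid_subject` (invented here, not an existing Mosaic tool):

```bash
# Sketch: validate a commit subject against the <type>(#issue): format above.
# valid_subject is an illustrative helper, not part of Mosaic.
valid_subject() {
  printf '%s\n' "$1" | grep -Eq '^(feat|fix|docs|test|refactor|chore)\(#[0-9]+\): .+'
}

valid_subject "fix(#123): handle empty input" && echo "accepted"
valid_subject "update stuff" || echo "rejected"
```

A check like this could sit in a `commit-msg` hook, though the fragment itself only defines the format.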
templates/agent/fragments/conditional-loading.md (new file, 17 lines)
@@ -0,0 +1,17 @@
## Conditional Documentation Loading

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping a new project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Ralph autonomous development | `~/.mosaic/guides/ralph-autonomous.md` |
| Frontend development | `~/.mosaic/guides/frontend.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| TypeScript strict typing | `~/.mosaic/guides/typescript.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| Authentication/Authorization | `~/.mosaic/guides/authentication.md` |
| Infrastructure/DevOps | `~/.mosaic/guides/infrastructure.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |
| Secrets management (Vault) | `~/.mosaic/guides/vault-secrets.md` |
templates/agent/fragments/multi-agent.md (new file, 6 lines)
@@ -0,0 +1,6 @@
## Multi-Agent Coordination

When multiple agents work on this project:

1. `git pull --rebase` before editing
2. `git pull --rebase` before pushing
3. If conflicts, **alert the user** — don't auto-resolve data conflicts
templates/agent/fragments/secrets.md (new file, 10 lines)
@@ -0,0 +1,10 @@
## Secrets Management

**NEVER hardcode secrets.** Use `.env` files (gitignored) or a secrets manager.

```bash
# .env.example is committed (with placeholders)
# .env is NOT committed (contains real values)
```

Ensure `.gitignore` includes `.env*` (except `.env.example`).
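The `.env*` / `.env.example` split above can be verified with `git check-ignore`. A self-contained sketch using a throwaway repo:

```bash
# Sketch: confirm the ignore rules behave as described (throwaway repo).
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet 2>/dev/null
printf '.env*\n!.env.example\n' > .gitignore
touch .env .env.example
git check-ignore -q .env && echo ".env ignored"
git check-ignore -q .env.example || echo ".env.example tracked"
```

The negation pattern `!.env.example` is what lets the placeholder file stay committed while real values remain ignored.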
templates/agent/projects/django/AGENTS.md.template (new file, 79 lines)
@@ -0,0 +1,79 @@
# ${PROJECT_NAME} — Agent Context

> Guidelines for AI agents working on this Django project.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

- **Django project:** Standard Django project layout
- **Database:** PostgreSQL with Django ORM
- **API:** Django REST Framework for REST endpoints
- **Tasks:** Celery for background/async tasks
- **Config:** Environment variables via `.env` / `python-dotenv`

## Common Gotchas

- **Run migrations** after model changes: `python manage.py makemigrations && python manage.py migrate`
- **Import order matters:** Django setup must happen before model imports
- **Celery tasks** must be importable from `tasks.py` in each app
- **Settings module:** Check the `DJANGO_SETTINGS_MODULE` environment variable
- **Test database:** Tests use a separate database — check the `TEST` config in settings
- **Static files:** Run `collectstatic` before deployment

## Context Management

| Strategy | When |
|----------|------|
| **Spawn sub-agents** | Isolated coding tasks, research |
| **Batch operations** | Group related operations |
| **Check existing patterns** | Before writing new code |
| **Minimize re-reading** | Don't re-read files you just wrote |

## Quality Gates

**All must pass before any commit:**

```bash
ruff check . && mypy . && pytest tests/
```

## Orchestrator Integration

### Task Prefix
Use `${TASK_PREFIX}` for orchestrated tasks (e.g., `${TASK_PREFIX}-SEC-001`).

### Worker Checklist
1. Read the finding details from the report
2. Implement the fix following existing patterns
3. Run quality gates (ALL must pass)
4. Commit: `git commit -m "fix({finding_id}): brief description"`
5. Push: `git push origin {branch}`
6. Report the result as JSON

### Post-Coding Review
After implementing changes, the orchestrator will run:
1. **Codex code review** — `~/.mosaic/rails/codex/codex-code-review.sh --uncommitted`
2. **Codex security review** — `~/.mosaic/rails/codex/codex-security-review.sh --uncommitted`
3. If there are blocker/critical findings: a remediation task is created
4. If clean: the task is marked done

## Key Files

| File | Purpose |
|------|---------|
| `CLAUDE.md` | Project overview and conventions |
| `pyproject.toml` | Python dependencies and tool config |
| `${SOURCE_DIR}/manage.py` | Django management entry point |
| `.env.example` | Required environment variables |

## Testing Approaches

- Unit tests: `pytest` with fixtures in `conftest.py`
- API tests: DRF's `APITestCase` or pytest with an `api_client` fixture
- Database tests: Use the `@pytest.mark.django_db` decorator
- Mocking: `unittest.mock.patch` for external services
- Minimum 85% coverage for new code

---

_Model-agnostic. Works for Claude, Codex, GPT, Llama, etc._
templates/agent/projects/django/CLAUDE.md.template (new file, 168 lines)
@@ -0,0 +1,168 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **${PROJECT_DESCRIPTION}**

## Conditional Documentation Loading

| When working on... | Load this guide |
|---|---|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Backend** | Django / Django REST Framework |
| **Database** | PostgreSQL |
| **Task Queue** | Celery + Redis/Valkey |
| **Testing** | pytest + pytest-django |
| **Linting** | ruff |
| **Type Checking** | mypy |
| **Deployment** | Docker + docker-compose |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md              # This file
├── AGENTS.md              # Agent-specific patterns and gotchas
├── ${SOURCE_DIR}/         # Django project root
│   ├── manage.py
│   ├── ${PROJECT_SLUG}/   # Django settings module
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   └── apps/              # Django applications
├── tests/                 # Test files
├── docs/
│   ├── scratchpads/       # Per-issue working documents
│   └── templates/         # Project templates
├── pyproject.toml         # Python project configuration
├── .env.example
└── README.md
```

## Development Workflow

### Branch Strategy
- `main` — Production-ready code
- `develop` — Integration branch (if used)
- `feat/<name>` — Feature branches
- `fix/<name>` — Bug fix branches

### Starting Work
```bash
git pull --rebase
uv sync   # or pip install -e ".[dev]"
```

### Building / Running
```bash
python manage.py runserver   # Development server
python manage.py migrate     # Apply migrations
python manage.py shell       # Django shell
```

### Testing
```bash
pytest tests/         # Run all tests
pytest tests/ -x      # Stop on first failure
pytest tests/ --cov   # With coverage
```

### Linting & Type Checking
```bash
ruff check .    # Lint
ruff format .   # Format
mypy .          # Type check
```

## Quality Gates

**All must pass before committing:**

```bash
ruff check . && mypy . && pytest tests/
```
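One way to keep this gate from being skipped locally is to install it as a Git pre-commit hook. A sketch under the assumption that local hooks fit the workflow (the hook file is invented here; the ruff/mypy/pytest chain is the project's own gate):

```bash
# Sketch: wire the quality gate into a pre-commit hook.
# Throwaway repo for the demo; in a real project run this at the repo root.
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet 2>/dev/null
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
ruff check . && mypy . && pytest tests/
EOF
chmod +x .git/hooks/pre-commit
test -x .git/hooks/pre-commit && echo "hook installed"
```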

## Django Conventions

### Models
- All tunable parameters stored in the database with `get_effective_*()` fallback patterns
- Always create migrations for model changes: `python manage.py makemigrations`
- Include migrations in commits
- Use `models.TextChoices` or `models.IntegerChoices` for enum-like fields

### Views / API
- Use Django REST Framework for API endpoints
- Use serializers for validation
- Use ViewSets for standard CRUD
- Use permission classes for authorization

### Management Commands
- Place in `<app>/management/commands/`
- Use `self.stdout.write()` for output
- Handle interrupts gracefully

## Database

### Migration Workflow
```bash
# Create a migration after model changes
python manage.py makemigrations <app_name>

# Apply migrations
python manage.py migrate

# Check for missing migrations
python manage.py makemigrations --check
```

### Rules
- Always create migrations for schema changes
- Include migrations in commits
- Use `RunPython` for data migrations
- Use transactions for multi-table operations

## Code Review

### Independent Review (Automated)
After completing code changes, run independent reviews:

```bash
# Code quality review (Codex)
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review (Codex)
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

## Issue Tracking

### Workflow
1. Check for assigned issues before starting work
2. Create a scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues after successful testing

### Commits
```
<type>(#issue): Brief description

Fixes #123
```
Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Secrets Management

**NEVER hardcode secrets.**

```bash
# .env.example is committed (with placeholders)
# .env is NOT committed (contains real values)
# Load via python-dotenv or django-environ
```
templates/agent/projects/nestjs-nextjs/AGENTS.md.template (new file, 91 lines)
@@ -0,0 +1,91 @@
# ${PROJECT_NAME} — Agent Context

> Guidelines for AI agents working on this NestJS + Next.js monorepo.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

- **Monorepo structure:** pnpm workspaces + TurboRepo
  - `apps/api/` — NestJS backend
  - `apps/web/` — Next.js frontend
  - `packages/shared/` — Shared types and utilities
- **Database:** Prisma ORM — schema at `apps/api/prisma/schema.prisma`
- **Auth:** Configured in `apps/api/src/auth/`
- **API:** RESTful with DTOs for validation

## Common Gotchas

- **Always run `pnpm install`** after pulling — the lockfile changes frequently
- **Prisma generate** after schema changes: `pnpm --filter api prisma generate`
- **Environment variables:** Frontend vars need the `NEXT_PUBLIC_` prefix
- **Import paths:** Use the `@shared/` alias for shared package imports
- **Tests require a running database:** Set `DATABASE_URL` in `.env.test`
- **TurboRepo caching:** Run `pnpm clean` if builds behave unexpectedly
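The `NEXT_PUBLIC_` gotcha above can be made concrete with a one-liner: only prefixed variables should ever reach browser code. A minimal sketch (`.env.demo` and its contents are throwaway examples):

```bash
# Sketch of the NEXT_PUBLIC_ rule: filter an env file down to the
# variables that are safe to expose to the browser bundle.
tmp=$(mktemp -d)
printf 'NEXT_PUBLIC_API_URL=/api\nDATABASE_URL=postgres://secret\n' > "$tmp/.env.demo"
grep '^NEXT_PUBLIC_' "$tmp/.env.demo"
```

Anything the grep drops (here, `DATABASE_URL`) stays server-side only; Next.js applies the same prefix rule at build time.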

## Context Management

Context = tokens = cost. Be smart.

| Strategy | When |
|----------|------|
| **Spawn sub-agents** | Isolated coding tasks, research |
| **Batch operations** | Group related API calls |
| **Check existing patterns** | Before writing new code |
| **Minimize re-reading** | Don't re-read files you just wrote |

## Quality Gates

**All must pass before any commit:**

```bash
pnpm typecheck && pnpm lint && pnpm test
```

## Orchestrator Integration

### Task Prefix
Use `${TASK_PREFIX}` for orchestrated tasks (e.g., `${TASK_PREFIX}-SEC-001`).

### Worker Checklist
1. Read the finding details from the report
2. Implement the fix following existing patterns
3. Run quality gates (ALL must pass)
4. Commit: `git commit -m "fix({finding_id}): brief description"`
5. Push: `git push origin {branch}`
6. Report the result as JSON

### Post-Coding Review
After implementing changes, the orchestrator will run:
1. **Codex code review** — `~/.mosaic/rails/codex/codex-code-review.sh --uncommitted`
2. **Codex security review** — `~/.mosaic/rails/codex/codex-security-review.sh --uncommitted`
3. If there are blocker/critical findings: a remediation task is created
4. If clean: the task is marked done

## Workflow (Non-Negotiable)

```
1. Branch → git checkout -b feature/XX-description
2. Code   → TDD: write test (RED), implement (GREEN), refactor
3. Test   → pnpm test (must pass)
4. Push   → git push origin feature/XX-description
5. PR     → Create PR to develop (not main)
6. Review → Wait for approval or self-merge if authorized
7. Close  → Close related issues
```

**Never merge directly to develop without a PR.**
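The branch names in step 1 follow a checkable pattern. A minimal sketch of a naming guard (`branch_ok` is an invented helper, not an existing script):

```bash
# Sketch: check a branch name against the feature/fix convention above.
# branch_ok is illustrative, not part of this repo's tooling.
branch_ok() {
  printf '%s\n' "$1" | grep -Eq '^(feature|fix)/[A-Za-z0-9._-]+$'
}

branch_ok "feature/42-order-search" && echo "ok"
branch_ok "random-branch" || echo "bad name"
```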

## Key Files

| File | Purpose |
|------|---------|
| `CLAUDE.md` | Project overview and conventions |
| `apps/api/prisma/schema.prisma` | Database schema |
| `apps/api/src/` | Backend source |
| `apps/web/app/` | Frontend pages |
| `packages/shared/` | Shared types |
| `.env.example` | Required environment variables |

---

_Model-agnostic. Works for Claude, Codex, GPT, Llama, etc._
templates/agent/projects/nestjs-nextjs/CLAUDE.md.template (new file, 200 lines)
@@ -0,0 +1,200 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **${PROJECT_DESCRIPTION}**

## Conditional Documentation Loading

| When working on... | Load this guide |
|---|---|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Frontend development | `~/.mosaic/guides/frontend.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| TypeScript strict typing | `~/.mosaic/guides/typescript.md` |
| Authentication/Authorization | `~/.mosaic/guides/authentication.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Frontend** | Next.js + React + TailwindCSS + Shadcn/ui |
| **Backend** | NestJS + Prisma ORM |
| **Database** | PostgreSQL |
| **Testing** | Vitest + Playwright |
| **Monorepo** | pnpm workspaces + TurboRepo |
| **Deployment** | Docker + docker-compose |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md                # This file
├── AGENTS.md                # Agent-specific patterns and gotchas
├── apps/
│   ├── api/                 # NestJS backend
│   │   ├── src/
│   │   ├── prisma/
│   │   │   └── schema.prisma
│   │   └── Dockerfile
│   └── web/                 # Next.js frontend
│       ├── app/
│       ├── components/
│       └── Dockerfile
├── packages/
│   ├── shared/              # Shared types, utilities
│   ├── ui/                  # Shared UI components
│   └── config/              # Shared configuration
├── docs/
│   ├── scratchpads/         # Per-issue working documents
│   └── templates/           # Project templates
├── tests/                   # Integration/E2E tests
├── docker/                  # Docker configuration
├── .env.example
├── turbo.json
├── pnpm-workspace.yaml
└── README.md
```

## Development Workflow

### Branch Strategy
- `main` — Stable releases only
- `develop` — Active development (default working branch)
- `feature/*` — Feature branches from develop
- `fix/*` — Bug fix branches

### Starting Work
```bash
git checkout develop
git pull --rebase
pnpm install
```

### Building
```bash
pnpm build                # Build all packages
pnpm --filter api build   # Build API only
pnpm --filter web build   # Build web only
```

### Testing
```bash
pnpm test            # Run all tests
pnpm test:unit       # Unit tests (Vitest)
pnpm test:e2e        # E2E tests (Playwright)
pnpm test:coverage   # Coverage report
```

### Linting & Type Checking
```bash
pnpm lint        # ESLint
pnpm typecheck   # TypeScript type checking
pnpm format      # Prettier formatting
```

## Quality Gates

**All must pass before committing:**

```bash
pnpm typecheck && pnpm lint && pnpm test
```

## API Conventions

### NestJS Patterns
- Controllers handle HTTP, Services handle business logic
- Use DTOs with `class-validator` for request validation
- Use Guards for authentication/authorization
- Use Interceptors for response transformation
- Use Prisma for database access (no raw SQL)

### REST API Standards
- `GET /resource` — List (with pagination)
- `GET /resource/:id` — Get single
- `POST /resource` — Create
- `PATCH /resource/:id` — Update
- `DELETE /resource/:id` — Delete
- Return proper HTTP status codes (201 Created, 204 No Content, etc.)

## Frontend Conventions

### Next.js Patterns
- Use the App Router (`app/` directory)
- Server Components by default, `'use client'` only when needed
- Use Shadcn/ui components — don't create custom UI primitives
- Use TailwindCSS for styling — no CSS modules or styled-components

### Component Structure
```
components/
├── ui/         # Shadcn/ui components (auto-generated)
├── layout/     # Layout components (header, sidebar, etc.)
├── forms/      # Form components
└── features/   # Feature-specific components
```

## Database

### Prisma Workflow
```bash
# Generate the client after schema changes
pnpm --filter api prisma generate

# Create a migration
pnpm --filter api prisma migrate dev --name description

# Apply migrations
pnpm --filter api prisma migrate deploy

# Reset the database (dev only)
pnpm --filter api prisma migrate reset
```

### Rules
- Always create migrations for schema changes
- Include migrations in commits
- Never use raw SQL — use the Prisma client
- Use transactions for multi-table operations

## Code Review

### Independent Review (Automated)
After completing code changes, run independent reviews:

```bash
# Code quality review (Codex)
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review (Codex)
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

## Issue Tracking

### Workflow
1. Check for assigned issues before starting work
2. Create a scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues after successful testing

### Commits
```
<type>(#issue): Brief description

Fixes #123
```
Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Secrets Management

**NEVER hardcode secrets.**

```bash
# .env.example is committed (with placeholders)
# .env is NOT committed (contains real values)
```

Required environment variables are documented in `.env.example`.
templates/agent/projects/python-fastapi/AGENTS.md.template (new file, 43 lines)
@@ -0,0 +1,43 @@
# ${PROJECT_NAME} — Agent Context

> Patterns, gotchas, and guidelines for AI agents working on this project.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

- Use Pydantic models for all request/response validation
- Use dependency injection (`Depends()`) for shared resources
- Use `httpx.AsyncClient` for external HTTP calls
- Use `BackgroundTasks` for fire-and-forget operations
- Structured logging with `structlog` or `logging`

<!-- Add project-specific patterns as you discover them -->

## Common Gotchas

- Always run `uv sync` after pulling — dependencies may have changed
- Database migrations must be run before tests that use the DB
- Async tests need the `@pytest.mark.asyncio` decorator

<!-- Add project-specific gotchas -->

## Quality Gates

**All must pass before any commit:**

```bash
uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov
```

## Key Files

| File | Purpose |
|------|---------|
| `pyproject.toml` | Project configuration, dependencies |
| `src/${PROJECT_SLUG}/main.py` | Application entry point |

<!-- Add project-specific key files -->

## Testing Approaches

- Unit tests: pytest with fixtures in `conftest.py`
- API tests: `httpx.AsyncClient` with `TestClient`
- Coverage minimum: 85%

<!-- Document project-specific testing patterns -->
templates/agent/projects/python-fastapi/CLAUDE.md.template (new file, 135 lines)
@@ -0,0 +1,135 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **Project:** ${PROJECT_DESCRIPTION}
> **Repository:** ${REPO_URL}

## Conditional Documentation Loading

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Ralph autonomous development | `~/.mosaic/guides/ralph-autonomous.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Authentication/Authorization | `~/.mosaic/guides/authentication.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |
| Infrastructure/DevOps | `~/.mosaic/guides/infrastructure.md` |
| Secrets management (Vault) | `~/.mosaic/guides/vault-secrets.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Backend** | FastAPI |
| **Language** | Python 3.11+ |
| **Database** | ${DATABASE_STACK} |
| **Testing** | pytest + httpx |
| **Linting** | ruff |
| **Type Checking** | mypy (strict) |
| **Security** | bandit + pip-audit |
| **Package Manager** | uv |
| **Deployment** | ${DEPLOYMENT_STACK} |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md              # This file
├── AGENTS.md              # Agent-specific patterns and gotchas
├── src/                   # Source code
│   └── ${PROJECT_SLUG}/   # Main package
├── tests/                 # Test files
├── docs/
│   └── scratchpads/       # Per-issue working documents
├── pyproject.toml         # Project configuration
└── .env.example           # Environment template
```

## Development Workflow

### Setup
```bash
uv sync --all-extras
```

### Running
```bash
uv run uvicorn ${PROJECT_SLUG}.main:app --reload --port 8000
```

### Testing
```bash
uv run pytest         # Run all tests
uv run pytest --cov   # With coverage (85% min)
```

### Linting & Type Checking
```bash
uv run ruff check src/ tests/     # Lint
uv run ruff format --check src/   # Format check
uv run mypy src/                  # Type check
```

### Security
```bash
uv run bandit -r src/   # SAST scanning
uv run pip-audit        # Dependency vulnerabilities
```

## Quality Gates

**All must pass before committing:**

```bash
uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov
```

## Issue Tracking

All work is tracked as issues in the project's git repository.

1. Check for assigned issues before starting work
2. Create a scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues only after successful testing

## Commits

```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Code Review

After completing code changes, run independent reviews:

```bash
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```
|
||||||
|
```
|
||||||
|
|
||||||
|
See `~/.mosaic/guides/code-review.md` for the full review checklist.
|
||||||
|
|
||||||
|
## Secrets Management
|
||||||
|
|
||||||
|
**NEVER hardcode secrets.** Use `.env` files (gitignored) or a secrets manager.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# .env.example is committed (with placeholders)
|
||||||
|
# .env is NOT committed (contains real values)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Multi-Agent Coordination
|
||||||
|
|
||||||
|
When multiple agents work on this project:
|
||||||
|
1. `git pull --rebase` before editing
|
||||||
|
2. `git pull --rebase` before pushing
|
||||||
|
3. If conflicts, **alert the user** — don't auto-resolve data conflicts
|
||||||
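The one-line quality gate chain above can be awkward to retype; a small wrapper makes it reusable as a local pre-commit check. This is a minimal sketch, assuming the uv-based commands shown in the Quality Gates section; the optional runner prefix (pass `echo` for a dry run) is an illustrative convenience, not part of the template.

```shell
#!/usr/bin/env bash
# Sketch of a pre-commit wrapper for the quality gates above.
# Assumes the uv-based gate commands from the Quality Gates section.
set -euo pipefail

run_gates() {
  # Optional $1 is a command prefix, e.g. "echo" for a dry run.
  local runner="${1:-}"
  local gates=(
    "uv run ruff check src/ tests/"
    "uv run ruff format --check src/"
    "uv run mypy src/"
    "uv run pytest --cov"
  )
  local gate
  for gate in "${gates[@]}"; do
    echo ">> ${gate}"
    # Stop at the first failing gate, matching the && chain above.
    if ! ${runner} ${gate}; then
      echo "FAILED: ${gate}"
      return 1
    fi
  done
  echo "All quality gates passed"
}

# Dry run: print each gate instead of executing it.
run_gates echo
```

Run it with no prefix (`run_gates`) in a real checkout to execute the gates in order.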
39
templates/agent/projects/python-library/AGENTS.md.template
Normal file
@@ -0,0 +1,39 @@
# ${PROJECT_NAME} — Agent Context

> Patterns, gotchas, and guidelines for AI agents working on this project.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

- All public APIs must have type hints and docstrings
- Zero or minimal runtime dependencies — be conservative adding deps
- Exports defined in `__init__.py`
<!-- Add project-specific patterns as you discover them -->

## Common Gotchas

- Always run `uv sync` after pulling — dependencies may have changed
- Ensure backward compatibility — this is a library consumed by other projects
<!-- Add project-specific gotchas -->

## Quality Gates

**All must pass before any commit:**

```bash
uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov
```

## Key Files

| File | Purpose |
|------|---------|
| `pyproject.toml` | Project configuration, dependencies, build |
| `src/${PROJECT_SLUG}/__init__.py` | Public API exports |
<!-- Add project-specific key files -->

## Testing Approaches

- Unit tests: pytest with fixtures in `conftest.py`
- Coverage minimum: 85%
<!-- Document project-specific testing patterns -->
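The "run `uv sync` after pulling" gotcha above can be automated rather than remembered. A minimal sketch, assuming a standard `.git/hooks/post-merge` hook location; the optional prefix argument (pass `echo` for a dry run) is an illustrative convenience.

```shell
#!/usr/bin/env bash
# Hypothetical .git/hooks/post-merge hook: re-sync dependencies after
# every pull so the environment matches pyproject.toml.
set -euo pipefail

post_merge_sync() {
  # Optional $1 is a command prefix, e.g. "echo" for a dry run.
  ${1:-} uv sync --all-extras
}

# Dry run: show the command a real hook would execute.
post_merge_sync echo
```

Installed as `.git/hooks/post-merge` (and made executable), the real call would be `post_merge_sync` with no prefix.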
120
templates/agent/projects/python-library/CLAUDE.md.template
Normal file
@@ -0,0 +1,120 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **Project:** ${PROJECT_DESCRIPTION}
> **Repository:** ${REPO_URL}

## Conditional Documentation Loading

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Ralph autonomous development | `~/.mosaic/guides/ralph-autonomous.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Language** | Python 3.11+ |
| **Testing** | pytest |
| **Linting** | ruff |
| **Type Checking** | mypy (strict) |
| **Package Manager** | uv |
| **Build** | ${BUILD_SYSTEM} |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md              # This file
├── AGENTS.md              # Agent-specific patterns and gotchas
├── src/
│   └── ${PROJECT_SLUG}/   # Main package
├── tests/                 # Test files
├── docs/
│   └── scratchpads/       # Per-issue working documents
└── pyproject.toml         # Project configuration
```

## Development Workflow

### Setup

```bash
uv sync --all-extras
```

### Testing

```bash
uv run pytest        # Run all tests
uv run pytest --cov  # With coverage (85% min)
```

### Linting & Type Checking

```bash
uv run ruff check src/ tests/    # Lint
uv run ruff format --check src/  # Format check
uv run mypy src/                 # Type check
```

## Quality Gates

**All must pass before committing:**

```bash
uv run ruff check src/ tests/ && uv run ruff format --check src/ && uv run mypy src/ && uv run pytest --cov
```

## Library Conventions

- Zero or minimal runtime dependencies
- All public APIs must have type hints
- All public functions must have docstrings
- Exports defined in `__init__.py`
- Versioning via `pyproject.toml`

## Issue Tracking

All work is tracked as issues in the project's git repository.

1. Check for assigned issues before starting work
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues only after successful testing

## Commits

```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Code Review

After completing code changes, run independent reviews:

```bash
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

See `~/.mosaic/guides/code-review.md` for the full review checklist.

## Secrets Management

**NEVER hardcode secrets.** Use `.env` files (gitignored) or a secrets manager.

## Multi-Agent Coordination

When multiple agents work on this project:

1. `git pull --rebase` before editing
2. `git pull --rebase` before pushing
3. If conflicts, **alert the user** — don't auto-resolve data conflicts
41
templates/agent/projects/typescript/AGENTS.md.template
Normal file
@@ -0,0 +1,41 @@
# ${PROJECT_NAME} — Agent Context

> Patterns, gotchas, and guidelines for AI agents working on this project.
> **Update this file** when you discover reusable patterns or non-obvious requirements.

## Codebase Patterns

- TypeScript strict mode enabled — no `any`, no implicit types
- See `~/.mosaic/guides/typescript.md` for mandatory TypeScript rules
<!-- Add project-specific patterns as you discover them -->

## Common Gotchas

<!-- Add things that trip up agents -->
<!-- Examples: -->
<!-- - Frontend env vars need NEXT_PUBLIC_ prefix -->
<!-- - Tests require specific setup (describe in Testing section) -->

## Quality Gates

**All must pass before any commit:**

```bash
${QUALITY_GATES}
```

## Key Files

| File | Purpose |
|------|---------|
| `tsconfig.json` | TypeScript configuration |
| `package.json` | Dependencies and scripts |
<!-- Add project-specific key files -->

## Testing Approaches

<!-- Document how tests should be written for this project -->
<!-- Examples: -->
<!-- - Unit tests use Vitest with fixtures -->
<!-- - Component tests use React Testing Library -->
<!-- - E2E tests use Playwright -->
119
templates/agent/projects/typescript/CLAUDE.md.template
Normal file
@@ -0,0 +1,119 @@
# ${PROJECT_NAME} — Claude Code Instructions

> **Project:** ${PROJECT_DESCRIPTION}
> **Repository:** ${REPO_URL}

## Conditional Documentation Loading

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping this project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous tasks | `~/.mosaic/guides/orchestrator.md` |
| Ralph autonomous development | `~/.mosaic/guides/ralph-autonomous.md` |
| Frontend development | `~/.mosaic/guides/frontend.md` |
| TypeScript strict typing | `~/.mosaic/guides/typescript.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |

## Technology Stack

| Layer | Technology |
|-------|------------|
| **Language** | TypeScript (strict mode) |
| **Framework** | ${FRAMEWORK} |
| **Testing** | ${TESTING_STACK} |
| **Linting** | ESLint + Prettier |
| **Package Manager** | ${PACKAGE_MANAGER} |
| **Deployment** | ${DEPLOYMENT_STACK} |

## Repository Structure

```
${PROJECT_DIR}/
├── CLAUDE.md            # This file
├── AGENTS.md            # Agent-specific patterns and gotchas
├── src/                 # Source code
├── tests/               # Test files
├── docs/
│   └── scratchpads/     # Per-issue working documents
└── ${CONFIG_FILES}      # Configuration files
```

## Development Workflow

### Building

```bash
${BUILD_COMMAND}
```

### Testing

```bash
${TEST_COMMAND}
```

### Linting & Type Checking

```bash
${LINT_COMMAND}
${TYPECHECK_COMMAND}
```

## Quality Gates

**All must pass before committing:**

```bash
${QUALITY_GATES}
```

## Issue Tracking

All work is tracked as issues in the project's git repository.

1. Check for assigned issues before starting work
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues only after successful testing

## Commits

```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

## Code Review

After completing code changes, run independent reviews:

```bash
# Code quality review (Codex)
~/.mosaic/rails/codex/codex-code-review.sh --uncommitted

# Security review (Codex)
~/.mosaic/rails/codex/codex-security-review.sh --uncommitted
```

**Fallback:** If Codex is unavailable, use Claude's built-in review skills.
See `~/.mosaic/guides/code-review.md` for the full review checklist.

## Secrets Management

**NEVER hardcode secrets.** Use `.env` files (gitignored) or a secrets manager.

```bash
# .env.example is committed (with placeholders)
# .env is NOT committed (contains real values)
```

## Multi-Agent Coordination

When multiple agents work on this project:

1. `git pull --rebase` before editing
2. `git pull --rebase` before pushing
3. If conflicts, **alert the user** — don't auto-resolve data conflicts
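The coordination steps above (rebase before editing and pushing, stop on conflicts) can be captured in one helper. A minimal sketch using standard git commands; the optional prefix argument (pass `echo` for a dry run) is an illustrative convenience, not part of the template.

```shell
#!/usr/bin/env bash
# Sketch of the multi-agent coordination steps above: rebase before
# pushing, and stop on conflicts instead of auto-resolving them.
set -euo pipefail

sync_or_alert() {
  # Optional $1 is a command prefix, e.g. "echo" for a dry run.
  local runner="${1:-}"
  if ! ${runner} git pull --rebase; then
    echo "Rebase conflict: alert the user; do not auto-resolve" >&2
    return 1
  fi
  ${runner} git push
}

# Dry run: print the commands without touching the repository.
sync_or_alert echo
```

In a real checkout, an agent would call `sync_or_alert` with no prefix immediately before pushing.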
233
templates/agent/qa-remediation-actions.md
Normal file
@@ -0,0 +1,233 @@
---
file_path: {full_path}
file_name: {sanitized_name}
remediation_report: {FileName}_remediation_needed.md
timestamp_start: {ISO_timestamp}
timestamp_end: {ISO_timestamp}
iteration: {current_iteration}
status: {planning|researching|executing|validating|completed|failed}
success_metrics:
  typescript: {pass|fail|not_applicable}
  eslint: {pass|fail|not_applicable}
  prettier: {pass|fail|not_applicable}
  security: {pass|fail|not_applicable}
---

# Remediation Actions: {file_name}

## Planning Phase
**Start Time**: {timestamp}
**Status**: {in_progress|completed}

### Sequential Thinking Analysis
```
Thought 1: Analyzing reported issues - {analysis}
Thought 2: Determining fix priority - {priority reasoning}
Thought 3: Identifying dependencies - {dependency analysis}
Thought 4: Planning execution order - {order rationale}
Thought 5: Estimating complexity - {complexity assessment}
Thought 6: Validation approach - {how to verify success}
Total Thoughts: {n}
Decision: {chosen approach}
```

### Issues Prioritization
1. **Critical**: {issues that block compilation/execution}
2. **High**: {issues affecting functionality}
3. **Medium**: {code quality issues}
4. **Low**: {style/formatting issues}

## Research Phase
**Start Time**: {timestamp}
**Status**: {in_progress|completed}

### Context7 Documentation Retrieved
```javascript
// Query 1: TypeScript best practices
await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "/microsoft/TypeScript",
  topic: "{specific topic}",
  tokens: 3000
});
// Result: {summary of findings}

// Query 2: ESLint rules
await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "/eslint/eslint",
  topic: "{specific rules}",
  tokens: 2000
});
// Result: {summary of findings}

// Query 3: Framework patterns
await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "{framework library}",
  topic: "{specific patterns}",
  tokens: 2500
});
// Result: {summary of findings}
```

### Relevant Patterns Identified
- **Pattern 1**: {description and application}
- **Pattern 2**: {description and application}
- **Best Practice**: {relevant best practice from docs}

## Action Plan
**Generated**: {timestamp}
**Total Actions**: {count}

### Planned Actions
1. [ ] **Fix TypeScript interface issue**
   - **Issue**: Property 'onClick' missing from ButtonProps
   - **Solution**: Add optional onClick property with proper typing
   - **Rationale**: Maintains backward compatibility while fixing type error
   - **Rollback**: Remove property if breaks existing usage
   - **Estimated Impact**: Low risk, improves type safety

2. [ ] **Resolve ESLint violations**
   - **Issue**: no-unused-vars on line 45
   - **Solution**: Remove unused import or implement usage
   - **Rationale**: Clean code practice, reduces bundle size
   - **Rollback**: Re-add if functionality needed
   - **Estimated Impact**: No functional change

3. [ ] **Apply Prettier formatting**
   - **Issue**: Inconsistent formatting throughout file
   - **Solution**: Run prettier --write
   - **Rationale**: Maintains code style consistency
   - **Rollback**: Not needed (formatting only)
   - **Estimated Impact**: No functional change

## Execution Phase
**Start Time**: {timestamp}
**Status**: {in_progress|completed}

### Actions Taken

#### Action 1: Fix TypeScript interface issue
**Timestamp**: {ISO timestamp}
**Tool Used**: Edit
**Target**: Lines 10-15 of ButtonProps interface
**Change Applied**:
```typescript
// Before
interface ButtonProps {
  label: string;
  disabled?: boolean;
}

// After
interface ButtonProps {
  label: string;
  disabled?: boolean;
  onClick?: React.MouseEventHandler<HTMLButtonElement>;
}
```
**Result**: ✅ Success - TypeScript compilation now passes
**Side Effects**: None observed
**Status**: [x] Completed

#### Action 2: Resolve ESLint violations
**Timestamp**: {ISO timestamp}
**Tool Used**: MultiEdit
**Target**: Multiple locations with violations
**Changes Applied**:
- Removed unused import on line 3
- Fixed missing dependency in useEffect on line 67
- Added missing return type on line 89
**Result**: ✅ Success - ESLint now reports 0 errors
**Side Effects**: None
**Status**: [x] Completed

#### Action 3: Apply Prettier formatting
**Timestamp**: {ISO timestamp}
**Tool Used**: Bash
**Command**: `npx prettier --write {file_path}`
**Result**: ✅ Success - File formatted
**Lines Changed**: 47
**Status**: [x] Completed

### Unexpected Issues Encountered
{Any issues that arose during execution}

### Adjustments Made
{Any deviations from the original plan and why}

## Validation Phase
**Start Time**: {timestamp}
**Status**: {in_progress|completed}

### Re-run QA Checks

#### TypeScript Validation
```bash
npx tsc --noEmit {file_path}
```
**Result**: ✅ PASS - No errors
**Details**: Compilation successful, all types resolved

#### ESLint Validation
```bash
npx eslint {file_path}
```
**Result**: ✅ PASS - 0 errors, 2 warnings
**Warnings**:
- Line 34: Prefer const over let (prefer-const)
- Line 78: Missing explicit return type (explicit-function-return-type)

#### Prettier Validation
```bash
npx prettier --check {file_path}
```
**Result**: ✅ PASS - File formatted correctly

#### Security Scan
```bash
# Security check command
```
**Result**: ✅ PASS - No vulnerabilities detected

### Overall Validation Status
- **All Critical Issues**: ✅ Resolved
- **All High Issues**: ✅ Resolved
- **Medium Issues**: ⚠️ 2 warnings remain (non-blocking)
- **Low Issues**: ✅ Resolved

## Next Steps

### If Successful (All Pass)
- [x] Move reports to done/
- [x] Archive after 7 days
- [x] Log success metrics

### If Failed (Issues Remain)
- [ ] Check iteration count: {current}/5
- [ ] If < 5: Plan next iteration approach
- [ ] If >= 5: Escalate with detailed analysis

### Next Iteration Planning (If Needed)
**Remaining Issues**: {list}
**New Approach**: {different strategy based on learnings}
**Sequential Thinking**:
```
Thought 1: Why did previous approach fail?
Thought 2: What alternative solutions exist?
Thought 3: Which approach has highest success probability?
Decision: {new approach}
```

## Summary
**Total Execution Time**: {duration}
**Actions Completed**: {n}/{total}
**Success Rate**: {percentage}
**Final Status**: {completed|needs_iteration|escalated}

## Lessons Learned
{Any insights that could help future remediation}

---
*Generated by Auto-Remediation Agent*
*Start: {ISO timestamp}*
*End: {ISO timestamp}*
*Agent Version: 1.0.0*
117
templates/agent/qa-remediation-needed.md
Normal file
@@ -0,0 +1,117 @@
---
file_path: {full_path}
file_name: {sanitized_name}
epic_association: {E.XXXX-name or "general"}
epic_exists: {true|false|created}
timestamp: {YYYYMMDD-HHMM}
iteration: {1-5}
max_iterations: 5
tool_triggered: {Edit|MultiEdit|Write}
severity: {CRITICAL|HIGH|MEDIUM|LOW}
status: pending
error_context: {any errors during creation}
---

# Remediation Needed: {file_name}

## Environment Context
- **Epic Status**: {existing|created|general}
- **Report Location**: {full path to this report}
- **Previous Iterations**: {list if any}
- **Project Type**: {React Frontend|NestJS Backend|Node.js Library}

## Issues Detected

### TypeScript Compilation
**Status**: ❌ FAILED | ✅ PASSED
**Errors Found**: {count}

```typescript
// Error details with line numbers
{specific errors}
```

**Context7 Documentation**:
- {relevant TypeScript docs retrieved}

### ESLint Violations
**Status**: ❌ ERRORS | ⚠️ WARNINGS | ✅ CLEAN
**Issues Found**: {count}

```javascript
// Violation details with rule names
{specific violations}
```

**Context7 Documentation**:
- {relevant ESLint rule docs}

### Prettier Formatting
**Status**: ❌ NEEDS FORMATTING | ✅ FORMATTED
**Changes Required**: {yes|no}

```diff
// Formatting differences
- {original}
+ {formatted}
```

### Security Issues
**Status**: ❌ VULNERABILITIES | ✅ SECURE
**Critical Issues**: {count}

- {list of security concerns}

## Recommended Fixes

### Priority 1: Critical (Must Fix)
1. **{Issue}**: {specific fix with code example}
   - Rationale: {why this fix}
   - Context7 Reference: {documentation link/content}

### Priority 2: High (Should Fix)
1. **{Issue}**: {specific fix}
   - Rationale: {reasoning}
   - Auto-fixable: {yes|no}

### Priority 3: Medium (Consider Fixing)
1. **{Issue}**: {improvement suggestion}
   - Impact: {what this improves}

## Sequential Thinking Analysis
```
Thought 1: {initial analysis}
Thought 2: {problem identification}
Thought 3: {solution approach}
Thought 4: {validation strategy}
Decision: {recommended approach}
```

## Auto-Fix Availability
- **TypeScript**: {percentage auto-fixable}
- **ESLint**: {percentage auto-fixable with --fix}
- **Prettier**: ✅ 100% auto-fixable
- **Overall**: {percentage requiring manual intervention}

## Execution Plan
1. [ ] Apply Prettier formatting
2. [ ] Run ESLint with --fix flag
3. [ ] Fix TypeScript compilation errors
4. [ ] Address security vulnerabilities
5. [ ] Re-run validation suite

## Risk Assessment
- **Breaking Changes**: {none|low|medium|high}
- **Side Effects**: {list potential impacts}
- **Dependencies**: {any new dependencies needed}

## Manual Actions Required
{If any issues cannot be auto-fixed, list specific manual interventions needed}

## Notes
{Additional context, warnings, or information}

---
*Generated by Universal QA Agent*
*Timestamp: {ISO timestamp}*
*Agent Version: 1.0.0*
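The execution plan above maps onto a simple fix-then-revalidate sequence. This is a minimal sketch, assuming the npx commands shown in the report sections; `FILE` and the dry-run switch are illustrative placeholders, not part of the template.

```shell
#!/usr/bin/env bash
# Sketch of the execution plan: auto-fix passes followed by a
# compilation re-check, for one target file. DRY_RUN=1 (the default
# here) prints each step instead of running it.
set -euo pipefail

FILE="${FILE:-src/example.ts}"   # placeholder target file
DRY_RUN="${DRY_RUN:-1}"

step() {
  if [ "${DRY_RUN}" = "1" ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

step npx prettier --write "${FILE}"   # 1. apply formatting
step npx eslint --fix "${FILE}"       # 2. auto-fixable lint rules
step npx tsc --noEmit "${FILE}"       # 3. re-check compilation
echo "validation pass queued for ${FILE}"
```

A real run (`DRY_RUN=0`) would be followed by the remaining plan steps: security checks and the full validation suite.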
17
templates/agent/sub-agents.md.template
Normal file
@@ -0,0 +1,17 @@
# ${DIRECTORY_NAME} — Agent Context

> ${DIRECTORY_PURPOSE}

## Patterns

<!-- Add module-specific patterns as you discover them -->

## Gotchas

<!-- Add things that trip up agents in this module -->

## Key Files

| File | Purpose |
|------|---------|
<!-- Add important files in this directory -->