feat: Initial agent-skills repo — 4 adapted skills for Mosaic Stack

Skills included:
- pr-reviewer: Adapted for Gitea/GitHub via platform-aware scripts
  (dropped fetch_pr_data.py and add_inline_comment.py, kept generate_review_files.py)
- code-review-excellence: Methodology and checklists (React, TS, Python, etc.)
- vercel-react-best-practices: 57 rules for React/Next.js performance
- tailwind-design-system: Tailwind CSS v4 patterns, CVA, design tokens

New shell scripts added to ~/.claude/scripts/git/:
- pr-diff.sh: Get PR diff (GitHub gh / Gitea API)
- pr-metadata.sh: Get PR metadata as normalized JSON

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Jason Woltje
Date: 2026-02-16 16:03:39 -06:00
Commit: d9bcdc4a8d
87 changed files with 19317 additions and 0 deletions

skills/pr-reviewer/SKILL.md (new file, 194 lines)
---
name: pr-reviewer
description: >
Structured PR code review workflow. Use when asked to "review this PR",
"code review", "review pull request", or "check this PR". Works with both
GitHub and Gitea via platform-aware scripts. Fetches PR metadata and diff,
analyzes against quality criteria, generates structured review files, and
posts feedback only after explicit approval.
version: 2.0.0
category: code-review
triggers:
- review pr
- code review
- review pull request
- check pr
- pr review
author: Adapted from SpillwaveSolutions/pr-reviewer-skill
license: MIT
tags:
- code-review
- pull-request
- quality-assurance
- gitea
- github
---
# PR Reviewer
Structured, two-stage PR code review workflow. Nothing is posted until explicit user approval.
## Prerequisites
- `~/.claude/scripts/git/` — Platform-aware git scripts must be available
- `python3` — For review file generation
- Current working directory must be inside the target git repository
## Workflow
### Stage 1: Collect and Analyze
#### 1. Gather PR Data
Run from inside the repo directory:
```bash
# Get PR metadata as JSON
~/.claude/scripts/git/pr-metadata.sh -n <PR_NUMBER> -o /tmp/pr-review/metadata.json
# Get PR diff
~/.claude/scripts/git/pr-diff.sh -n <PR_NUMBER> -o /tmp/pr-review/diff.patch
# View PR details (human-readable)
~/.claude/scripts/git/pr-view.sh -n <PR_NUMBER>
```
#### 2. Analyze the Changes
With the diff and metadata collected, analyze the PR against these criteria:
| Category | Key Questions |
|----------|--------------|
| **Functionality** | Does the code solve the stated problem? Edge cases handled? |
| **Security** | OWASP top 10? Secrets in code? Input validation? |
| **Testing** | Tests exist? Cover happy paths and edge cases? |
| **Performance** | Efficient algorithms? N+1 queries? Bundle size impact? |
| **Readability** | Clear naming? DRY? Consistent with codebase patterns? |
| **Architecture** | Follows project conventions? Proper separation of concerns? |
| **PR Quality** | Focused scope? Clean commits? Clear description? |
**Priority levels for findings:**
- **Blocker** — Must fix before merge
- **Important** — Should be addressed
- **Nit** — Nice to have, optional
- **Suggestion** — Consider for future
- **Question** — Clarification needed
- **Praise** — Good work worth calling out
#### 3. Generate Review Files
Create a findings JSON and run the generator:
```bash
python3 ~/.claude/skills/pr-reviewer/scripts/generate_review_files.py \
    /tmp/pr-review --findings /tmp/pr-review/findings.json
```
This creates:
- `pr/review.md` — Detailed analysis (internal use)
- `pr/human.md` — Clean version for posting (no emojis, concise)
- `pr/inline.md` — Proposed inline comments with code context
**Findings JSON schema:**
```json
{
  "summary": "Overall assessment of the PR",
  "metadata": {
    "repository": "owner/repo",
    "number": 123,
    "title": "PR title",
    "author": "username",
    "head_branch": "feature-branch",
    "base_branch": "main"
  },
  "blockers": [
    {
      "category": "Security",
      "issue": "Brief description",
      "file": "src/path/file.ts",
      "line": 45,
      "details": "Explanation of the problem",
      "fix": "Suggested solution",
      "code_snippet": "relevant code"
    }
  ],
  "important": [],
  "nits": [],
  "suggestions": ["Consider adding..."],
  "questions": ["Is this intended to..."],
  "praise": ["Excellent test coverage"],
  "inline_comments": [
    {
      "file": "src/path/file.ts",
      "line": 42,
      "comment": "Consider edge case",
      "code_snippet": "relevant code",
      "start_line": 40,
      "end_line": 44
    }
  ]
}
```
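As a minimal illustration (not part of the skill's scripts), a findings file matching this schema can be assembled and written out in Python before invoking the generator; every path, message, and number below is a placeholder, not output from a real review:

```python
import json
from pathlib import Path

# Minimal findings.json matching the schema above; all values are
# illustrative placeholders.
findings = {
    "summary": "Small, focused change; one security concern.",
    "metadata": {
        "repository": "owner/repo",
        "number": 123,
        "title": "Add login endpoint",
        "author": "username",
        "head_branch": "feature-branch",
        "base_branch": "main",
    },
    "blockers": [
        {
            "category": "Security",
            "issue": "Unparameterized SQL query",
            "file": "src/db/queries.py",
            "line": 45,
            "details": "User input is concatenated into the query string.",
            "fix": "Use parameterized queries.",
            "code_snippet": "db.execute(f\"... {user_id}\")",
        }
    ],
    "important": [],
    "nits": [],
    "suggestions": [],
    "questions": [],
    "praise": ["Clear commit history"],
    "inline_comments": [],
}

out = Path("/tmp/pr-review/findings.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(findings, indent=2))
print(out)
```

Empty categories stay as empty arrays so the generator can iterate them without special cases.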
### Stage 2: Review and Post
#### 4. Present to User
Show the user:
1. The summary from `pr/review.md`
2. Count of blockers, important issues, nits
3. Overall recommendation (approve vs request changes)
4. Ask for approval before posting
**NOTHING is posted without explicit user consent.**
#### 5. Post the Review
After user approval:
```bash
# Option A: Approve
~/.claude/scripts/git/pr-review.sh -n <PR_NUMBER> -a approve -c "$(cat /tmp/pr-review/pr/human.md)"
# Option B: Request changes
~/.claude/scripts/git/pr-review.sh -n <PR_NUMBER> -a request-changes -c "$(cat /tmp/pr-review/pr/human.md)"
# Option C: Comment only (no verdict)
~/.claude/scripts/git/pr-review.sh -n <PR_NUMBER> -a comment -c "$(cat /tmp/pr-review/pr/human.md)"
```
## Review Criteria Reference
See `references/review_criteria.md` for the complete checklist.
### Quick Checklist
- [ ] Code compiles and passes CI
- [ ] Tests cover new functionality
- [ ] No hardcoded secrets or credentials
- [ ] Error handling is appropriate
- [ ] No obvious performance issues (N+1, unnecessary re-renders, large bundles)
- [ ] Follows project CLAUDE.md and AGENTS.md conventions
- [ ] Commit messages follow project format
- [ ] PR description explains the "why"
## Best Practices
### Communication
- Frame feedback as suggestions, not demands
- Explain WHY something matters, not just WHAT is wrong
- Acknowledge good work — praise is part of review
- Prioritize: blockers first, nits last
### Efficiency
- Focus on what automated tools can't catch (logic, architecture, intent)
- Don't nitpick formatting — that's the linter's job
- For large PRs (>400 lines), suggest splitting
- Review in logical chunks (API layer, then UI, then tests)
### Platform Notes
- **Gitea**: Inline code comments are posted as regular PR comments with file/line references in the text
- **GitHub**: Full inline comment API available
- Both platforms support approve/reject/comment review actions via our scripts

(new file, 368 lines)
# GitHub CLI (gh) Guide for PR Reviews
This reference provides quick commands and patterns for accessing PR data using the GitHub CLI.
## Prerequisites
Install GitHub CLI: https://cli.github.com/
Authenticate:
```bash
gh auth login
```
## Basic PR Information
### View PR Details
```bash
gh pr view <number> --repo <owner>/<repo>
# With JSON output
gh pr view <number> --repo <owner>/<repo> --json number,title,body,state,author,headRefName,baseRefName
```
### View PR Diff
```bash
gh pr diff <number> --repo <owner>/<repo>
# Save to file
gh pr diff <number> --repo <owner>/<repo> > pr_diff.patch
```
### List PR Files
```bash
gh pr view <number> --repo <owner>/<repo> --json files --jq '.files[].path'
```
## PR Comments and Reviews
### Get PR Comments (Review Comments on Code)
```bash
gh api /repos/<owner>/<repo>/pulls/<number>/comments
# Paginate through all comments
gh api /repos/<owner>/<repo>/pulls/<number>/comments --paginate
# With JQ filtering
gh api /repos/<owner>/<repo>/pulls/<number>/comments --jq '.[] | {path, line, body, user: .user.login}'
```
### Get PR Reviews
```bash
gh api /repos/<owner>/<repo>/pulls/<number>/reviews
# With formatted output
gh api /repos/<owner>/<repo>/pulls/<number>/reviews --jq '.[] | {state, user: .user.login, body}'
```
### Get Issue Comments (General PR Comments)
```bash
gh api /repos/<owner>/<repo>/issues/<number>/comments
```
## Commit Information
### List PR Commits
```bash
gh api /repos/<owner>/<repo>/pulls/<number>/commits
# Get commit messages
gh api /repos/<owner>/<repo>/pulls/<number>/commits --jq '.[] | {sha: .sha[0:7], message: .commit.message}'
# Get latest commit SHA
gh api /repos/<owner>/<repo>/pulls/<number>/commits --jq '.[-1].sha'
```
### Get Commit Details
```bash
gh api /repos/<owner>/<repo>/commits/<sha>
# Get commit diff
gh api /repos/<owner>/<repo>/commits/<sha> -H "Accept: application/vnd.github.diff"
```
## Branches
### Get Branch Information
```bash
# Source branch (head)
gh pr view <number> --repo <owner>/<repo> --json headRefName --jq '.headRefName'
# Target branch (base)
gh pr view <number> --repo <owner>/<repo> --json baseRefName --jq '.baseRefName'
```
### Compare Branches
```bash
gh api /repos/<owner>/<repo>/compare/<base>...<head>
# Get files changed
gh api /repos/<owner>/<repo>/compare/<base>...<head> --jq '.files[] | {filename, status, additions, deletions}'
```
## Related Issues and Tickets
### Get Linked Issues
```bash
# Get PR body which may contain issue references
gh pr view <number> --repo <owner>/<repo> --json body --jq '.body'
# Search for issue references (#123 format)
gh pr view <number> --repo <owner>/<repo> --json body --jq '.body' | grep -oE '#[0-9]+'
```
### Get Issue Details
```bash
gh issue view <number> --repo <owner>/<repo>
# JSON format
gh issue view <number> --repo <owner>/<repo> --json number,title,body,state,labels,assignees
```
### Get Issue Comments
```bash
gh api /repos/<owner>/<repo>/issues/<number>/comments
```
## PR Status Checks
### Get PR Status
```bash
gh pr checks <number> --repo <owner>/<repo>
# JSON format
gh api /repos/<owner>/<repo>/commits/<sha>/status
```
### Get Check Runs
```bash
gh api /repos/<owner>/<repo>/commits/<sha>/check-runs
```
## Adding Comments
### Add Inline Code Comment
```bash
gh api -X POST /repos/<owner>/<repo>/pulls/<number>/comments \
  -f body="Your comment here" \
  -f commit_id="<sha>" \
  -f path="src/file.py" \
  -f side="RIGHT" \
  -F line=42   # -F sends a typed (integer) value; -f would send a string
```
### Add Multi-line Inline Comment
```bash
gh api -X POST /repos/<owner>/<repo>/pulls/<number>/comments \
  -f body="Multi-line comment" \
  -f commit_id="<sha>" \
  -f path="src/file.py" \
  -f side="RIGHT" \
  -F start_line=40 \
  -f start_side="RIGHT" \
  -F line=45
```
### Add General PR Comment
```bash
gh pr comment <number> --repo <owner>/<repo> --body "Your comment"
# Or via API
gh api -X POST /repos/<owner>/<repo>/issues/<number>/comments \
  -f body="Your comment"
```
## Creating a Review
### Create Review with Comments
```bash
# The comments field is a JSON array, so pass the body via --input
# instead of -f (which would send it as a plain string)
gh api -X POST /repos/<owner>/<repo>/pulls/<number>/reviews --input - <<'EOF'
{
  "body": "Overall review comments",
  "event": "COMMENT",
  "commit_id": "<sha>",
  "comments": [
    {"path": "src/file.py", "line": 42, "body": "Comment on line 42"}
  ]
}
EOF
```
### Submit Review (Approve/Request Changes)
```bash
# Approve
gh api -X POST /repos/<owner>/<repo>/pulls/<number>/reviews \
  -f body="LGTM!" \
  -f event="APPROVE" \
  -f commit_id="<sha>"
# Request changes
gh api -X POST /repos/<owner>/<repo>/pulls/<number>/reviews \
  -f body="Please address these issues" \
  -f event="REQUEST_CHANGES" \
  -f commit_id="<sha>"
```
## Searching and Filtering
### Search Code in PR
```bash
# Get PR diff and search
gh pr diff <number> --repo <owner>/<repo> | grep "search_term"
# Search in specific files
gh pr view <number> --repo <owner>/<repo> --json files --jq '.files[] | select(.path | contains("search_term"))'
```
### Filter by File Type
```bash
gh pr view <number> --repo <owner>/<repo> --json files --jq '.files[] | select(.path | endswith(".py"))'
```
## Labels, Assignees, and Metadata
### Get Labels
```bash
gh pr view <number> --repo <owner>/<repo> --json labels --jq '.labels[].name'
```
### Get Assignees
```bash
gh pr view <number> --repo <owner>/<repo> --json assignees --jq '.assignees[].login'
```
### Get Reviewers
```bash
gh pr view <number> --repo <owner>/<repo> --json reviewRequests --jq '.reviewRequests[].login'
```
## Advanced Queries
### Get PR Timeline
```bash
gh api /repos/<owner>/<repo>/issues/<number>/timeline
```
### Get PR Events
```bash
gh api /repos/<owner>/<repo>/issues/<number>/events
```
### Get All PR Data
```bash
gh pr view <number> --repo <owner>/<repo> --json \
number,title,body,state,author,headRefName,baseRefName,\
commits,reviews,comments,files,labels,assignees,milestone,\
createdAt,updatedAt,mergedAt,closedAt,url,isDraft
```
## Common JQ Patterns
### Extract specific fields
```bash
--jq '.field'
--jq '.array[].field'
--jq '.[] | {field1, field2}'
```
### Filter arrays
```bash
--jq '.[] | select(.field == "value")'
--jq '.[] | select(.field | contains("substring"))'
```
### Count items
```bash
--jq '. | length'
--jq '.array | length'
```
### Map and transform
```bash
--jq '.array | map(.field)'
--jq '.[] | {newField: .oldField}'
```
## Line Number Considerations for Inline Comments
**IMPORTANT**: With the current review-comments API, the `line` parameter is the line number **in the file**, counted on the side you specify. Only the deprecated `position` parameter is relative to the diff.
### Understanding `line` and `side`
For an inline comment:
- The `side` parameter determines which version of the file the line number refers to:
  - `"RIGHT"`: new version (after changes)
  - `"LEFT"`: old version (before changes)
### Finding Diff Line Numbers
```bash
# Get diff with line numbers
gh pr diff <number> --repo <owner>/<repo> | cat -n
# Get specific file diff
gh api /repos/<owner>/<repo>/pulls/<number>/files --jq '.[] | select(.filename == "path/to/file")'
```
### Example Diff
```diff
@@ -10,4 +10,5 @@ def process_data(data):
     if not data:
         return None
-    result = old_function(data)
+    # New implementation
+    result = new_function(data)
     return result
```
In this diff:
- The removed line is old-file line 12, so comment on it with `side: "LEFT"`, `line: 12`
- The added lines are new-file lines 12-13, so comment on them with `side: "RIGHT"`
- The hunk header `@@ -10,4 +10,5 @@` gives the starting line on each side; count down from there
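The bookkeeping can be sketched in a few lines of Python: walk a unified-diff hunk, tracking an old-file and a new-file counter, and emit the side and file line number for each row (a standalone illustration, not part of the skill's scripts):

```python
import re

def hunk_line_numbers(hunk: str):
    """Yield (side, file_line, text) for each body line of a diff hunk.

    LEFT lines are numbered in the old file, RIGHT lines in the new file,
    matching the side/line parameters of review-comment APIs.
    """
    lines = hunk.splitlines()
    m = re.match(r"@@ -(\d+),?\d* \+(\d+),?\d* @@", lines[0])
    old_ln, new_ln = int(m.group(1)), int(m.group(2))
    for line in lines[1:]:
        if line.startswith("-"):
            yield ("LEFT", old_ln, line[1:])
            old_ln += 1
        elif line.startswith("+"):
            yield ("RIGHT", new_ln, line[1:])
            new_ln += 1
        else:  # context line exists on both sides
            yield ("BOTH", new_ln, line[1:])
            old_ln += 1
            new_ln += 1

hunk = """@@ -10,4 +10,5 @@ def process_data(data):
     if not data:
         return None
-    result = old_function(data)
+    # New implementation
+    result = new_function(data)
     return result"""

for side, ln, text in hunk_line_numbers(hunk):
    print(side, ln, text.strip())
```

The removed line resolves to LEFT 12 and the two added lines to RIGHT 12 and RIGHT 13, which is exactly what you would pass as `side` and `line`.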
## Error Handling
### Common Errors
**Resource not found**:
```bash
# Check repo access
gh repo view <owner>/<repo>
# Check PR exists
gh pr list --repo <owner>/<repo> | grep <number>
```
**API rate limit**:
```bash
# Check rate limit
gh api /rate_limit
# Use authentication to get higher limits
gh auth login
```
**Permission denied**:
```bash
# Check authentication
gh auth status
# May need additional scopes
gh auth refresh -s repo
```
## Tips and Best Practices
1. **Use `--paginate`** for large result sets (comments, commits)
2. **Combine with `jq`** for powerful filtering and formatting
3. **Cache results** by saving to files to avoid repeated API calls
4. **Check rate limits** when making many API calls
5. **Use `--json` output** for programmatic parsing
6. **Specify `--repo`** when outside repository directory
7. **Get latest commit** before adding inline comments
8. **Test comments** on draft PRs or test repositories first
## Reference Links
- GitHub CLI Manual: https://cli.github.com/manual/
- GitHub REST API: https://docs.github.com/en/rest
- JQ Manual: https://jqlang.github.io/jq/manual/
- PR Review Comments API: https://docs.github.com/en/rest/pulls/comments
- PR Reviews API: https://docs.github.com/en/rest/pulls/reviews

(new file, 345 lines)
# Code Review Criteria
This document outlines the comprehensive criteria for conducting pull request code reviews. Use this as a checklist when reviewing PRs to ensure thorough, consistent, and constructive feedback.
## Review Process Overview
When reviewing a PR, the goal is to ensure changes are:
- **Correct**: Solves the intended problem without bugs
- **Maintainable**: Easy to understand and modify
- **Aligned**: Follows project standards and conventions
- **Secure**: Free from vulnerabilities
- **Tested**: Covered by appropriate tests
## 1. Functionality and Correctness
### Problem Resolution
- [ ] **Does the code solve the intended problem?**
- Verify changes address the issue or feature described in the PR
- Cross-reference with linked tickets (JIRA, GitHub issues)
- Test manually or run the code if possible
### Bugs and Logic
- [ ] **Are there bugs or logical errors?**
- Check for off-by-one errors
- Verify null/undefined/None handling
- Review assumptions about inputs and outputs
- Look for race conditions or concurrency issues
- Check loop termination conditions
### Edge Cases and Error Handling
- [ ] **Edge cases handled?**
- Empty collections (arrays, lists, maps)
- Null/None/undefined values
- Boundary values (min/max integers, empty strings)
- Invalid or malformed inputs
- [ ] **Error handling implemented?**
- Network failures
- File system errors
- Database connection issues
- API errors and timeouts
- Graceful degradation
### Compatibility
- [ ] **Works across supported environments?**
- Browser compatibility (if web app)
- OS versions (if desktop/mobile)
- Database versions
- Language/runtime versions
- Doesn't break existing features (regression check)
## 2. Readability and Maintainability
### Code Clarity
- [ ] **Easy to read and understand?**
- Meaningful variable names (avoid `x`, `temp`, `data`)
- Meaningful function names (verb-first, descriptive)
- Short methods/functions (ideally < 50 lines)
- Logical structure and flow
- Minimal nested complexity
### Modularity
- [ ] **Single Responsibility Principle?**
- Functions/methods do one thing well
- Classes have a clear, focused purpose
- No "god objects" or overly complex logic
- [ ] **Suggest refactoring if needed:**
- Extract complex logic into helper functions
- Break large functions into smaller ones
- Separate concerns (UI, business logic, data access)
### Code Duplication
- [ ] **DRY (Don't Repeat Yourself)?**
- Repeated code abstracted into helpers
- Shared logic moved to libraries/utilities
- Avoid copy-paste programming
### Future-Proofing
- [ ] **Allows for easy extensions?**
- Avoid hard-coded values (use constants/configs)
- Use dependency injection where appropriate
- Follow SOLID principles
- Consider extensibility without modification
## 3. Style and Conventions
### Style Guide Adherence
- [ ] **Follows project linter rules?**
- ESLint (JavaScript/TypeScript)
- Pylint/Flake8/Black (Python)
- RuboCop (Ruby)
- Checkstyle/PMD (Java)
- golangci-lint (Go)
- [ ] **Formatting consistent?**
- Proper indentation (spaces vs. tabs)
- Consistent spacing
- Line length limits
- Import/require organization
### Codebase Consistency
- [ ] **Matches existing patterns?**
- Follows established architectural patterns
- Uses existing utilities and helpers
- Consistent naming conventions
- Matches idioms of the language/framework
### Comments and Documentation
- [ ] **Sufficient comments?**
- Complex algorithms explained
- Non-obvious decisions documented
- API contracts clarified
- TODOs tracked with ticket numbers
- [ ] **Not excessive?**
- Code should be self-documenting where possible
- Avoid obvious comments ("increment i")
- [ ] **Documentation updated?**
- README reflects new features
- API docs updated
- Inline docs (JSDoc, docstrings, etc.)
- Architecture diagrams current
## 4. Performance and Efficiency
### Resource Usage
- [ ] **Algorithm efficiency?**
- Avoid O(n²) or worse in loops
- Use appropriate data structures
- Minimize database queries (N+1 problem)
- Avoid unnecessary computations
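To make the N+1 pattern concrete, here is a schematic sketch; `fetch_user` and `fetch_users` are hypothetical stand-ins for single-row and batched queries, and a call counter plays the role of the database:

```python
# Hypothetical lookups; a real version would issue SQL, but counting
# "queries" is enough to show the shape of the problem.
CALLS = {"single": 0, "batch": 0}
USERS = {i: {"id": i, "name": f"user{i}"} for i in range(100)}

def fetch_user(user_id):
    """One query per call: N calls for N rows, hence N+1 overall."""
    CALLS["single"] += 1
    return USERS[user_id]

def fetch_users(user_ids):
    """One query for the whole set of ids."""
    CALLS["batch"] += 1
    return [USERS[i] for i in user_ids]

order_user_ids = list(range(50))

# N+1: one "query" per order row
authors_n_plus_1 = [fetch_user(uid) for uid in order_user_ids]

# Batched: a single "query" for all 50 rows
authors_batched = fetch_users(order_user_ids)

print(CALLS["single"], CALLS["batch"])
```

Fifty single-row calls versus one batched call for identical results; in a review, loops that issue a query per iteration are the tell-tale sign.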
### Scalability
- [ ] **Performs well under load?**
- No blocking operations in critical paths
- Async/await for I/O operations
- Pagination for large datasets
- Caching where appropriate
### Optimization Balance
- [ ] **Optimizations necessary?**
- Premature optimization avoided
- Readability not sacrificed for micro-optimizations
- Benchmark before complex optimizations
- Profile to identify actual bottlenecks
## 5. Security and Best Practices
### Vulnerabilities
- [ ] **Common security issues addressed?**
- SQL injection (use parameterized queries)
- XSS (Cross-Site Scripting) - proper escaping
- CSRF (Cross-Site Request Forgery) - tokens
- Command injection
- Path traversal
- Authentication/authorization checks
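For the SQL injection item, a parameterized query with the standard library's sqlite3 module looks like this (an illustrative sketch; any DB-API driver binds placeholders the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # hostile input

# Unsafe: string interpolation would let the input rewrite the query:
#   conn.execute(f"SELECT * FROM users WHERE id = {user_input}")

# Safe: the driver binds the value as data; it can never become SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE id = ?", (user_input,)
).fetchall()
print(rows)
```

The bound hostile string matches no integer id, so the query returns no rows instead of every row.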
### Data Handling
- [ ] **Sensitive data protected?**
- Encrypted in transit (HTTPS/TLS)
- Encrypted at rest
- Input validation and sanitization
- Output encoding
- PII handling compliance (GDPR, etc.)
- [ ] **Secrets management?**
- No hardcoded passwords/API keys
- Use environment variables
- Use secret management systems
- No secrets in logs
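The environment-variable pattern from the checklist, sketched in Python (`SERVICE_API_KEY` is a placeholder name for illustration):

```python
import os

def get_api_key() -> str:
    """Read the key from the environment instead of hardcoding it.

    Failing loudly when the variable is missing beats silently falling
    back to a default that might end up in version control.
    """
    key = os.environ.get("SERVICE_API_KEY")
    if not key:
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key

# Demo only: in production the variable comes from the deployment
# environment or a secret manager, never from source code.
os.environ["SERVICE_API_KEY"] = "dummy-value-for-demo"
print(get_api_key())
```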
### Dependencies
- [ ] **New packages justified?**
- Actually necessary
- From trusted sources
- Up-to-date and maintained
- No known vulnerabilities
- License compatible
- [ ] **Dependency management?**
- Lock files committed
- Minimal dependency footprint
- Consider alternatives if bloated
## 6. Testing and Quality Assurance
### Test Coverage
- [ ] **Tests exist for new code?**
- Unit tests for individual functions/methods
- Integration tests for workflows
- End-to-end tests for critical paths
- [ ] **Tests cover scenarios?**
- Happy paths
- Error conditions
- Edge cases
- Boundary conditions
### Test Quality
- [ ] **Tests are meaningful?**
- Not just for coverage metrics
- Assert actual behavior
- Test intent, not implementation
- Avoid brittle tests
- [ ] **Test maintainability?**
- Clear test names
- Arrange-Act-Assert pattern
- Minimal test duplication
- Fast execution
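The Arrange-Act-Assert structure from the checklist, as a minimal unittest sketch (`apply_discount` is a hypothetical function under test, not from the codebase above):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        # Arrange: set up inputs
        price, percent = 200.0, 10.0
        # Act: call the code under test once
        result = apply_discount(price, percent)
        # Assert: check observable behavior, not implementation details
        self.assertEqual(result, 180.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

The test name states the scenario, and each phase is one visually distinct block, which keeps failures easy to diagnose.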
### CI/CD Integration
- [ ] **Automated checks pass?**
- Linting
- Tests (unit, integration, e2e)
- Build process
- Security scans
- Code coverage thresholds
## 7. Overall PR Quality
### Scope
- [ ] **PR is focused?**
- Single feature/fix per PR
- Not too large (< 400 lines ideal)
- Suggest splitting if combines unrelated changes
### Commit History
- [ ] **Clean, atomic commits?**
- Each commit is logical unit
- Descriptive commit messages
- Follow conventional commits if applicable
- Avoid "fix", "update", "wip" vagueness
### PR Description
- [ ] **Clear description?**
- Explains **why** changes were made
- Links to tickets/issues
- Steps to reproduce/test
- Screenshots for UI changes
- Breaking changes called out
- Migration steps if needed
### Impact Assessment
- [ ] **Considered downstream effects?**
- API changes (breaking vs. backward-compatible)
- Database schema changes
- Impact on other teams/services
- Performance implications
- Monitoring and alerting needs
## Review Feedback Guidelines
### Communication Style
- **Be constructive and kind**
- Frame as suggestions: "Consider X because Y"
- Not criticism: "This is wrong"
- Acknowledge good work
- Explain the "why" behind feedback
### Prioritization
- **Focus on critical issues first:**
1. Bugs and correctness
2. Security vulnerabilities
3. Performance problems
4. Design/architecture issues
5. Style and conventions
### Feedback Markers
Use clear markers to indicate severity:
- **🔴 Blocker**: Must be fixed before merge
- **🟡 Important**: Should be addressed
- **🟢 Nit**: Nice to have, optional
- **💡 Suggestion**: Consider for future
- **❓ Question**: Clarification needed
- **✅ Praise**: Good work!
### Time Efficiency
- Review promptly (within 24 hours)
- For large PRs, review in chunks
- Request smaller PRs if too large
- Use automated tools to catch style issues
### Decision Making
- **Approve**: Solid overall, minor nits acceptable
- **Request Changes**: Blockers must be addressed
- **Comment**: Provide feedback without blocking
## Language/Framework-Specific Considerations
### JavaScript/TypeScript
- Type safety (TypeScript)
- Promise handling (avoid callback hell)
- Memory leaks (event listeners)
- Bundle size impact
### Python
- PEP 8 compliance
- Type hints (Python 3.5+)
- Virtual environment dependencies
- Generator usage for memory efficiency
### Java
- Memory management
- Exception handling (checked vs. unchecked)
- Thread safety
- Immutability where appropriate
### Go
- Error handling (no exceptions)
- Goroutine management
- Channel usage
- Interface design
### SQL/Database
- Index usage
- Query performance
- Transaction boundaries
- Migration reversibility
### Frontend (React, Vue, Angular)
- Component reusability
- State management
- Accessibility (a11y)
- Performance (re-renders, bundle size)
## Tools and Automation
Leverage tools to automate checks:
- **Linters**: ESLint, Pylint, RuboCop
- **Formatters**: Prettier, Black, gofmt
- **Security**: Snyk, CodeQL, Dependabot
- **Coverage**: Codecov, Coveralls
- **Performance**: Lighthouse, WebPageTest
- **Accessibility**: axe, WAVE
## Resources
- Google Engineering Practices: https://google.github.io/eng-practices/review/
- GitHub Code Review Guide: https://github.com/features/code-review
- OWASP Top 10: https://owasp.org/www-project-top-ten/
- Clean Code (Robert C. Martin)
- Code Complete (Steve McConnell)

(new file, 71 lines)
# Common Review Scenarios
Detailed workflows for specific review use cases.
## Scenario 1: Quick Review Request
**Trigger**: User provides PR URL and requests review.
**Workflow**:
1. Run `pr-metadata.sh` and `pr-diff.sh` to collect data
2. Read `metadata.json`
3. Scan `diff.patch` for obvious issues
4. Apply critical criteria (security, bugs, tests)
5. Create findings JSON with analysis
6. Run `generate_review_files.py` to create review files
7. Direct user to review `pr/review.md` and `pr/human.md`
8. Ask for explicit approval, then post with `pr-review.sh`
## Scenario 2: Thorough Review with Inline Comments
**Trigger**: User requests comprehensive review with inline comments.
**Workflow**:
1. Run `pr-metadata.sh` and `pr-diff.sh` to collect data
2. Read all collected files (metadata, diff)
3. Apply the full `review_criteria.md` checklist
4. Identify critical issues, important issues, and nits
5. Create findings JSON with an `inline_comments` array
6. Run `generate_review_files.py` to create all files
7. Direct user to:
   - Review `pr/review.md` for detailed analysis
   - Edit `pr/human.md` if needed
   - Check `pr/inline.md` for proposed comments
8. After explicit approval, post with `pr-review.sh`, optionally including the inline comments from `pr/inline.md`
## Scenario 3: Security-Focused Review
**Trigger**: User requests security-specific review.
**Workflow**:
1. Fetch PR data
2. Focus on `review_criteria.md` Section 5 (Security)
3. Check for: SQL injection, XSS, CSRF, secrets exposure
4. Examine dependencies in metadata
5. Review authentication/authorization changes
6. Report security findings with severity ratings
## Scenario 4: Review with Related Tickets
**Trigger**: User requests review against linked JIRA/GitHub ticket.
**Workflow**:
1. Fetch PR metadata and diff
2. Extract ticket references (e.g. `#123`) from the PR description in `metadata.json`
3. Compare PR changes against ticket requirements
4. Verify all acceptance criteria met
5. Note any missing functionality
6. Suggest additional tests if needed
## Scenario 5: Large PR Review (>400 lines)
**Trigger**: PR contains more than 400 lines of changes.
**Workflow**:
1. Suggest splitting into smaller PRs if feasible
2. Review in logical chunks by file or feature
3. Focus on architecture and design first
4. Document structural concerns before line-level issues
5. Prioritize security and correctness over style

(new file, 55 lines)
# Troubleshooting Guide
Common issues and solutions for the PR Reviewer skill.
## gh CLI Not Found
Install GitHub CLI: https://cli.github.com/
```bash
# macOS
brew install gh
# Linux
sudo apt install gh # or yum, dnf, etc.
# Authenticate
gh auth login
```
## Permission Denied Errors
Check authentication:
```bash
gh auth status
gh auth refresh -s repo
```
## Invalid PR URL
Ensure URL format: `https://github.com/owner/repo/pull/NUMBER`
## Line Number Mismatch in Inline Comments
With the current API, `line` plus `side` refer to line numbers in the file (`RIGHT` counts the new version, `LEFT` the old); only the deprecated `position` parameter is relative to the diff.
Use `gh pr diff <number>` to confirm which lines a hunk actually touches.
## Rate Limit Errors
```bash
# Check rate limit
gh api /rate_limit
# Authenticated users get higher limits
gh auth login
```
## Common Error Patterns
| Error | Cause | Solution |
|-------|-------|----------|
| 401 Unauthorized | Token expired | Run `gh auth refresh` |
| 403 Forbidden | Missing scope | Run `gh auth refresh -s repo` |
| 404 Not Found | Private repo access | Verify repo permissions |
| 422 Unprocessable | Invalid request | Check command arguments |

(new file, 480 lines)
#!/usr/bin/env python3
"""
Generate structured review files from PR analysis.
Creates three review files:
- pr/review.md: Detailed review for internal use
- pr/human.md: Short, clean review for posting (no emojis, em-dashes, line numbers)
- pr/inline.md: List of inline comments with code snippets
Usage:
python generate_review_files.py <pr_review_dir> --findings <findings_json>
Example:
python generate_review_files.py /tmp/PRs/myrepo/123 --findings findings.json
"""
import argparse
import json
import os
import sys
from pathlib import Path
from typing import Dict, List, Any


def create_pr_directory(pr_review_dir: Path) -> Path:
    """Create the pr/ subdirectory for review files."""
    pr_dir = pr_review_dir / "pr"
    pr_dir.mkdir(parents=True, exist_ok=True)
    return pr_dir


def load_findings(findings_file: str) -> Dict[str, Any]:
    """
    Load review findings from JSON file.

    Expected structure:
    {
        "summary": "Overall assessment...",
        "blockers": [{
            "category": "Security",
            "issue": "SQL injection vulnerability",
            "file": "src/db/queries.py",
            "line": 45,
            "details": "Using string concatenation...",
            "fix": "Use parameterized queries",
            "code_snippet": "result = db.execute(...)"
        }],
        "important": [...],
        "nits": [...],
        "suggestions": [...],
        "questions": [...],
        "praise": [...],
        "inline_comments": [{
            "file": "src/app.py",
            "line": 42,
            "comment": "Consider edge case handling",
            "code_snippet": "def process(data):\n    return data.strip()",
            "start_line": 41,
            "end_line": 43
        }]
    }
    """
    with open(findings_file, 'r') as f:
        return json.load(f)


def generate_detailed_review(findings: Dict[str, Any], metadata: Dict[str, Any]) -> str:
    """Generate detailed review.md with full analysis."""
    review = f"""# Pull Request Review - Detailed Analysis

## PR Information

**Repository**: {metadata.get('repository', 'N/A')}
**PR Number**: #{metadata.get('number', 'N/A')}
**Title**: {metadata.get('title', 'N/A')}
**Author**: {metadata.get('author', 'N/A')}
**Branch**: {metadata.get('head_branch', 'N/A')} → {metadata.get('base_branch', 'N/A')}

## Summary

{findings.get('summary', 'No summary provided')}

"""
    # Add blockers
    blockers = findings.get('blockers', [])
    if blockers:
        review += "## 🔴 Critical Issues (Blockers)\n\n"
        review += "**These MUST be fixed before merging.**\n\n"
        for i, blocker in enumerate(blockers, 1):
            review += f"### {i}. {blocker.get('category', 'Issue')}: {blocker.get('issue', 'Unknown')}\n\n"
            if blocker.get('file'):
                review += f"**File**: `{blocker['file']}"
                if blocker.get('line'):
                    review += f":{blocker['line']}"
                review += "`\n\n"
            review += f"**Problem**: {blocker.get('details', 'No details')}\n\n"
            if blocker.get('fix'):
                review += f"**Solution**: {blocker['fix']}\n\n"
            if blocker.get('code_snippet'):
                review += f"**Current Code**:\n```\n{blocker['code_snippet']}\n```\n\n"
            review += "---\n\n"

    # Add important issues
    important = findings.get('important', [])
    if important:
        review += "## 🟡 Important Issues\n\n"
        review += "**Should be addressed before merging.**\n\n"
        for i, issue in enumerate(important, 1):
            review += f"### {i}. {issue.get('category', 'Issue')}: {issue.get('issue', 'Unknown')}\n\n"
            if issue.get('file'):
                review += f"**File**: `{issue['file']}"
                if issue.get('line'):
                    review += f":{issue['line']}"
                review += "`\n\n"
            review += f"**Impact**: {issue.get('details', 'No details')}\n\n"
            if issue.get('fix'):
                review += f"**Suggestion**: {issue['fix']}\n\n"
            if issue.get('code_snippet'):
                review += f"**Code**:\n```\n{issue['code_snippet']}\n```\n\n"
            review += "---\n\n"

    # Add nits
    nits = findings.get('nits', [])
    if nits:
        review += "## 🟢 Minor Issues (Nits)\n\n"
        review += "**Nice to have, but not blocking.**\n\n"
        for i, nit in enumerate(nits, 1):
            review += f"{i}. **{nit.get('category', 'Style')}**: {nit.get('issue', 'Unknown')}\n"
            if nit.get('file'):
                review += f"   - File: `{nit['file']}`\n"
            if nit.get('details'):
                review += f"   - {nit['details']}\n"
        review += "\n"

    # Add suggestions
    suggestions = findings.get('suggestions', [])
    if suggestions:
        review += "## 💡 Suggestions for Future\n\n"
        for i, suggestion in enumerate(suggestions, 1):
            review += f"{i}. {suggestion}\n"
        review += "\n"

    # Add questions
    questions = findings.get('questions', [])
    if questions:
        review += "## ❓ Questions / Clarifications Needed\n\n"
        for i, question in enumerate(questions, 1):
            review += f"{i}. {question}\n"
        review += "\n"

    # Add praise
    praise = findings.get('praise', [])
    if praise:
        review += "## ✅ Positive Notes\n\n"
        for item in praise:
            review += f"- {item}\n"
        review += "\n"

    # Add overall recommendation
    review += "## Overall Recommendation\n\n"
    if blockers:
        review += "**Request Changes** - Critical issues must be addressed.\n"
    elif important:
        review += "**Request Changes** - Important issues should be fixed.\n"
    else:
        review += "**Approve** - Looks good! Minor nits can be addressed optionally.\n"

    return review
def generate_human_review(findings: Dict[str, Any], metadata: Dict[str, Any]) -> str:
"""
Generate short, clean human.md for posting.
Rules:
- No emojis
- No em dashes (use regular hyphens)
- No code line numbers
- Concise and professional
"""
def clean_text(text: str) -> str:
"""Remove em-dashes and replace with regular hyphens."""
if not text:
return text
# Replace em dash (—) with regular hyphen (-)
        # Also replace en dash (–) with a regular hyphen
        return text.replace('—', '-').replace('–', '-')
title = clean_text(metadata.get('title', 'N/A'))
summary = clean_text(findings.get('summary', 'No summary provided'))
review = f"""# Code Review
**PR #{metadata.get('number', 'N/A')}**: {title}
## Summary
{summary}
"""
# Add blockers - no emojis
blockers = findings.get('blockers', [])
if blockers:
review += "## Critical Issues - Must Fix\n\n"
for i, blocker in enumerate(blockers, 1):
# No emojis, no em dashes, no line numbers
issue = clean_text(blocker.get('issue', 'Issue'))
details = clean_text(blocker.get('details', 'No details'))
fix = clean_text(blocker.get('fix', ''))
review += f"{i}. **{issue}**\n"
if blocker.get('file'):
# File path without line number
review += f" - File: `{blocker['file']}`\n"
review += f" - {details}\n"
if fix:
review += f" - Fix: {fix}\n"
review += "\n"
# Add important issues
important = findings.get('important', [])
if important:
review += "## Important Issues - Should Fix\n\n"
for i, issue_item in enumerate(important, 1):
issue = clean_text(issue_item.get('issue', 'Issue'))
details = clean_text(issue_item.get('details', 'No details'))
fix = clean_text(issue_item.get('fix', ''))
review += f"{i}. **{issue}**\n"
if issue_item.get('file'):
review += f" - File: `{issue_item['file']}`\n"
review += f" - {details}\n"
if fix:
review += f" - Suggestion: {fix}\n"
review += "\n"
# Add nits - keep brief
nits = findings.get('nits', [])
if nits and len(nits) <= 3: # Only include if few
review += "## Minor Issues\n\n"
for i, nit in enumerate(nits, 1):
issue = clean_text(nit.get('issue', 'Issue'))
review += f"{i}. {issue}"
if nit.get('file'):
review += f" in `{nit['file']}`"
review += "\n"
review += "\n"
# Add praise
praise = findings.get('praise', [])
if praise:
review += "## Positive Notes\n\n"
for item in praise:
clean_item = clean_text(item)
review += f"- {clean_item}\n"
review += "\n"
# Add overall recommendation - no emojis
if blockers:
review += "## Recommendation\n\nRequest changes - critical issues need to be addressed before merging.\n"
elif important:
review += "## Recommendation\n\nRequest changes - please address the important issues listed above.\n"
else:
review += "## Recommendation\n\nApprove - the code looks good. Minor items can be addressed optionally.\n"
return review
def generate_inline_comments_file(findings: Dict[str, Any]) -> str:
"""
Generate inline.md with list of proposed inline comments.
Includes code snippets with line number headers.
"""
inline_comments = findings.get('inline_comments', [])
if not inline_comments:
return "# Inline Comments\n\nNo inline comments proposed.\n"
content = "# Proposed Inline Comments\n\n"
content += f"**Total Comments**: {len(inline_comments)}\n\n"
content += "Review these before posting. Edit as needed.\n\n"
content += "---\n\n"
for i, comment in enumerate(inline_comments, 1):
content += f"## Comment {i}\n\n"
content += f"**File**: `{comment.get('file', 'unknown')}`\n"
content += f"**Line**: {comment.get('line', 'N/A')}\n"
if comment.get('start_line') and comment.get('end_line'):
content += f"**Range**: Lines {comment['start_line']}-{comment['end_line']}\n"
content += f"\n**Comment**:\n{comment.get('comment', 'No comment')}\n\n"
if comment.get('code_snippet'):
# Add line numbers in header
start = comment.get('start_line', comment.get('line', 1))
end = comment.get('end_line', comment.get('line', 1))
if start == end:
content += f"**Code (Line {start})**:\n"
else:
content += f"**Code (Lines {start}-{end})**:\n"
content += f"```\n{comment['code_snippet']}\n```\n\n"
        # This skill dropped add_inline_comment.py, so point at the platform
        # review-comment API instead of a stale script invocation
        owner = comment.get('owner', 'OWNER')
        repo = comment.get('repo', 'REPO')
        pr_num = comment.get('pr_number', 'PR_NUM')
        content += "**To post** (GitHub example; $HEAD_SHA is the PR head commit):\n```bash\n"
        content += f"gh api repos/{owner}/{repo}/pulls/{pr_num}/comments \\\n"
        content += f"  -f commit_id=\"$HEAD_SHA\" -f path=\"{comment.get('file', 'file.py')}\" \\\n"
        content += f"  -F line={comment.get('line', 42)} -f body=\"{comment.get('comment', 'comment')}\"\n"
        content += "```\n\n"
content += "---\n\n"
return content
def generate_claude_commands(pr_review_dir: Path, metadata: Dict[str, Any]):
"""Generate .claude directory with custom slash commands."""
claude_dir = pr_review_dir / ".claude" / "commands"
claude_dir.mkdir(parents=True, exist_ok=True)
owner = metadata.get('owner', 'owner')
repo = metadata.get('repo', 'repo')
pr_number = metadata.get('number', '123')
    # /send command - approve and post human.md
    # (posting uses the GitHub `gh` CLI; Gitea would need its REST API instead)
send_cmd = f"""Post the human-friendly review and approve the PR.
Steps:
1. Read the file `pr/human.md` in the current directory
2. Post the review content as a PR comment using:
`gh pr comment {pr_number} --repo {owner}/{repo} --body-file pr/human.md`
3. Approve the PR using:
`gh pr review {pr_number} --repo {owner}/{repo} --approve`
4. Confirm to the user that the review was posted and PR was approved
"""
with open(claude_dir / "send.md", 'w') as f:
f.write(send_cmd)
# /send-decline command - request changes and post human.md
send_decline_cmd = f"""Post the human-friendly review and request changes on the PR.
Steps:
1. Read the file `pr/human.md` in the current directory
2. Post the review content as a PR comment using:
`gh pr comment {pr_number} --repo {owner}/{repo} --body-file pr/human.md`
3. Request changes on the PR using:
`gh pr review {pr_number} --repo {owner}/{repo} --request-changes`
4. Confirm to the user that the review was posted and changes were requested
"""
with open(claude_dir / "send-decline.md", 'w') as f:
f.write(send_decline_cmd)
# /show command - open in VS Code
show_cmd = f"""Open the PR review directory in VS Code for editing.
Steps:
1. Run `code .` to open the current directory in VS Code
2. Tell the user they can now edit the review files:
- pr/review.md (detailed review)
- pr/human.md (short review for posting)
- pr/inline.md (inline comments)
3. Remind them to use /send or /send-decline when ready to post
"""
with open(claude_dir / "show.md", 'w') as f:
f.write(show_cmd)
print(f"✅ Created slash commands in {claude_dir}")
print(" - /send (approve and post)")
print(" - /send-decline (request changes and post)")
print(" - /show (open in VS Code)")
def main():
parser = argparse.ArgumentParser(
description='Generate structured review files from PR analysis',
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__
)
parser.add_argument('pr_review_dir', help='PR review directory path')
parser.add_argument('--findings', required=True, help='JSON file with review findings')
parser.add_argument('--metadata', help='JSON file with PR metadata (optional)')
args = parser.parse_args()
try:
# Load findings
findings = load_findings(args.findings)
# Load metadata if provided
metadata = {}
if args.metadata and os.path.exists(args.metadata):
with open(args.metadata, 'r') as f:
metadata = json.load(f)
# Extract metadata from findings if not provided
if not metadata:
metadata = findings.get('metadata', {})
# Create pr directory
pr_review_dir = Path(args.pr_review_dir)
pr_dir = create_pr_directory(pr_review_dir)
print(f"📝 Generating review files in {pr_dir}...")
# Generate detailed review
detailed_review = generate_detailed_review(findings, metadata)
review_file = pr_dir / "review.md"
with open(review_file, 'w') as f:
f.write(detailed_review)
print(f"✅ Created detailed review: {review_file}")
# Generate human-friendly review
human_review = generate_human_review(findings, metadata)
human_file = pr_dir / "human.md"
with open(human_file, 'w') as f:
f.write(human_review)
print(f"✅ Created human review: {human_file}")
# Generate inline comments file
inline_comments = generate_inline_comments_file(findings)
inline_file = pr_dir / "inline.md"
with open(inline_file, 'w') as f:
f.write(inline_comments)
print(f"✅ Created inline comments: {inline_file}")
# Generate Claude slash commands
generate_claude_commands(pr_review_dir, metadata)
# Create summary file
summary = f"""PR Review Files Generated
========================
Directory: {pr_review_dir}
Files created:
- pr/review.md - Detailed analysis for your review
- pr/human.md - Clean version for posting (no emojis, no line numbers)
- pr/inline.md - Proposed inline comments with code snippets
Slash commands available:
- /send - Post human.md and approve PR
- /send-decline - Post human.md and request changes
- /show - Open directory in VS Code
Next steps:
1. Review the files (use /show to open in VS Code)
2. Edit as needed
3. Use /send or /send-decline when ready to post
IMPORTANT: Nothing will be posted until you run /send or /send-decline
"""
summary_file = pr_review_dir / "REVIEW_READY.txt"
with open(summary_file, 'w') as f:
f.write(summary)
print(f"\n{summary}")
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)
if __name__ == '__main__':
main()
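# Example invocation (paths below are illustrative, not fixed by the skill):
#   python3 generate_review_files.py /tmp/pr-review \
#       --findings /tmp/pr-review/findings.json \
#       --metadata /tmp/pr-review/metadata.json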