diff --git a/README.md b/README.md
index 807ed2f..ddcdad9 100644
--- a/README.md
+++ b/README.md
@@ -1,86 +1,156 @@
# Agent Skills
-Curated agent skill fleet for Mosaic Stack. Covers coding, business development, design, marketing, orchestration, and more. Platform-aware — works with both GitHub (`gh`) and Gitea (`tea`) via our abstraction scripts.
+Complete agent skill fleet for Mosaic Stack. 78 skills across 10 domains — coding, business development, design, marketing, writing, orchestration, document generation, and more. Platform-aware — works with both GitHub (`gh`) and Gitea (`tea`) via our abstraction scripts.
-## Skills (23)
+## Skills (78)
-### Code Quality & Review
+### Code Quality & Review (5)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `pr-reviewer` | Structured PR code review workflow (Gitea/GitHub) | Adapted from [SpillwaveSolutions](https://github.com/SpillwaveSolutions/pr-reviewer-skill) |
-| `code-review-excellence` | Code review methodology and checklists | Adapted from [awesome-skills](https://github.com/awesome-skills/code-review-skill) |
-| `verification-before-completion` | Evidence-based completion claims — no success without verification | [obra/superpowers](https://github.com/obra/superpowers) |
+| `pr-reviewer` | Structured PR code review workflow (Gitea/GitHub) | Adapted from SpillwaveSolutions |
+| `code-review-excellence` | Code review methodology and checklists | awesome-skills |
+| `verification-before-completion` | Evidence-based completion claims | obra/superpowers |
+| `receiving-code-review` | How to receive and respond to code reviews | obra/superpowers |
+| `requesting-code-review` | How to request effective code reviews | obra/superpowers |
-### Frontend & UI
+### Frontend & UI (8)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `next-best-practices` | Next.js 15+ — RSC, async patterns, self-hosting, data patterns | [vercel-labs/next-skills](https://github.com/vercel-labs/next-skills) |
-| `vercel-react-best-practices` | React/Next.js performance optimization (57 rules) | [vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills) |
-| `shadcn-ui` | Component patterns — forms, dialogs, tables, charts | [developer-kit](https://github.com/giuseppe-trisciuoglio/developer-kit) |
-| `tailwind-design-system` | Tailwind CSS v4 design system patterns | Adapted from [wshobson/agents](https://github.com/wshobson/agents) |
-| `ui-animation` | Motion design — performance, accessibility, easing curves | [mblode/agent-skills](https://github.com/mblode/agent-skills) |
+| `next-best-practices` | Next.js 15+ — RSC, async, self-hosting, data patterns | vercel-labs/next-skills |
+| `vercel-react-best-practices` | React/Next.js performance (57 rules) | vercel-labs |
+| `vercel-composition-patterns` | React composition and component patterns | vercel-labs |
+| `vercel-react-native-skills` | React Native development patterns | vercel-labs |
+| `shadcn-ui` | Component patterns — forms, dialogs, tables, charts | developer-kit |
+| `tailwind-design-system` | Tailwind CSS v4 design system patterns | wshobson |
+| `ui-animation` | Motion design — performance, accessibility, easing | mblode |
+| `web-design-guidelines` | Web design principles and guidelines | vercel-labs |
-### Backend & API
+### Backend & API (4)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `nestjs-best-practices` | NestJS — 40 rules across 10 categories, priority-ranked | [kadajett/agent-nestjs-skills](https://github.com/kadajett/agent-nestjs-skills) |
-| `fastapi` | FastAPI with Pydantic v2, async SQLAlchemy 2.0, JWT auth | [jezweb/claude-skills](https://github.com/jezweb/claude-skills) |
-| `architecture-patterns` | Clean Architecture, Hexagonal, DDD patterns | [wshobson/agents](https://github.com/wshobson/agents) |
-| `python-performance-optimization` | Profiling, memory optimization, parallelization | [wshobson/agents](https://github.com/wshobson/agents) |
+| `nestjs-best-practices` | NestJS — 40 rules, 10 categories, priority-ranked | kadajett |
+| `fastapi` | FastAPI + Pydantic v2 + async SQLAlchemy 2.0 | jezweb |
+| `architecture-patterns` | Clean Architecture, Hexagonal, DDD | wshobson |
+| `python-performance-optimization` | Profiling, memory, parallelization | wshobson |
-### Authentication
+### Authentication (5)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `better-auth-best-practices` | Better-Auth — Drizzle adapter, sessions, plugins, security | [better-auth/skills](https://github.com/better-auth/skills) |
+| `better-auth-best-practices` | Better-Auth — Drizzle, sessions, plugins, security | better-auth |
+| `create-auth-skill` | Creating custom Better-Auth skills | better-auth |
+| `email-and-password-best-practices` | Email/password auth patterns | better-auth |
+| `organization-best-practices` | Multi-org/team auth patterns | better-auth |
+| `two-factor-authentication-best-practices` | 2FA implementation patterns | better-auth |
-### AI & Agent Building
+### AI & Agent Building (7)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `ai-sdk` | Vercel AI SDK — streaming, multi-provider, agent patterns | [vercel/ai](https://github.com/vercel/ai) |
-| `create-agent` | Modular agent architecture with OpenRouter multi-model access | [openrouterteam/agent-skills](https://github.com/openrouterteam/agent-skills) |
-| `proactive-agent` | Proactive agent architecture — WAL Protocol, compaction recovery, self-improvement | [halthelobster/proactive-agent](https://github.com/halthelobster/proactive-agent) |
+| `ai-sdk` | Vercel AI SDK — streaming, multi-provider, agents | vercel/ai |
+| `create-agent` | Modular agent with OpenRouter multi-model access | openrouterteam |
+| `proactive-agent` | WAL Protocol, compaction recovery, self-improvement | halthelobster |
+| `dispatching-parallel-agents` | Launching and managing parallel subagents | obra/superpowers |
+| `subagent-driven-development` | Development workflow using subagents | obra/superpowers |
+| `executing-plans` | Executing multi-step implementation plans | obra/superpowers |
+| `using-superpowers` | Overview of the superpowers skill system | obra/superpowers |
-### Marketing & Business Development
+### Development Workflow (6)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `marketing-ideas` | 139 proven marketing ideas across 14 categories | [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) |
-| `pricing-strategy` | SaaS pricing — value metrics, tier design, research methods | [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) |
-| `programmatic-seo` | SEO at scale — templates, playbooks, internal linking | [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) |
-| `competitor-alternatives` | Competitor comparison pages — content architecture, templates | [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) |
-| `referral-program` | Referral & affiliate programs — incentives, metrics, launch | [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) |
+| `test-driven-development` | TDD Red-Green-Refactor discipline | obra/superpowers |
+| `systematic-debugging` | Structured debugging methodology | obra/superpowers |
+| `using-git-worktrees` | Git worktree patterns for parallel work | obra/superpowers |
+| `finishing-a-development-branch` | Branch cleanup, squash, merge patterns | obra/superpowers |
+| `writing-plans` | Writing effective implementation plans | obra/superpowers |
+| `brainstorming` | Structured brainstorming methodology | obra/superpowers |
-### Design & Brand
+### Document Generation (6)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `brand-guidelines` | Brand identity enforcement — fonts, colors, styling | [anthropics/skills](https://github.com/anthropics/skills) |
+| `pdf` | PDF document generation | anthropics |
+| `docx` | Word document generation | anthropics |
+| `pptx` | PowerPoint presentation generation | anthropics |
+| `xlsx` | Excel spreadsheet generation | anthropics |
+| `doc-coauthoring` | Collaborative document writing | anthropics |
+| `internal-comms` | Internal communications drafting | anthropics |
-### Meta / Skill Authoring
+### Design & Creative (7)
| Skill | Purpose | Origin |
|-------|---------|--------|
-| `writing-skills` | TDD-based methodology for creating and testing agent skills | [obra/superpowers](https://github.com/obra/superpowers) |
+| `brand-guidelines` | Brand identity enforcement | anthropics |
+| `frontend-design` | Frontend design patterns and principles | anthropics |
+| `canvas-design` | Canvas/visual design patterns | anthropics |
+| `algorithmic-art` | Generative/algorithmic art creation | anthropics |
+| `theme-factory` | Theme generation and customization | anthropics |
+| `slack-gif-creator` | Animated GIF creation for Slack | anthropics |
+| `web-artifacts-builder` | Self-contained HTML artifact building | anthropics |
-## Mosaic Stack Alignment
+### Marketing & Business (25)
-These skills are curated for the Mosaic Stack platform, which serves coding, business development, design, marketing, writing, logistics, analysis, and more. The Orchestrator and subagents can load any combination of skills based on the task at hand.
+| Skill | Purpose | Origin |
+|-------|---------|--------|
+| `marketing-ideas` | 139 ideas across 14 categories | coreyhaines31 |
+| `pricing-strategy` | SaaS pricing — value metrics, tiers, research | coreyhaines31 |
+| `programmatic-seo` | SEO at scale — templates, playbooks | coreyhaines31 |
+| `competitor-alternatives` | Competitor comparison pages | coreyhaines31 |
+| `referral-program` | Referral & affiliate programs | coreyhaines31 |
+| `seo-audit` | Comprehensive SEO audit methodology | coreyhaines31 |
+| `copywriting` | Marketing copywriting patterns | coreyhaines31 |
+| `copy-editing` | Copy editing and proofreading | coreyhaines31 |
+| `content-strategy` | Content strategy and planning | coreyhaines31 |
+| `social-content` | Social media content creation | coreyhaines31 |
+| `email-sequence` | Email sequence design and automation | coreyhaines31 |
+| `launch-strategy` | Product launch planning | coreyhaines31 |
+| `marketing-psychology` | Psychology-driven marketing | coreyhaines31 |
+| `product-marketing-context` | Product marketing positioning | coreyhaines31 |
+| `paid-ads` | Paid advertising campaigns | coreyhaines31 |
+| `schema-markup` | Schema.org structured data | coreyhaines31 |
+| `analytics-tracking` | Analytics setup and tracking | coreyhaines31 |
+| `ab-test-setup` | A/B testing methodology | coreyhaines31 |
+| `page-cro` | Landing page conversion optimization | coreyhaines31 |
+| `form-cro` | Form conversion optimization | coreyhaines31 |
+| `signup-flow-cro` | Signup flow conversion optimization | coreyhaines31 |
+| `onboarding-cro` | User onboarding optimization | coreyhaines31 |
+| `popup-cro` | Popup/modal conversion optimization | coreyhaines31 |
+| `paywall-upgrade-cro` | Paywall/upgrade conversion optimization | coreyhaines31 |
+| `free-tool-strategy` | Free tool as marketing strategy | coreyhaines31 |
-**Core tech stack:**
-- **Backend:** NestJS + TypeScript
-- **Frontend:** Next.js 15 + React 19 (App Router)
-- **Styling:** Tailwind CSS v4 + shadcn/ui
-- **Auth:** Better-Auth with Drizzle adapter
-- **Database:** PostgreSQL + Drizzle ORM
-- **CI/CD:** Woodpecker CI / Gitea
-- **Deployment:** Docker Swarm
+### Meta / Skill Authoring & Deployment (5)
-Skills are either used as-is (when framework-agnostic or matching our stack) or adapted with Mosaic Stack-specific notes where upstream assumptions differ.
+| Skill | Purpose | Origin |
+|-------|---------|--------|
+| `writing-skills` | TDD-based skill authoring methodology | obra/superpowers |
+| `skill-creator` | Anthropic's skill creation guide | anthropics |
+| `mcp-builder` | Building MCP (Model Context Protocol) servers | anthropics |
+| `webapp-testing` | Web application testing patterns | anthropics |
+| `vercel-deploy` | Vercel deployment patterns | vercel-labs |
+
+## Source Repositories
+
+| Repository | Skills | Domain Focus |
+|-----------|--------|-------------|
+| [anthropics/skills](https://github.com/anthropics/skills) | 16 | Documents, design, MCP, testing |
+| [obra/superpowers](https://github.com/obra/superpowers) | 14 | Agent workflows, TDD, code review, planning |
+| [coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills) | 25 | Marketing, CRO, SEO, growth |
+| [better-auth/skills](https://github.com/better-auth/skills) | 5 | Authentication patterns |
+| [vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills) | 5 | React, design, Vercel |
+| [vercel-labs/next-skills](https://github.com/vercel-labs/next-skills) | 1 | Next.js 15+ |
+| [vercel/ai](https://github.com/vercel/ai) | 1 | AI SDK |
+| [halthelobster/proactive-agent](https://github.com/halthelobster/proactive-agent) | 1 | Agent architecture |
+| [openrouterteam/agent-skills](https://github.com/openrouterteam/agent-skills) | 1 | Agent building |
+| [kadajett/agent-nestjs-skills](https://github.com/kadajett/agent-nestjs-skills) | 1 | NestJS |
+| [jezweb/claude-skills](https://github.com/jezweb/claude-skills) | 1 | FastAPI |
+| [wshobson/agents](https://github.com/wshobson/agents) | 3 | Architecture, Python, Tailwind |
+| [mblode/agent-skills](https://github.com/mblode/agent-skills) | 1 | UI animation |
+| [giuseppe-trisciuoglio/developer-kit](https://github.com/giuseppe-trisciuoglio/developer-kit) | 1 | shadcn/ui |
+| Adapted (Mosaic Stack) | 2 | PR review, code review |
## Installation
@@ -94,41 +164,28 @@ npx skills add https://git.mosaicstack.dev/mosaic/agent-skills.git --skill nestj
npx skills add https://git.mosaicstack.dev/mosaic/agent-skills.git --agent claude-code
# Non-interactive (CI/scripting)
-npx skills add https://git.mosaicstack.dev/mosaic/agent-skills.git --skill pr-reviewer --yes --agent claude-code
+npx skills add https://git.mosaicstack.dev/mosaic/agent-skills.git --yes --agent claude-code
```
-**Note:** The `.git` suffix on the URL is required for Gitea-hosted repos (forces git clone instead of well-known endpoint discovery).
+**Note:** The `.git` suffix on the URL is required for Gitea-hosted repos.
### Manual (symlink from local clone)
```bash
-# Clone the repo
git clone https://git.mosaicstack.dev/mosaic/agent-skills.git ~/src/agent-skills
-
-# Symlink all skills
for skill in ~/src/agent-skills/skills/*/; do
ln -sf "$skill" ~/.claude/skills/$(basename "$skill")
done
```
-### Per-Project
-
-Symlink into a project's `.claude/skills/` directory for project-specific availability.
-
-## Dependencies
-
-- `~/.claude/scripts/git/` — Platform-aware git scripts (pr-reviewer only)
-- `python3` — For review file generation (pr-reviewer only)
-
## Adapting Skills
When adding skills from the community:
1. Replace raw `gh`/`tea` calls with our `~/.claude/scripts/git/` scripts
2. Test on both GitHub and Gitea repos
-3. Remove features that don't work cross-platform
-4. Add Mosaic Stack context notes where upstream assumptions differ
-5. Document any platform-specific limitations
+3. Add Mosaic Stack context notes where upstream assumptions differ
+4. Document any platform-specific limitations
## License
diff --git a/skills/ab-test-setup/SKILL.md b/skills/ab-test-setup/SKILL.md
new file mode 100644
index 0000000..b0bc652
--- /dev/null
+++ b/skills/ab-test-setup/SKILL.md
@@ -0,0 +1,265 @@
+---
+name: ab-test-setup
+version: 1.0.0
+description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking.
+---
+
+# A/B Test Setup
+
+You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions, and ask only for information it doesn't already cover or that is specific to this task.
+
+Before designing a test, understand:
+
+1. **Test Context** - What are you trying to improve? What change are you considering?
+2. **Current State** - Baseline conversion rate? Current traffic volume?
+3. **Constraints** - Technical complexity? Timeline? Tools available?
+
+---
+
+## Core Principles
+
+### 1. Start with a Hypothesis
+- Not just "let's see what happens"
+- Specific prediction of outcome
+- Based on reasoning or data
+
+### 2. Test One Thing
+- Single variable per test
+- Otherwise you don't know what worked
+
+### 3. Statistical Rigor
+- Pre-determine sample size
+- Don't peek and stop early
+- Commit to the methodology
+
+### 4. Measure What Matters
+- Primary metric tied to business value
+- Secondary metrics for context
+- Guardrail metrics to prevent harm
+
+---
+
+## Hypothesis Framework
+
+### Structure
+
+```
+Because [observation/data],
+we believe [change]
+will cause [expected outcome]
+for [audience].
+We'll know this is true when [metrics].
+```
+
+### Example
+
+**Weak**: "Changing the button color might increase clicks."
+
+**Strong**: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using a contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."
+
+---
+
+## Test Types
+
+| Type | Description | Traffic Needed |
+|------|-------------|----------------|
+| A/B | Two versions, single change | Moderate |
+| A/B/n | Multiple variants | Higher |
+| MVT | Multiple changes in combinations | Very high |
+| Split URL | Different URLs for variants | Moderate |
+
+---
+
+## Sample Size
+
+### Quick Reference
+
+| Baseline | 10% Lift | 20% Lift | 50% Lift |
+|----------|----------|----------|----------|
+| 1% | 150k/variant | 39k/variant | 6k/variant |
+| 3% | 47k/variant | 12k/variant | 2k/variant |
+| 5% | 27k/variant | 7k/variant | 1.2k/variant |
+| 10% | 12k/variant | 3k/variant | 550/variant |
+
+**Calculators:**
+- [Evan Miller's](https://www.evanmiller.org/ab-testing/sample-size.html)
+- [Optimizely's](https://www.optimizely.com/sample-size-calculator/)
+
+**For detailed sample size tables and duration calculations**: See [references/sample-size-guide.md](references/sample-size-guide.md)
+
+---
+
+## Metrics Selection
+
+### Primary Metric
+- Single metric that matters most
+- Directly tied to hypothesis
+- What you'll use to call the test
+
+### Secondary Metrics
+- Support primary metric interpretation
+- Explain why/how the change worked
+
+### Guardrail Metrics
+- Things that shouldn't get worse
+- Stop test if significantly negative
+
+### Example: Pricing Page Test
+- **Primary**: Plan selection rate
+- **Secondary**: Time on page, plan distribution
+- **Guardrail**: Support tickets, refund rate
+
+---
+
+## Designing Variants
+
+### What to Vary
+
+| Category | Examples |
+|----------|----------|
+| Headlines/Copy | Message angle, value prop, specificity, tone |
+| Visual Design | Layout, color, images, hierarchy |
+| CTA | Button copy, size, placement, number |
+| Content | Information included, order, amount, social proof |
+
+### Best Practices
+- Single, meaningful change
+- Bold enough to make a difference
+- True to the hypothesis
+
+---
+
+## Traffic Allocation
+
+| Approach | Split | When to Use |
+|----------|-------|-------------|
+| Standard | 50/50 | Default for A/B |
+| Conservative | 90/10, 80/20 | Limit risk of bad variant |
+| Ramping | Start small, increase | Technical risk mitigation |
+
+**Considerations:**
+- Consistency: Users see same variant on return
+- Balanced exposure across time of day/week
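
The consistency requirement is usually met with deterministic hashing rather than stored random assignments. A minimal sketch of the idea (the function name and experiment key are illustrative, not any specific tool's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, control_pct: int = 50) -> str:
    """Deterministically bucket a user. The same (user_id, experiment)
    pair always maps to the same variant, so returning users see a
    consistent experience without storing any assignment state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable, roughly uniform value in 0..99
    return "control" if bucket < control_pct else "variant"
```

Keying the hash on the experiment name keeps bucketing independent across concurrent tests.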
+
+---
+
+## Implementation
+
+### Client-Side
+- JavaScript modifies page after load
+- Quick to implement, can cause flicker
+- Tools: PostHog, Optimizely, VWO
+
+### Server-Side
+- Variant determined before render
+- No flicker, requires dev work
+- Tools: PostHog, LaunchDarkly, Split
+
+---
+
+## Running the Test
+
+### Pre-Launch Checklist
+- [ ] Hypothesis documented
+- [ ] Primary metric defined
+- [ ] Sample size calculated
+- [ ] Variants implemented correctly
+- [ ] Tracking verified
+- [ ] QA completed on all variants
+
+### During the Test
+
+**DO:**
+- Monitor for technical issues
+- Check segment quality
+- Document external factors
+
+**DON'T:**
+- Peek at results and stop early
+- Make changes to variants
+- Add traffic from new sources
+
+### The Peeking Problem
+Checking results before you reach the planned sample size, and stopping as soon as they look favorable, inflates the false-positive rate and leads to wrong decisions. Pre-commit to a sample size and trust the process.
+
+---
+
+## Analyzing Results
+
+### Statistical Significance
+- 95% confidence = p-value < 0.05
+- Means <5% chance of seeing a difference this large if there were no real effect
+- Not a guarantee—just a threshold
+
+### Analysis Checklist
+
+1. **Reach sample size?** If not, result is preliminary
+2. **Statistically significant?** Check confidence intervals
+3. **Effect size meaningful?** Compare to MDE, project impact
+4. **Secondary metrics consistent?** Support the primary?
+5. **Guardrail concerns?** Anything get worse?
+6. **Segment differences?** Mobile vs. desktop? New vs. returning?
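
The significance check in step 2 of this checklist can be done with a standard two-proportion z-test. A stdlib-only sketch, assuming you have raw conversion counts per variant (the function is illustrative, not tied to any particular testing tool):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

For example, `two_proportion_p_value(100, 1000, 150, 1000)` returns a p-value well below 0.05, while identical counts return a p-value near 1.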
+
+### Interpreting Results
+
+| Result | Conclusion |
+|--------|------------|
+| Significant winner | Implement variant |
+| Significant loser | Keep control, learn why |
+| No significant difference | Need more traffic or bolder test |
+| Mixed signals | Dig deeper, maybe segment |
+
+---
+
+## Documentation
+
+Document every test with:
+- Hypothesis
+- Variants (with screenshots)
+- Results (sample, metrics, significance)
+- Decision and learnings
+
+**For templates**: See [references/test-templates.md](references/test-templates.md)
+
+---
+
+## Common Mistakes
+
+### Test Design
+- Testing too small a change (undetectable)
+- Testing too many things (can't isolate)
+- No clear hypothesis
+
+### Execution
+- Stopping early
+- Changing things mid-test
+- Not checking implementation
+
+### Analysis
+- Ignoring confidence intervals
+- Cherry-picking segments
+- Over-interpreting inconclusive results
+
+---
+
+## Task-Specific Questions
+
+1. What's your current conversion rate?
+2. How much traffic does this page get?
+3. What change are you considering and why?
+4. What's the smallest improvement worth detecting?
+5. What tools do you have for testing?
+6. Have you tested this area before?
+
+---
+
+## Related Skills
+
+- **page-cro**: For generating test ideas based on CRO principles
+- **analytics-tracking**: For setting up test measurement
+- **copywriting**: For creating variant copy
diff --git a/skills/ab-test-setup/ab-test-setup b/skills/ab-test-setup/ab-test-setup
new file mode 120000
index 0000000..4a04213
--- /dev/null
+++ b/skills/ab-test-setup/ab-test-setup
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/ab-test-setup/
\ No newline at end of file
diff --git a/skills/ab-test-setup/references/sample-size-guide.md b/skills/ab-test-setup/references/sample-size-guide.md
new file mode 100644
index 0000000..c934b02
--- /dev/null
+++ b/skills/ab-test-setup/references/sample-size-guide.md
@@ -0,0 +1,252 @@
+# Sample Size Guide
+
+Reference for calculating sample sizes and test duration.
+
+## Sample Size Fundamentals
+
+### Required Inputs
+
+1. **Baseline conversion rate**: Your current rate
+2. **Minimum detectable effect (MDE)**: Smallest change worth detecting
+3. **Statistical significance level**: Usually 95% (α = 0.05)
+4. **Statistical power**: Usually 80% (β = 0.20)
+
+### What These Mean
+
+**Baseline conversion rate**: If your page converts at 5%, that's your baseline.
+
+**MDE (Minimum Detectable Effect)**: The smallest improvement you care about detecting. Set this based on:
+- Business impact (is a 5% lift meaningful?)
+- Implementation cost (worth the effort?)
+- Realistic expectations (what have past tests shown?)
+
+**Statistical significance (95%)**: Means there's less than a 5% chance of observing a difference this large when no real effect exists.
+
+**Statistical power (80%)**: Means if there's a real effect of size MDE, you have 80% chance of detecting it.
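
These four inputs plug into the standard normal-approximation sample-size formula for comparing two proportions. A sketch using the defaults above (α = 0.05 two-sided, 80% power); published tables, including those in this guide, differ in the exact corrections applied, so expect the same order of magnitude rather than identical numbers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect `relative_lift` over `baseline`."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)
```

Note how smaller lifts demand disproportionately more traffic: halving the MDE roughly quadruples the required sample.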
+
+---
+
+## Sample Size Quick Reference Tables
+
+### Conversion Rate: 1%
+
+| Lift to Detect | Sample per Variant | Total Sample |
+|----------------|-------------------|--------------|
+| 5% (1% → 1.05%) | 1,500,000 | 3,000,000 |
+| 10% (1% → 1.1%) | 380,000 | 760,000 |
+| 20% (1% → 1.2%) | 97,000 | 194,000 |
+| 50% (1% → 1.5%) | 16,000 | 32,000 |
+| 100% (1% → 2%) | 4,200 | 8,400 |
+
+### Conversion Rate: 3%
+
+| Lift to Detect | Sample per Variant | Total Sample |
+|----------------|-------------------|--------------|
+| 5% (3% → 3.15%) | 480,000 | 960,000 |
+| 10% (3% → 3.3%) | 120,000 | 240,000 |
+| 20% (3% → 3.6%) | 31,000 | 62,000 |
+| 50% (3% → 4.5%) | 5,200 | 10,400 |
+| 100% (3% → 6%) | 1,400 | 2,800 |
+
+### Conversion Rate: 5%
+
+| Lift to Detect | Sample per Variant | Total Sample |
+|----------------|-------------------|--------------|
+| 5% (5% → 5.25%) | 280,000 | 560,000 |
+| 10% (5% → 5.5%) | 72,000 | 144,000 |
+| 20% (5% → 6%) | 18,000 | 36,000 |
+| 50% (5% → 7.5%) | 3,100 | 6,200 |
+| 100% (5% → 10%) | 810 | 1,620 |
+
+### Conversion Rate: 10%
+
+| Lift to Detect | Sample per Variant | Total Sample |
+|----------------|-------------------|--------------|
+| 5% (10% → 10.5%) | 130,000 | 260,000 |
+| 10% (10% → 11%) | 34,000 | 68,000 |
+| 20% (10% → 12%) | 8,700 | 17,400 |
+| 50% (10% → 15%) | 1,500 | 3,000 |
+| 100% (10% → 20%) | 400 | 800 |
+
+### Conversion Rate: 20%
+
+| Lift to Detect | Sample per Variant | Total Sample |
+|----------------|-------------------|--------------|
+| 5% (20% → 21%) | 60,000 | 120,000 |
+| 10% (20% → 22%) | 16,000 | 32,000 |
+| 20% (20% → 24%) | 4,000 | 8,000 |
+| 50% (20% → 30%) | 700 | 1,400 |
+| 100% (20% → 40%) | 200 | 400 |
+
+---
+
+## Duration Calculator
+
+### Formula
+
+```
+Duration (days) = (Sample per variant × Number of variants) / (Daily traffic × % exposed)
+```
+
+### Examples
+
+**Scenario 1: High-traffic page**
+- Need: 10,000 per variant (2 variants = 20,000 total)
+- Daily traffic: 5,000 visitors
+- 100% exposed to test
+- Duration: 20,000 / 5,000 = **4 days**
+
+**Scenario 2: Medium-traffic page**
+- Need: 30,000 per variant (60,000 total)
+- Daily traffic: 2,000 visitors
+- 100% exposed
+- Duration: 60,000 / 2,000 = **30 days**
+
+**Scenario 3: Low-traffic with partial exposure**
+- Need: 15,000 per variant (30,000 total)
+- Daily traffic: 500 visitors
+- 50% exposed to test
+- Effective daily: 250
+- Duration: 30,000 / 250 = **120 days** (too long!)
+
+### Minimum Duration Rules
+
+Even with sufficient sample size, run tests for at least:
+- **1 full week**: To capture day-of-week variation
+- **2 business cycles**: If B2B (weekday vs. weekend patterns)
+- **Through paydays**: If e-commerce (beginning/end of month)
+
+### Maximum Duration Guidelines
+
+Avoid running tests longer than 4-8 weeks:
+- Novelty effects wear off
+- External factors intervene
+- Opportunity cost of other tests
+
+---
+
+## Online Calculators
+
+### Recommended Tools
+
+**Evan Miller's Calculator**
+https://www.evanmiller.org/ab-testing/sample-size.html
+- Simple interface
+- Bookmark-worthy
+
+**Optimizely's Calculator**
+https://www.optimizely.com/sample-size-calculator/
+- Business-friendly language
+- Duration estimates
+
+**AB Test Guide Calculator**
+https://www.abtestguide.com/calc/
+- Includes Bayesian option
+- Multiple test types
+
+**VWO Duration Calculator**
+https://vwo.com/tools/ab-test-duration-calculator/
+- Duration-focused
+- Good for planning
+
+---
+
+## Adjusting for Multiple Variants
+
+With more than 2 variants (A/B/n tests), you need more sample:
+
+| Variants | Multiplier |
+|----------|------------|
+| 2 (A/B) | 1x |
+| 3 (A/B/C) | ~1.5x |
+| 4 (A/B/C/D) | ~2x |
+| 5+ | Consider reducing variants |
+
+**Why?** More comparisons increase chance of false positives. You're comparing:
+- A vs B
+- A vs C
+- B vs C (sometimes)
+
+Apply Bonferroni correction or use tools that handle this automatically.
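
When your tool doesn't handle this automatically, the Bonferroni correction is a simple division of α by the number of comparisons. A sketch; whether you count only variant-vs-control comparisons or all pairs is a design choice, exposed here as a flag:

```python
from math import comb

def bonferroni_alpha(num_variants: int, alpha: float = 0.05,
                     control_only: bool = True) -> float:
    """Per-comparison significance threshold after Bonferroni correction.

    control_only=True compares each variant to control only;
    False counts every pairwise comparison.
    """
    comparisons = num_variants - 1 if control_only else comb(num_variants, 2)
    return alpha / comparisons
```

For an A/B/C test against control only, each comparison must clear α = 0.025 instead of 0.05.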
+
+---
+
+## Common Sample Size Mistakes
+
+### 1. Underpowered tests
+**Problem**: Not enough sample to detect realistic effects
+**Fix**: Be realistic about MDE, get more traffic, or don't test
+
+### 2. Overpowered tests
+**Problem**: Waiting for sample size when you already have significance
+**Fix**: This is actually fine—you committed to sample size, honor it
+
+### 3. Wrong baseline rate
+**Problem**: Using wrong conversion rate for calculation
+**Fix**: Use the specific metric and page, not site-wide averages
+
+### 4. Ignoring segments
+**Problem**: Calculating for full traffic, then analyzing segments
+**Fix**: If you plan segment analysis, calculate sample for smallest segment
+
+### 5. Testing too many things
+**Problem**: Dividing traffic too many ways
+**Fix**: Prioritize ruthlessly, run fewer concurrent tests
+
+---
+
+## When Sample Size Requirements Are Too High
+
+Options when you can't get enough traffic:
+
+1. **Increase MDE**: Accept only detecting larger effects (20%+ lift)
+2. **Lower confidence**: Use 90% instead of 95% (risky, document it)
+3. **Reduce variants**: Test only the most promising variant
+4. **Combine traffic**: Test across multiple similar pages
+5. **Test upstream**: Test earlier in the funnel, where traffic is higher
+6. **Don't test**: Make decision based on qualitative data instead
+7. **Longer test**: Accept longer duration (weeks/months)
+
+---
+
+## Sequential Testing
+
+If you must check results before reaching sample size:
+
+### What is it?
+Statistical method that adjusts for multiple looks at data.
+
+### When to use
+- High-risk changes
+- Need to stop bad variants early
+- Time-sensitive decisions
+
+### Tools that support it
+- Optimizely (Stats Accelerator)
+- VWO (SmartStats)
+- PostHog (Bayesian approach)
+
+### Tradeoffs
+- More flexibility to stop early
+- Slightly larger sample size requirement
+- More complex analysis
+
+---
+
+## Quick Decision Framework
+
+### Can I run this test?
+
+```
+Daily traffic to page: _____
+Baseline conversion rate: _____
+MDE I care about: _____
+
+Sample needed per variant: _____ (from tables above)
+Days to run: Sample / Daily traffic = _____
+
+If days > 60: Consider alternatives
+If days > 30: Acceptable for high-impact tests
+If days < 14: Likely feasible
+If days < 7: Easy to run, consider running longer anyway
+```
diff --git a/skills/ab-test-setup/references/test-templates.md b/skills/ab-test-setup/references/test-templates.md
new file mode 100644
index 0000000..a504421
--- /dev/null
+++ b/skills/ab-test-setup/references/test-templates.md
@@ -0,0 +1,268 @@
+# A/B Test Templates Reference
+
+Templates for planning, documenting, and analyzing experiments.
+
+## Test Plan Template
+
+```markdown
+# A/B Test: [Name]
+
+## Overview
+- **Owner**: [Name]
+- **Test ID**: [ID in testing tool]
+- **Page/Feature**: [What's being tested]
+- **Planned dates**: [Start] - [End]
+
+## Hypothesis
+
+Because [observation/data],
+we believe [change]
+will cause [expected outcome]
+for [audience].
+We'll know this is true when [metrics].
+
+## Test Design
+
+| Element | Details |
+|---------|---------|
+| Test type | A/B / A/B/n / MVT |
+| Duration | X weeks |
+| Sample size | X per variant |
+| Traffic allocation | 50/50 |
+| Tool | [Tool name] |
+| Implementation | Client-side / Server-side |
+
+## Variants
+
+### Control (A)
+[Screenshot]
+- Current experience
+- [Key details about current state]
+
+### Variant (B)
+[Screenshot or mockup]
+- [Specific change #1]
+- [Specific change #2]
+- Rationale: [Why we think this will win]
+
+## Metrics
+
+### Primary
+- **Metric**: [metric name]
+- **Definition**: [how it's calculated]
+- **Current baseline**: [X%]
+- **Minimum detectable effect**: [X%]
+
+### Secondary
+- [Metric 1]: [what it tells us]
+- [Metric 2]: [what it tells us]
+- [Metric 3]: [what it tells us]
+
+### Guardrails
+- [Metric that shouldn't get worse]
+- [Another safety metric]
+
+## Segment Analysis Plan
+- Mobile vs. desktop
+- New vs. returning visitors
+- Traffic source
+- [Other relevant segments]
+
+## Success Criteria
+- Winner: [Primary metric improves by X% with 95% confidence]
+- Loser: [Primary metric decreases significantly]
+- Inconclusive: [What we'll do if no significant result]
+
+## Pre-Launch Checklist
+- [ ] Hypothesis documented and reviewed
+- [ ] Primary metric defined and trackable
+- [ ] Sample size calculated
+- [ ] Test duration estimated
+- [ ] Variants implemented correctly
+- [ ] Tracking verified in all variants
+- [ ] QA completed on all variants
+- [ ] Stakeholders informed
+- [ ] Calendar hold for analysis date
+```
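The "Sample size calculated" checklist item can be backed by the standard two-proportion approximation. A sketch with conventional defaults (alpha = 0.05 two-sided, 80% power - assumptions, not values from the template):

```javascript
// Approximate per-variant sample size for a two-proportion test.
// baseline and mde are absolute rates, e.g. 0.05 and 0.01 (5% -> 6%).
function sampleSizePerVariant(baseline, mde) {
  const zAlpha = 1.96;  // 95% confidence, two-sided
  const zBeta = 0.84;   // 80% power
  const p1 = baseline;
  const p2 = baseline + mde;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (mde * mde));
}
```

Note how quickly the requirement shrinks as the MDE grows: halving the detectable effect roughly quadruples the sample you need.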
+
+---
+
+## Results Documentation Template
+
+```markdown
+# A/B Test Results: [Name]
+
+## Summary
+| Element | Value |
+|---------|-------|
+| Test ID | [ID] |
+| Dates | [Start] - [End] |
+| Duration | X days |
+| Result | Winner / Loser / Inconclusive |
+| Decision | [What we're doing] |
+
+## Hypothesis (Reminder)
+[Copy from test plan]
+
+## Results
+
+### Sample Size
+| Variant | Target | Actual | % of target |
+|---------|--------|--------|-------------|
+| Control | X | Y | Z% |
+| Variant | X | Y | Z% |
+
+### Primary Metric: [Metric Name]
+| Variant | Value | 95% CI | vs. Control |
+|---------|-------|--------|-------------|
+| Control | X% | [X%, Y%] | — |
+| Variant | X% | [X%, Y%] | +X% |
+
+**Statistical significance**: p = X.XX (significant at 95% confidence: yes / no)
+**Practical significance**: [Is this lift meaningful for the business?]
+
+### Secondary Metrics
+
+| Metric | Control | Variant | Change | Significant? |
+|--------|---------|---------|--------|--------------|
+| [Metric 1] | X | Y | +Z% | Yes/No |
+| [Metric 2] | X | Y | +Z% | Yes/No |
+
+### Guardrail Metrics
+
+| Metric | Control | Variant | Change | Concern? |
+|--------|---------|---------|--------|----------|
+| [Metric 1] | X | Y | +Z% | Yes/No |
+
+### Segment Analysis
+
+**Mobile vs. Desktop**
+| Segment | Control | Variant | Lift |
+|---------|---------|---------|------|
+| Mobile | X% | Y% | +Z% |
+| Desktop | X% | Y% | +Z% |
+
+**New vs. Returning**
+| Segment | Control | Variant | Lift |
+|---------|---------|---------|------|
+| New | X% | Y% | +Z% |
+| Returning | X% | Y% | +Z% |
+
+## Interpretation
+
+### What happened?
+[Explanation of results in plain language]
+
+### Why do we think this happened?
+[Analysis and reasoning]
+
+### Caveats
+[Any limitations, external factors, or concerns]
+
+## Decision
+
+**Winner**: [Control / Variant]
+
+**Action**: [Implement variant / Keep control / Re-test]
+
+**Timeline**: [When changes will be implemented]
+
+## Learnings
+
+### What we learned
+- [Key insight 1]
+- [Key insight 2]
+
+### What to test next
+- [Follow-up test idea 1]
+- [Follow-up test idea 2]
+
+### Impact
+- **Projected lift**: [X% improvement in Y metric]
+- **Business impact**: [Revenue, conversions, etc.]
+```
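The p-value and lift cells in the tables above can be filled in with a pooled two-proportion z-test. A minimal sketch (function names are illustrative; the normal CDF uses a textbook polynomial approximation):

```javascript
// Two-proportion z-test: conversions and visitors for each arm.
// Returns relative lift, z statistic, and two-sided p-value.
function zTest(convA, nA, convB, nB) {
  const pA = convA / nA, pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  const p = 2 * (1 - normCdf(Math.abs(z)));
  return { lift: (pB - pA) / pA, z, p };
}

// Abramowitz & Stegun polynomial approximation of the standard normal CDF.
function normCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp(-x * x / 2);
  const tail = d * t * (0.3193815 + t * (-0.3565638 +
    t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - tail : tail;
}
```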
+
+---
+
+## Test Repository Entry Template
+
+For tracking all tests in a central location:
+
+```markdown
+| Test ID | Name | Page | Dates | Primary Metric | Result | Lift | Link |
+|---------|------|------|-------|----------------|--------|------|------|
+| 001 | Hero headline test | Homepage | 1/1-1/15 | CTR | Winner | +12% | [Link] |
+| 002 | Pricing table layout | Pricing | 1/10-1/31 | Plan selection | Loser | -5% | [Link] |
+| 003 | Signup form fields | Signup | 2/1-2/14 | Completion | Inconclusive | +2% | [Link] |
+```
+
+---
+
+## Quick Test Brief Template
+
+For simple tests that don't need full documentation:
+
+```markdown
+## [Test Name]
+
+**What**: [One sentence description]
+**Why**: [One sentence hypothesis]
+**Metric**: [Primary metric]
+**Duration**: [X weeks]
+**Result**: [TBD / Winner / Loser / Inconclusive]
+**Learnings**: [Key takeaway]
+```
+
+---
+
+## Stakeholder Update Template
+
+```markdown
+## A/B Test Update: [Name]
+
+**Status**: Running / Complete
+**Days remaining**: X (or complete)
+**Current sample**: X% of target
+
+### Preliminary observations
+[What we're seeing - without making decisions yet]
+
+### Next steps
+[What happens next]
+
+### Timeline
+- [Date]: Analysis complete
+- [Date]: Decision and recommendation
+- [Date]: Implementation (if winner)
+```
+
+---
+
+## Experiment Prioritization Scorecard
+
+For deciding which tests to run:
+
+| Factor | Weight | Test A | Test B | Test C |
+|--------|--------|--------|--------|--------|
+| Potential impact | 30% | | | |
+| Confidence in hypothesis | 25% | | | |
+| Ease of implementation | 20% | | | |
+| Risk if wrong | 15% | | | |
+| Strategic alignment | 10% | | | |
+| **Total** | | | | |
+
+Scoring: 1-5 (5 = best)
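A hypothetical helper for computing the weighted total (the factor keys are shorthand for the rows above):

```javascript
// Weighted total for the prioritization scorecard; scores are 1-5 per factor.
const WEIGHTS = { impact: 0.30, confidence: 0.25, ease: 0.20, risk: 0.15, alignment: 0.10 };

function priorityScore(scores) {
  return Object.entries(WEIGHTS)
    .reduce((total, [factor, weight]) => total + weight * scores[factor], 0);
}
```

A test scoring 5 on every factor lands at the maximum of 5; anything below that is directly comparable across tests.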
+
+---
+
+## Hypothesis Bank Template
+
+For collecting test ideas:
+
+```markdown
+| ID | Page/Area | Observation | Hypothesis | Potential Impact | Status |
+|----|-----------|-------------|------------|------------------|--------|
+| H1 | Homepage | Low scroll depth | Shorter hero will increase scroll | High | Testing |
+| H2 | Pricing | Users compare plans | Comparison table will help | Medium | Backlog |
+| H3 | Signup | Drop-off at email | Social login will increase completion | Medium | Backlog |
+```
diff --git a/skills/ai-sdk/ai-sdk b/skills/ai-sdk/ai-sdk
new file mode 120000
index 0000000..5126cb2
--- /dev/null
+++ b/skills/ai-sdk/ai-sdk
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/ai-sdk/
\ No newline at end of file
diff --git a/skills/algorithmic-art/LICENSE.txt b/skills/algorithmic-art/LICENSE.txt
new file mode 100644
index 0000000..7a4a3ea
--- /dev/null
+++ b/skills/algorithmic-art/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/skills/algorithmic-art/SKILL.md b/skills/algorithmic-art/SKILL.md
new file mode 100644
index 0000000..634f6fa
--- /dev/null
+++ b/skills/algorithmic-art/SKILL.md
@@ -0,0 +1,405 @@
+---
+name: algorithmic-art
+description: Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.
+license: Complete terms in LICENSE.txt
+---
+
+Algorithmic philosophies are computational aesthetic movements expressed through code. This skill outputs .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).
+
+This happens in two steps:
+1. Algorithmic Philosophy Creation (.md file)
+2. Express by creating p5.js generative art (.html + .js files)
+
+First, undertake this task:
+
+## ALGORITHMIC PHILOSOPHY CREATION
+
+To begin, create an ALGORITHMIC PHILOSOPHY (not static images or templates) that will be interpreted through:
+- Computational processes, emergent behavior, mathematical beauty
+- Seeded randomness, noise fields, organic systems
+- Particles, flows, fields, forces
+- Parametric variation and controlled chaos
+
+### THE CRITICAL UNDERSTANDING
+- What is received: Subtle input or instructions from the user. Take it into account and use it as a foundation, but don't let it constrain creative freedom.
+- What is created: An algorithmic philosophy/generative aesthetic movement.
+- What happens next: The next phase receives the philosophy and EXPRESSES IT IN CODE - creating p5.js sketches that are 90% algorithmic generation, 10% essential parameters.
+
+Consider this approach:
+- Write a manifesto for a generative art movement
+- The next phase involves writing the algorithm that brings it to life
+
+The philosophy must emphasize: Algorithmic expression. Emergent behavior. Computational beauty. Seeded variation.
+
+### HOW TO GENERATE AN ALGORITHMIC PHILOSOPHY
+
+**Name the movement** (1-2 words): "Organic Turbulence" / "Quantum Harmonics" / "Emergent Stillness"
+
+**Articulate the philosophy** (4-6 paragraphs - concise but complete):
+
+To capture the ALGORITHMIC essence, express how this philosophy manifests through:
+- Computational processes and mathematical relationships
+- Noise functions and randomness patterns
+- Particle behaviors and field dynamics
+- Temporal evolution and system states
+- Parametric variation and emergent complexity
+
+**CRITICAL GUIDELINES:**
+- **Avoid redundancy**: Each algorithmic aspect should be mentioned once. Avoid repeating concepts about noise theory, particle dynamics, or mathematical principles unless adding new depth.
+- **Emphasize craftsmanship REPEATEDLY**: The philosophy MUST stress multiple times that the final algorithm should appear as though it took countless hours to develop, was refined with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted algorithm," "the product of deep computational expertise," "painstaking optimization," "master-level implementation."
+- **Leave creative space**: Be specific about the algorithmic direction, but concise enough that the next Claude has room to make interpretive implementation choices at an extremely high level of craftsmanship.
+
+The philosophy must guide the next version to express ideas ALGORITHMICALLY, not through static images. Beauty lives in the process, not the final frame.
+
+### PHILOSOPHY EXAMPLES
+
+**"Organic Turbulence"**
+Philosophy: Chaos constrained by natural law, order emerging from disorder.
+Algorithmic expression: Flow fields driven by layered Perlin noise. Thousands of particles following vector forces, their trails accumulating into organic density maps. Multiple noise octaves create turbulent regions and calm zones. Color emerges from velocity and density - fast particles burn bright, slow ones fade to shadow. The algorithm runs until equilibrium - a meticulously tuned balance where every parameter was refined through countless iterations by a master of computational aesthetics.
+
+**"Quantum Harmonics"**
+Philosophy: Discrete entities exhibiting wave-like interference patterns.
+Algorithmic expression: Particles initialized on a grid, each carrying a phase value that evolves through sine waves. When particles are near, their phases interfere - constructive interference creates bright nodes, destructive creates voids. Simple harmonic motion generates complex emergent mandalas. The result of painstaking frequency calibration where every ratio was carefully chosen to produce resonant beauty.
+
+**"Recursive Whispers"**
+Philosophy: Self-similarity across scales, infinite depth in finite space.
+Algorithmic expression: Branching structures that subdivide recursively. Each branch slightly randomized but constrained by golden ratios. L-systems or recursive subdivision generate tree-like forms that feel both mathematical and organic. Subtle noise perturbations break perfect symmetry. Line weights diminish with each recursion level. Every branching angle the product of deep mathematical exploration.
+
+**"Field Dynamics"**
+Philosophy: Invisible forces made visible through their effects on matter.
+Algorithmic expression: Vector fields constructed from mathematical functions or noise. Particles born at edges, flowing along field lines, dying when they reach equilibrium or boundaries. Multiple fields can attract, repel, or rotate particles. The visualization shows only the traces - ghost-like evidence of invisible forces. A computational dance meticulously choreographed through force balance.
+
+**"Stochastic Crystallization"**
+Philosophy: Random processes crystallizing into ordered structures.
+Algorithmic expression: Randomized circle packing or Voronoi tessellation. Start with random points, let them evolve through relaxation algorithms. Cells push apart until equilibrium. Color based on cell size, neighbor count, or distance from center. The organic tiling that emerges feels both random and inevitable. Every seed produces unique crystalline beauty - the mark of a master-level generative algorithm.
+
+*These are condensed examples. The actual algorithmic philosophy should be 4-6 substantial paragraphs.*
+
+### ESSENTIAL PRINCIPLES
+- **ALGORITHMIC PHILOSOPHY**: Creating a computational worldview to be expressed through code
+- **PROCESS OVER PRODUCT**: Always emphasize that beauty emerges from the algorithm's execution - each run is unique
+- **PARAMETRIC EXPRESSION**: Ideas communicate through mathematical relationships, forces, behaviors - not static composition
+- **ARTISTIC FREEDOM**: The next Claude interprets the philosophy algorithmically - provide creative implementation room
+- **PURE GENERATIVE ART**: This is about making LIVING ALGORITHMS, not static images with randomness
+- **EXPERT CRAFTSMANSHIP**: Repeatedly emphasize the final algorithm must feel meticulously crafted, refined through countless iterations, the product of deep expertise by someone at the absolute top of their field in computational aesthetics
+
+**The algorithmic philosophy should be 4-6 paragraphs long.** Fill it with poetic computational philosophy that brings together the intended vision. Avoid repeating the same points. Output this algorithmic philosophy as a .md file.
+
+---
+
+## DEDUCING THE CONCEPTUAL SEED
+
+**CRITICAL STEP**: Before implementing the algorithm, identify the subtle conceptual thread from the original request.
+
+**THE ESSENTIAL PRINCIPLE**:
+The concept is a **subtle, niche reference embedded within the algorithm itself** - not always literal, always sophisticated. Someone familiar with the subject should feel it intuitively, while others simply experience a masterful generative composition. The algorithmic philosophy provides the computational language. The deduced concept provides the soul - the quiet conceptual DNA woven invisibly into parameters, behaviors, and emergence patterns.
+
+This is **VERY IMPORTANT**: The reference must be so refined that it enhances the work's depth without announcing itself. Think like a jazz musician quoting another song through algorithmic harmony - only those who know will catch it, but everyone appreciates the generative beauty.
+
+---
+
+## P5.JS IMPLEMENTATION
+
+With the philosophy AND conceptual framework established, express it through code. Pause to gather thoughts before proceeding. Use only the algorithmic philosophy created and the instructions below.
+
+### ⚠️ STEP 0: READ THE TEMPLATE FIRST ⚠️
+
+**CRITICAL: BEFORE writing any HTML:**
+
+1. **Read** `templates/viewer.html` using the Read tool
+2. **Study** the exact structure, styling, and Anthropic branding
+3. **Use that file as the LITERAL STARTING POINT** - not just inspiration
+4. **Keep all FIXED sections exactly as shown** (header, sidebar structure, Anthropic colors/fonts, seed controls, action buttons)
+5. **Replace only the VARIABLE sections** marked in the file's comments (algorithm, parameters, UI controls for parameters)
+
+**Avoid:**
+- ❌ Creating HTML from scratch
+- ❌ Inventing custom styling or color schemes
+- ❌ Using system fonts or dark themes
+- ❌ Changing the sidebar structure
+
+**Follow these practices:**
+- ✅ Copy the template's exact HTML structure
+- ✅ Keep Anthropic branding (Poppins/Lora fonts, light colors, gradient backdrop)
+- ✅ Maintain the sidebar layout (Seed → Parameters → Colors? → Actions)
+- ✅ Replace only the p5.js algorithm and parameter controls
+
+The template is the foundation. Build on it, don't rebuild it.
+
+---
+
+To create gallery-quality computational art that lives and breathes, use the algorithmic philosophy as the foundation.
+
+### TECHNICAL REQUIREMENTS
+
+**Seeded Randomness (Art Blocks Pattern)**:
+```javascript
+// ALWAYS use a seed for reproducibility
+let seed = 12345; // or hash from user input
+randomSeed(seed);
+noiseSeed(seed);
+```
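The "hash from user input" comment can be implemented with any stable string hash. One sketch (FNV-1a is an assumption - any deterministic hash that yields a 32-bit integer works):

```javascript
// Derive a reproducible numeric seed from user-provided text (FNV-1a hash).
function seedFromString(str) {
  let h = 2166136261;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // non-negative 32-bit seed for randomSeed()/noiseSeed()
}
```

The same input always yields the same seed, so "the ocean piece" is re-generatable on request.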
+
+**Parameter Structure - FOLLOW THE PHILOSOPHY**:
+
+To establish parameters that emerge naturally from the algorithmic philosophy, consider: "What qualities of this system can be adjusted?"
+
+```javascript
+let params = {
+ seed: 12345, // Always include seed for reproducibility
+ // colors
+ // Add parameters that control YOUR algorithm:
+ // - Quantities (how many?)
+ // - Scales (how big? how fast?)
+ // - Probabilities (how likely?)
+ // - Ratios (what proportions?)
+ // - Angles (what direction?)
+ // - Thresholds (when does behavior change?)
+};
+```
+
+**To design effective parameters, focus on which properties of the system need to be tunable rather than thinking in terms of "pattern types".**
+
+**Core Algorithm - EXPRESS THE PHILOSOPHY**:
+
+**CRITICAL**: The algorithmic philosophy should dictate what to build.
+
+To express the philosophy through code, avoid asking "which pattern should I use?" and instead ask "how can I express this philosophy through code?"
+
+If the philosophy is about **organic emergence**, consider using:
+- Elements that accumulate or grow over time
+- Random processes constrained by natural rules
+- Feedback loops and interactions
+
+If the philosophy is about **mathematical beauty**, consider using:
+- Geometric relationships and ratios
+- Trigonometric functions and harmonics
+- Precise calculations creating unexpected patterns
+
+If the philosophy is about **controlled chaos**, consider using:
+- Random variation within strict boundaries
+- Bifurcation and phase transitions
+- Order emerging from disorder
+
+**The algorithm flows from the philosophy, not from a menu of options.**
+
+To guide the implementation, let the conceptual essence inform creative and original choices. Build something that expresses the vision for this particular request.
+
+**Canvas Setup**: Standard p5.js structure:
+```javascript
+function setup() {
+ createCanvas(1200, 1200);
+ // Initialize your system
+}
+
+function draw() {
+ // Your generative algorithm
+ // Can be static (noLoop) or animated
+}
+```
+
+### CRAFTSMANSHIP REQUIREMENTS
+
+**CRITICAL**: To achieve mastery, create algorithms that feel like they emerged through countless iterations by a master generative artist. Tune every parameter carefully. Ensure every pattern emerges with purpose. This is NOT random noise - this is CONTROLLED CHAOS refined through deep expertise.
+
+- **Balance**: Complexity without visual noise, order without rigidity
+- **Color Harmony**: Thoughtful palettes, not random RGB values
+- **Composition**: Even in randomness, maintain visual hierarchy and flow
+- **Performance**: Smooth execution, optimized for real-time if animated
+- **Reproducibility**: Same seed ALWAYS produces identical output
+
+### OUTPUT FORMAT
+
+Output:
+1. **Algorithmic Philosophy** - As markdown or text explaining the generative aesthetic
+2. **Single HTML Artifact** - Self-contained interactive generative art built from `templates/viewer.html` (see STEP 0 and next section)
+
+The HTML artifact contains everything: p5.js (from CDN), the algorithm, parameter controls, and UI - all in one file that works immediately in claude.ai artifacts or any browser. Start from the template file, not from scratch.
+
+---
+
+## INTERACTIVE ARTIFACT CREATION
+
+**REMINDER: `templates/viewer.html` should have already been read (see STEP 0). Use that file as the starting point.**
+
+To allow exploration of the generative art, create a single, self-contained HTML artifact. Ensure this artifact works immediately in claude.ai or any browser - no setup required. Embed everything inline.
+
+### CRITICAL: WHAT'S FIXED VS VARIABLE
+
+The `templates/viewer.html` file is the foundation. It contains the exact structure and styling needed.
+
+**FIXED (always include exactly as shown):**
+- Layout structure (header, sidebar, main canvas area)
+- Anthropic branding (UI colors, fonts, gradients)
+- Seed section in sidebar:
+ - Seed display
+ - Previous/Next buttons
+ - Random button
+ - Jump to seed input + Go button
+- Actions section in sidebar:
+ - Regenerate button
+ - Reset button
+
+**VARIABLE (customize for each artwork):**
+- The entire p5.js algorithm (setup/draw/classes)
+- The parameters object (define what the art needs)
+- The Parameters section in sidebar:
+ - Number of parameter controls
+ - Parameter names
+ - Min/max/step values for sliders
+ - Control types (sliders, inputs, etc.)
+- Colors section (optional):
+ - Some art needs color pickers
+ - Some art might use fixed colors
+ - Some art might be monochrome (no color controls needed)
+ - Decide based on the art's needs
+
+**Every artwork should have unique parameters and algorithm!** The fixed parts provide consistent UX - everything else expresses the unique vision.
+
+### REQUIRED FEATURES
+
+**1. Parameter Controls**
+- Sliders for numeric parameters (particle count, noise scale, speed, etc.)
+- Color pickers for palette colors
+- Real-time updates when parameters change
+- Reset button to restore defaults
+
+**2. Seed Navigation**
+- Display current seed number
+- "Previous" and "Next" buttons to cycle through seeds
+- "Random" button for random seed
+- Input field to jump to specific seed
+- Generate 100 variations when requested (seeds 1-100)
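The navigation logic itself reduces to a few pure functions; a sketch assuming seeds wrap within the 1-100 range (the wrap-around and function names are assumptions, not template requirements):

```javascript
// Seed navigation helpers. Wiring them to p5's randomSeed()/noiseSeed()
// and the sidebar buttons happens inside the artifact's inline script.
function nextSeed(seed) { return seed >= 100 ? 1 : seed + 1; }
function prevSeed(seed) { return seed <= 1 ? 100 : seed - 1; }
function randomSeed100() { return 1 + Math.floor(Math.random() * 100); }
```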
+
+**3. Single Artifact Structure**
+```html
+<!-- Outline only - templates/viewer.html is the actual starting point -->
+<!DOCTYPE html>
+<html>
+<head>
+  <!-- p5.js from CDN - the only external dependency -->
+  <script src="https://cdn.jsdelivr.net/npm/p5/lib/p5.min.js"></script>
+  <style>/* Anthropic-branded UI styles, inline */</style>
+</head>
+<body>
+  <header><!-- title --></header>
+  <aside><!-- sidebar: seed controls, parameter controls, actions --></aside>
+  <main><!-- p5.js canvas mounts here --></main>
+  <script>
+    // params object, setup(), draw() - the full algorithm, inline
+  </script>
+</body>
+</html>
+```
+
+**CRITICAL**: This is a single artifact. No external files, no imports (except p5.js CDN). Everything inline.
+
+**4. Implementation Details - BUILD THE SIDEBAR**
+
+The sidebar structure:
+
+**1. Seed (FIXED)** - Always include exactly as shown:
+- Seed display
+- Prev/Next/Random/Jump buttons
+
+**2. Parameters (VARIABLE)** - Create controls for the art:
+```html
+<div class="control-group">
+  <label>Parameter Name</label>
+  <input type="range" min="0" max="100" step="1">
+  <!-- ...value display, or other control types as needed -->
+</div>
+```
+Add as many control-group divs as there are parameters.
+
+**3. Colors (OPTIONAL/VARIABLE)** - Include if the art needs adjustable colors:
+- Add color pickers if users should control palette
+- Skip this section if the art uses fixed colors
+- Skip if the art is monochrome
+
+**4. Actions (FIXED)** - Always include exactly as shown:
+- Regenerate button
+- Reset button
+- Download PNG button
+
+**Requirements**:
+- Seed controls must work (prev/next/random/jump/display)
+- All parameters must have UI controls
+- Regenerate, Reset, Download buttons must work
+- Keep Anthropic branding (UI styling, not art colors)
+
+### USING THE ARTIFACT
+
+The HTML artifact works immediately:
+1. **In claude.ai**: Displayed as an interactive artifact - runs instantly
+2. **As a file**: Save and open in any browser - no server needed
+3. **Sharing**: Send the HTML file - it's completely self-contained
+
+---
+
+## VARIATIONS & EXPLORATION
+
+The artifact includes seed navigation by default (prev/next/random buttons), allowing users to explore variations without creating multiple files. If the user wants specific variations highlighted:
+
+- Include seed presets (buttons for "Variation 1: Seed 42", "Variation 2: Seed 127", etc.)
+- Add a "Gallery Mode" that shows thumbnails of multiple seeds side-by-side
+- All within the same single artifact
+
+This is like creating a series of prints from the same plate - the algorithm is consistent, but each seed reveals different facets of its potential. The interactive nature means users discover their own favorites by exploring the seed space.
+
+---
+
+## THE CREATIVE PROCESS
+
+**User request** → **Algorithmic philosophy** → **Implementation**
+
+Each request is unique. The process involves:
+
+1. **Interpret the user's intent** - What aesthetic is being sought?
+2. **Create an algorithmic philosophy** (4-6 paragraphs) describing the computational approach
+3. **Implement it in code** - Build the algorithm that expresses this philosophy
+4. **Design appropriate parameters** - What should be tunable?
+5. **Build matching UI controls** - Sliders/inputs for those parameters
+
+**The constants**:
+- Anthropic branding (colors, fonts, layout)
+- Seed navigation (always present)
+- Self-contained HTML artifact
+
+**Everything else is variable**:
+- The algorithm itself
+- The parameters
+- The UI controls
+- The visual outcome
+
+To achieve the best results, trust creativity and let the philosophy guide the implementation.
+
+---
+
+## RESOURCES
+
+This skill includes helpful templates and documentation:
+
+- **templates/viewer.html**: REQUIRED STARTING POINT for all HTML artifacts.
+ - This is the foundation - contains the exact structure and Anthropic branding
+ - **Keep unchanged**: Layout structure, sidebar organization, Anthropic colors/fonts, seed controls, action buttons
+ - **Replace**: The p5.js algorithm, parameter definitions, and UI controls in Parameters section
+ - The extensive comments in the file mark exactly what to keep vs replace
+
+- **templates/generator_template.js**: Reference for p5.js best practices and code structure principles.
+ - Shows how to organize parameters, use seeded randomness, structure classes
+ - NOT a pattern menu - use these principles to build unique algorithms
+ - Embed algorithms inline in the HTML artifact (don't create separate .js files)
+
+**Critical reminder**:
+- The **template is the STARTING POINT**, not inspiration
+- The **algorithm is where to create** something unique
+- Don't copy the flow field example - build what the philosophy demands
+- But DO keep the exact UI structure and Anthropic branding from the template
\ No newline at end of file
diff --git a/skills/algorithmic-art/algorithmic-art b/skills/algorithmic-art/algorithmic-art
new file mode 120000
index 0000000..7b4c9a1
--- /dev/null
+++ b/skills/algorithmic-art/algorithmic-art
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/algorithmic-art/
\ No newline at end of file
diff --git a/skills/algorithmic-art/templates/generator_template.js b/skills/algorithmic-art/templates/generator_template.js
new file mode 100644
index 0000000..e263fbd
--- /dev/null
+++ b/skills/algorithmic-art/templates/generator_template.js
@@ -0,0 +1,223 @@
+/**
+ * ═══════════════════════════════════════════════════════════════════════════
+ * P5.JS GENERATIVE ART - BEST PRACTICES
+ * ═══════════════════════════════════════════════════════════════════════════
+ *
+ * This file shows STRUCTURE and PRINCIPLES for p5.js generative art.
+ * It does NOT prescribe what art you should create.
+ *
+ * Your algorithmic philosophy should guide what you build.
+ * These are just best practices for how to structure your code.
+ *
+ * ═══════════════════════════════════════════════════════════════════════════
+ */
+
+// ============================================================================
+// 1. PARAMETER ORGANIZATION
+// ============================================================================
+// Keep all tunable parameters in one object
+// This makes it easy to:
+// - Connect to UI controls
+// - Reset to defaults
+// - Serialize/save configurations
+
+let params = {
+ // Define parameters that match YOUR algorithm
+ // Examples (customize for your art):
+ // - Counts: how many elements (particles, circles, branches, etc.)
+ // - Scales: size, speed, spacing
+ // - Probabilities: likelihood of events
+ // - Angles: rotation, direction
+ // - Colors: palette arrays
+
+ seed: 12345,
+ // colorPalette: define as an array of hex colors, e.g. ['#d97757', '#6a9bcc', '#788c5d', '#b0aea5']
+ // Add YOUR parameters here based on your algorithm
+};
+
+// ============================================================================
+// 2. SEEDED RANDOMNESS (Critical for reproducibility)
+// ============================================================================
+// ALWAYS use seeded random for Art Blocks-style reproducible output
+
+function initializeSeed(seed) {
+ randomSeed(seed);
+ noiseSeed(seed);
+ // Now all random() and noise() calls will be deterministic
+}
+
+// ============================================================================
+// 3. P5.JS LIFECYCLE
+// ============================================================================
+
+function setup() {
+ createCanvas(800, 800);
+
+ // Initialize seed first
+ initializeSeed(params.seed);
+
+ // Set up your generative system
+ // This is where you initialize:
+ // - Arrays of objects
+ // - Grid structures
+ // - Initial positions
+ // - Starting states
+
+ // For static art: call noLoop() at the end of setup
+ // For animated art: let draw() keep running
+}
+
+function draw() {
+ // Option 1: Static generation (runs once, then stops)
+ // - Generate everything in setup()
+ // - Call noLoop() in setup()
+ // - draw() doesn't do much or can be empty
+
+ // Option 2: Animated generation (continuous)
+ // - Update your system each frame
+ // - Common patterns: particle movement, growth, evolution
+ // - Can optionally call noLoop() after N frames
+
+ // Option 3: User-triggered regeneration
+ // - Use noLoop() by default
+ // - Call redraw() when parameters change
+}
+
+// ============================================================================
+// 4. CLASS STRUCTURE (When you need objects)
+// ============================================================================
+// Use classes when your algorithm involves multiple entities
+// Examples: particles, agents, cells, nodes, etc.
+
+class Entity {
+ constructor() {
+ // Initialize entity properties
+ // Use random() here - it will be seeded
+ }
+
+ update() {
+ // Update entity state
+ // This might involve:
+ // - Physics calculations
+ // - Behavioral rules
+ // - Interactions with neighbors
+ }
+
+ display() {
+ // Render the entity
+ // Keep rendering logic separate from update logic
+ }
+}
+
+// ============================================================================
+// 5. PERFORMANCE CONSIDERATIONS
+// ============================================================================
+
+// For large numbers of elements:
+// - Pre-calculate what you can
+// - Use simple collision detection (spatial hashing if needed)
+// - Limit expensive operations (sqrt, trig) when possible
+// - Consider using p5 vectors efficiently
+
+// For smooth animation:
+// - Aim for 60fps
+// - Profile if things are slow
+// - Consider reducing particle counts or simplifying calculations
+
+// ============================================================================
+// 6. UTILITY FUNCTIONS
+// ============================================================================
+
+// Color utilities
+function hexToRgb(hex) {
+ const result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
+ return result ? {
+ r: parseInt(result[1], 16),
+ g: parseInt(result[2], 16),
+ b: parseInt(result[3], 16)
+ } : null;
+}
+
+function colorFromPalette(index) {
+ return params.colorPalette[index % params.colorPalette.length];
+}
+
+// Mapping and easing
+function mapRange(value, inMin, inMax, outMin, outMax) {
+ return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
+}
+
+function easeInOutCubic(t) {
+ return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
+}
+
+// Constrain to bounds (continuous wrap that preserves overshoot)
+function wrapAround(value, max) {
+ if (value < 0) return value + max;
+ if (value > max) return value - max;
+ return value;
+}
+
+// ============================================================================
+// 7. PARAMETER UPDATES (Connect to UI)
+// ============================================================================
+
+function updateParameter(paramName, value) {
+ params[paramName] = value;
+ // Decide if you need to regenerate or just update
+ // Some params can update in real-time, others need full regeneration
+}
+
+function regenerate() {
+ // Reinitialize your generative system
+ // Useful when parameters change significantly
+ initializeSeed(params.seed);
+ // Then regenerate your system
+}
+
+// ============================================================================
+// 8. COMMON P5.JS PATTERNS
+// ============================================================================
+
+// Drawing with transparency for trails/fading
+function fadeBackground(opacity) {
+ fill(250, 249, 245, opacity); // Anthropic light with alpha
+ noStroke();
+ rect(0, 0, width, height);
+}
+
+// Using noise for organic variation
+function getNoiseValue(x, y, scale = 0.01) {
+ return noise(x * scale, y * scale);
+}
+
+// Creating vectors from angles
+function vectorFromAngle(angle, magnitude = 1) {
+ return createVector(cos(angle), sin(angle)).mult(magnitude);
+}
+
+// ============================================================================
+// 9. EXPORT FUNCTIONS
+// ============================================================================
+
+function exportImage() {
+ saveCanvas('generative-art-' + params.seed, 'png');
+}
+
+// ============================================================================
+// REMEMBER
+// ============================================================================
+//
+// These are TOOLS and PRINCIPLES, not a recipe.
+// Your algorithmic philosophy should guide WHAT you create.
+// This structure helps you create it WELL.
+//
+// Focus on:
+// - Clean, readable code
+// - Parameterized for exploration
+// - Seeded for reproducibility
+// - Performant execution
+//
+// The art itself is entirely up to you!
+//
+// ============================================================================
\ No newline at end of file
diff --git a/skills/algorithmic-art/templates/viewer.html b/skills/algorithmic-art/templates/viewer.html
new file mode 100644
index 0000000..630cc1f
--- /dev/null
+++ b/skills/algorithmic-art/templates/viewer.html
@@ -0,0 +1,599 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+  <meta charset="UTF-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+  <title>Generative Art Viewer</title>
+  <!-- p5.js CDN script, inline styles (Anthropic branding), sidebar markup,
+       and the template p5.js sketch are all embedded inline in the full file -->
+</head>
+<body>
+  <div>Initializing generative art...</div>
+</body>
+</html>
\ No newline at end of file
diff --git a/skills/analytics-tracking/SKILL.md b/skills/analytics-tracking/SKILL.md
new file mode 100644
index 0000000..9d8e0a0
--- /dev/null
+++ b/skills/analytics-tracking/SKILL.md
@@ -0,0 +1,307 @@
+---
+name: analytics-tracking
+version: 1.0.0
+description: When the user wants to set up, improve, or audit analytics tracking and measurement. Also use when the user mentions "set up tracking," "GA4," "Google Analytics," "conversion tracking," "event tracking," "UTM parameters," "tag manager," "GTM," "analytics implementation," or "tracking plan." For A/B test measurement, see ab-test-setup.
+---
+
+# Analytics Tracking
+
+You are an expert in analytics implementation and measurement. Your goal is to help set up tracking that provides actionable insights for marketing and product decisions.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before implementing tracking, understand:
+
+1. **Business Context** - What decisions will this data inform? What are key conversions?
+2. **Current State** - What tracking exists? What tools are in use?
+3. **Technical Context** - What's the tech stack? Any privacy/compliance requirements?
+
+---
+
+## Core Principles
+
+### 1. Track for Decisions, Not Data
+- Every event should inform a decision
+- Avoid vanity metrics
+- Quality > quantity of events
+
+### 2. Start with the Questions
+- What do you need to know?
+- What actions will you take based on this data?
+- Work backwards to what you need to track
+
+### 3. Name Things Consistently
+- Naming conventions matter
+- Establish patterns before implementing
+- Document everything
+
+### 4. Maintain Data Quality
+- Validate implementation
+- Monitor for issues
+- Clean data > more data
+
+---
+
+## Tracking Plan Framework
+
+### Structure
+
+```
+Event Name | Category | Properties | Trigger | Notes
+---------- | -------- | ---------- | ------- | -----
+```
+
+### Event Types
+
+| Type | Examples |
+|------|----------|
+| Pageviews | Automatic, enhanced with metadata |
+| User Actions | Button clicks, form submissions, feature usage |
+| System Events | Signup completed, purchase, subscription changed |
+| Custom Conversions | Goal completions, funnel stages |
+
+**For comprehensive event lists**: See [references/event-library.md](references/event-library.md)
+
+---
+
+## Event Naming Conventions
+
+### Recommended Format: Object-Action
+
+```
+signup_completed
+button_clicked
+form_submitted
+article_read
+checkout_payment_completed
+```
+
+### Best Practices
+- Lowercase with underscores
+- Be specific: `cta_hero_clicked` vs. `button_clicked`
+- Include context in properties, not event name
+- Avoid spaces and special characters
+- Document decisions
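These conventions are easy to enforce mechanically; a minimal lint helper (our own sketch, not part of any analytics SDK):

```javascript
// Returns true for names like signup_completed / cta_hero_clicked;
// rejects spaces, uppercase, hyphens, and doubled underscores
function isValidEventName(name) {
  return /^[a-z][a-z0-9]*(_[a-z0-9]+)*$/.test(name);
}
```

Running this over a tracking plan before implementation catches naming drift early.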
+
+---
+
+## Essential Events
+
+### Marketing Site
+
+| Event | Properties |
+|-------|------------|
+| cta_clicked | button_text, location |
+| form_submitted | form_type |
+| signup_completed | method, source |
+| demo_requested | - |
+
+### Product/App
+
+| Event | Properties |
+|-------|------------|
+| onboarding_step_completed | step_number, step_name |
+| feature_used | feature_name |
+| purchase_completed | plan, value |
+| subscription_cancelled | reason |
+
+**For full event library by business type**: See [references/event-library.md](references/event-library.md)
+
+---
+
+## Event Properties
+
+### Standard Properties
+
+| Category | Properties |
+|----------|------------|
+| Page | page_title, page_location, page_referrer |
+| User | user_id, user_type, account_id, plan_type |
+| Campaign | source, medium, campaign, content, term |
+| Product | product_id, product_name, category, price |
+
+### Best Practices
+- Use consistent property names
+- Include relevant context
+- Don't duplicate automatic properties
+- Avoid PII in properties
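The PII rule can be enforced with a simple property filter before anything is sent (the key list is illustrative; extend it to match your own data policy):

```javascript
// Keys to drop before properties leave the site (illustrative list)
const PII_KEYS = ['email', 'phone', 'name', 'address', 'ip'];

function stripPii(properties) {
  return Object.fromEntries(
    Object.entries(properties).filter(([key]) => !PII_KEYS.includes(key.toLowerCase()))
  );
}
```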
+
+---
+
+## GA4 Implementation
+
+### Quick Setup
+
+1. Create GA4 property and data stream
+2. Install gtag.js or GTM
+3. Enable enhanced measurement
+4. Configure custom events
+5. Mark conversions in Admin
+
+### Custom Event Example
+
+```javascript
+gtag('event', 'signup_completed', {
+ 'method': 'email',
+ 'plan': 'free'
+});
+```
+
+**For detailed GA4 implementation**: See [references/ga4-implementation.md](references/ga4-implementation.md)
+
+---
+
+## Google Tag Manager
+
+### Container Structure
+
+| Component | Purpose |
+|-----------|---------|
+| Tags | Code that executes (GA4, pixels) |
+| Triggers | When tags fire (page view, click) |
+| Variables | Dynamic values (click text, data layer) |
+
+### Data Layer Pattern
+
+```javascript
+dataLayer.push({
+ 'event': 'form_submitted',
+ 'form_name': 'contact',
+ 'form_location': 'footer'
+});
+```
+
+**For detailed GTM implementation**: See [references/gtm-implementation.md](references/gtm-implementation.md)
+
+---
+
+## UTM Parameter Strategy
+
+### Standard Parameters
+
+| Parameter | Purpose | Example |
+|-----------|---------|---------|
+| utm_source | Traffic source | google, newsletter |
+| utm_medium | Marketing medium | cpc, email, social |
+| utm_campaign | Campaign name | spring_sale |
+| utm_content | Differentiate versions | hero_cta |
+| utm_term | Paid search keywords | running+shoes |
+
+### Naming Conventions
+- Lowercase everything
+- Use underscores or hyphens consistently
+- Be specific but concise: `blog_footer_cta`, not `cta1`
+- Document all UTMs in a spreadsheet
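Tagged URLs can be built programmatically so these conventions are applied consistently (the helper is our own sketch, not a Google API):

```javascript
// Append UTM parameters to a URL, lowercasing values per convention;
// missing parameters are simply omitted
function buildUtmUrl(baseUrl, { source, medium, campaign, content, term }) {
  const url = new URL(baseUrl);
  const tags = {
    utm_source: source, utm_medium: medium, utm_campaign: campaign,
    utm_content: content, utm_term: term
  };
  for (const [key, value] of Object.entries(tags)) {
    if (value) url.searchParams.set(key, String(value).toLowerCase());
  }
  return url.toString();
}
```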
+
+---
+
+## Debugging and Validation
+
+### Testing Tools
+
+| Tool | Use For |
+|------|---------|
+| GA4 DebugView | Real-time event monitoring |
+| GTM Preview Mode | Test triggers before publish |
+| Browser Extensions | Tag Assistant, dataLayer Inspector |
+
+### Validation Checklist
+
+- [ ] Events firing on correct triggers
+- [ ] Property values populating correctly
+- [ ] No duplicate events
+- [ ] Works across browsers and mobile
+- [ ] Conversions recorded correctly
+- [ ] No PII leaking
+
+### Common Issues
+
+| Issue | Check |
+|-------|-------|
+| Events not firing | Trigger config, GTM loaded |
+| Wrong values | Variable path, data layer structure |
+| Duplicate events | Multiple containers, trigger firing twice |
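When chasing duplicate or missing events, it helps to log every `dataLayer.push` alongside GTM Preview Mode; a sketch (the wrapper is our own, not a GTM feature):

```javascript
// Wrap dataLayer.push so every event is echoed before it reaches GTM
function monitorDataLayer(dataLayer, log = console.log) {
  const originalPush = dataLayer.push.bind(dataLayer);
  dataLayer.push = (...events) => {
    events.forEach(e => log('[dataLayer]', e.event || '(no event)', e));
    return originalPush(...events);
  };
  return dataLayer;
}

// In the browser console: monitorDataLayer(window.dataLayer = window.dataLayer || []);
```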
+
+---
+
+## Privacy and Compliance
+
+### Considerations
+- Cookie consent required in EU/UK/CA
+- No PII in analytics properties
+- Data retention settings
+- User deletion capabilities
+
+### Implementation
+- Use consent mode (wait for consent)
+- IP anonymization
+- Only collect what you need
+- Integrate with consent management platform
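Consent mode's default-deny bootstrap follows the standard `gtag` queue pattern; a minimal sketch (run before any tags load; the `update` call belongs in your CMP's accept handler):

```javascript
// Standard gtag bootstrap: a queue that tags read once they load
var dataLayer = dataLayer || [];
function gtag() { dataLayer.push(arguments); }

// Deny storage by default, before any tag fires
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied'
});

// Later, from the consent banner's accept handler:
// gtag('consent', 'update', { analytics_storage: 'granted' });
```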
+
+---
+
+## Output Format
+
+### Tracking Plan Document
+
+```markdown
+# [Site/Product] Tracking Plan
+
+## Overview
+- Tools: GA4, GTM
+- Last updated: [Date]
+
+## Events
+
+| Event Name | Description | Properties | Trigger |
+|------------|-------------|------------|---------|
+| signup_completed | User completes signup | method, plan | Success page |
+
+## Custom Dimensions
+
+| Name | Scope | Parameter |
+|------|-------|-----------|
+| user_type | User | user_type |
+
+## Conversions
+
+| Conversion | Event | Counting |
+|------------|-------|----------|
+| Signup | signup_completed | Once per session |
+```
+
+---
+
+## Task-Specific Questions
+
+1. What tools are you using (GA4, Mixpanel, etc.)?
+2. What key actions do you want to track?
+3. What decisions will this data inform?
+4. Who implements - dev team or marketing?
+5. Are there privacy/consent requirements?
+6. What's already tracked?
+
+---
+
+## Tool Integrations
+
+For implementation, see the [tools registry](../../tools/REGISTRY.md). Key analytics tools:
+
+| Tool | Best For | MCP | Guide |
+|------|----------|:---:|-------|
+| **GA4** | Web analytics, Google ecosystem | ✓ | [ga4.md](../../tools/integrations/ga4.md) |
+| **Mixpanel** | Product analytics, event tracking | - | [mixpanel.md](../../tools/integrations/mixpanel.md) |
+| **Amplitude** | Product analytics, cohort analysis | - | [amplitude.md](../../tools/integrations/amplitude.md) |
+| **PostHog** | Open-source analytics, session replay | - | [posthog.md](../../tools/integrations/posthog.md) |
+| **Segment** | Customer data platform, routing | - | [segment.md](../../tools/integrations/segment.md) |
+
+---
+
+## Related Skills
+
+- **ab-test-setup**: For experiment tracking
+- **seo-audit**: For organic traffic analysis
+- **page-cro**: For conversion optimization (uses this data)
diff --git a/skills/analytics-tracking/analytics-tracking b/skills/analytics-tracking/analytics-tracking
new file mode 120000
index 0000000..84941cf
--- /dev/null
+++ b/skills/analytics-tracking/analytics-tracking
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/analytics-tracking/
\ No newline at end of file
diff --git a/skills/analytics-tracking/references/event-library.md b/skills/analytics-tracking/references/event-library.md
new file mode 100644
index 0000000..586025e
--- /dev/null
+++ b/skills/analytics-tracking/references/event-library.md
@@ -0,0 +1,251 @@
+# Event Library Reference
+
+Comprehensive list of events to track by business type and context.
+
+## Marketing Site Events
+
+### Navigation & Engagement
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| page_view | Page loaded (enhanced) | page_title, page_location, content_group |
+| scroll_depth | User scrolled to threshold | depth (25, 50, 75, 100) |
+| outbound_link_clicked | Click to external site | link_url, link_text |
+| internal_link_clicked | Click within site | link_url, link_text, location |
+| video_played | Video started | video_id, video_title, duration |
+| video_completed | Video finished | video_id, video_title, duration |
+
+### CTA & Form Interactions
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| cta_clicked | Call to action clicked | button_text, cta_location, page |
+| form_started | User began form | form_name, form_location |
+| form_field_completed | Field filled | form_name, field_name |
+| form_submitted | Form successfully sent | form_name, form_location |
+| form_error | Form validation failed | form_name, error_type |
+| resource_downloaded | Asset downloaded | resource_name, resource_type |
+
+### Conversion Events
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| signup_started | Initiated signup | source, page |
+| signup_completed | Finished signup | method, plan, source |
+| demo_requested | Demo form submitted | company_size, industry |
+| contact_submitted | Contact form sent | inquiry_type |
+| newsletter_subscribed | Email list signup | source, list_name |
+| trial_started | Free trial began | plan, source |
+
+---
+
+## Product/App Events
+
+### Onboarding
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| signup_completed | Account created | method, referral_source |
+| onboarding_started | Began onboarding | - |
+| onboarding_step_completed | Step finished | step_number, step_name |
+| onboarding_completed | All steps done | steps_completed, time_to_complete |
+| onboarding_skipped | User skipped onboarding | step_skipped_at |
+| first_key_action_completed | Aha moment reached | action_type |
+
+### Core Usage
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| session_started | App session began | session_number |
+| feature_used | Feature interaction | feature_name, feature_category |
+| action_completed | Core action done | action_type, count |
+| content_created | User created content | content_type |
+| content_edited | User modified content | content_type |
+| content_deleted | User removed content | content_type |
+| search_performed | In-app search | query, results_count |
+| settings_changed | Settings modified | setting_name, new_value |
+| invite_sent | User invited others | invite_type, count |
+
+### Errors & Support
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| error_occurred | Error experienced | error_type, error_message, page |
+| help_opened | Help accessed | help_type, page |
+| support_contacted | Support request made | contact_method, issue_type |
+| feedback_submitted | User feedback given | feedback_type, rating |
+
+---
+
+## Monetization Events
+
+### Pricing & Checkout
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| pricing_viewed | Pricing page seen | source |
+| plan_selected | Plan chosen | plan_name, billing_cycle |
+| checkout_started | Began checkout | plan, value |
+| payment_info_entered | Payment submitted | payment_method |
+| purchase_completed | Purchase successful | plan, value, currency, transaction_id |
+| purchase_failed | Purchase failed | error_reason, plan |
+
+### Subscription Management
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| trial_started | Trial began | plan, trial_length |
+| trial_ended | Trial expired | plan, converted (bool) |
+| subscription_upgraded | Plan upgraded | from_plan, to_plan, value |
+| subscription_downgraded | Plan downgraded | from_plan, to_plan |
+| subscription_cancelled | Cancelled | plan, reason, tenure |
+| subscription_renewed | Renewed | plan, value |
+| billing_updated | Payment method changed | - |
+
+---
+
+## E-commerce Events
+
+### Browsing
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| product_viewed | Product page viewed | product_id, product_name, category, price |
+| product_list_viewed | Category/list viewed | list_name, products[] |
+| product_searched | Search performed | query, results_count |
+| product_filtered | Filters applied | filter_type, filter_value |
+| product_sorted | Sort applied | sort_by, sort_order |
+
+### Cart
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| product_added_to_cart | Item added | product_id, product_name, price, quantity |
+| product_removed_from_cart | Item removed | product_id, product_name, price, quantity |
+| cart_viewed | Cart page viewed | cart_value, items_count |
+
+### Checkout
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| checkout_started | Checkout began | cart_value, items_count |
+| checkout_step_completed | Step finished | step_number, step_name |
+| shipping_info_entered | Address entered | shipping_method |
+| payment_info_entered | Payment entered | payment_method |
+| coupon_applied | Coupon used | coupon_code, discount_value |
+| purchase_completed | Order placed | transaction_id, value, currency, items[] |
+
+### Post-Purchase
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| order_confirmed | Confirmation viewed | transaction_id |
+| refund_requested | Refund initiated | transaction_id, reason |
+| refund_completed | Refund processed | transaction_id, value |
+| review_submitted | Product reviewed | product_id, rating |
+
+---
+
+## B2B / SaaS Specific Events
+
+### Team & Collaboration
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| team_created | New team/org made | team_size, plan |
+| team_member_invited | Invite sent | role, invite_method |
+| team_member_joined | Member accepted | role |
+| team_member_removed | Member removed | role |
+| role_changed | Permissions updated | user_id, old_role, new_role |
+
+### Integration Events
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| integration_viewed | Integration page seen | integration_name |
+| integration_started | Setup began | integration_name |
+| integration_connected | Successfully connected | integration_name |
+| integration_disconnected | Removed integration | integration_name, reason |
+
+### Account Events
+
+| Event Name | Description | Properties |
+|------------|-------------|------------|
+| account_created | New account | source, plan |
+| account_upgraded | Plan upgrade | from_plan, to_plan |
+| account_churned | Account closed | reason, tenure, mrr_lost |
+| account_reactivated | Returned customer | previous_tenure, new_plan |
+
+---
+
+## Event Properties (Parameters)
+
+### Standard Properties to Include
+
+**User Context:**
+```
+user_id: "12345"
+user_type: "free" | "trial" | "paid"
+account_id: "acct_123"
+plan_type: "starter" | "pro" | "enterprise"
+```
+
+**Session Context:**
+```
+session_id: "sess_abc"
+session_number: 5
+page: "/pricing"
+referrer: "https://google.com"
+```
+
+**Campaign Context:**
+```
+source: "google"
+medium: "cpc"
+campaign: "spring_sale"
+content: "hero_cta"
+```
+
+**Product Context (E-commerce):**
+```
+product_id: "SKU123"
+product_name: "Product Name"
+category: "Category"
+price: 99.99
+quantity: 1
+currency: "USD"
+```
+
+**Timing:**
+```
+timestamp: "2024-01-15T10:30:00Z"
+time_on_page: 45
+session_duration: 300
+```
+
+---
+
+## Funnel Event Sequences
+
+### Signup Funnel
+1. signup_started
+2. signup_step_completed (email)
+3. signup_step_completed (password)
+4. signup_completed
+5. onboarding_started
+
+### Purchase Funnel
+1. pricing_viewed
+2. plan_selected
+3. checkout_started
+4. payment_info_entered
+5. purchase_completed
+
+### E-commerce Funnel
+1. product_viewed
+2. product_added_to_cart
+3. cart_viewed
+4. checkout_started
+5. shipping_info_entered
+6. payment_info_entered
+7. purchase_completed
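Step-to-step conversion for any of these funnels can be computed directly from event counts; a small sketch with illustrative numbers:

```javascript
// rate is conversions at each step relative to the previous step
function funnelConversion(counts) {
  return counts.slice(1).map((n, i) => ({
    step: i + 2,
    rate: counts[i] ? +(n / counts[i]).toFixed(3) : 0
  }));
}
```

For example, 1000 signups started, 400 email steps, 200 completions gives a 40% then 50% step conversion, pointing at the first step as the bigger leak.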
diff --git a/skills/analytics-tracking/references/ga4-implementation.md b/skills/analytics-tracking/references/ga4-implementation.md
new file mode 100644
index 0000000..2cf874f
--- /dev/null
+++ b/skills/analytics-tracking/references/ga4-implementation.md
@@ -0,0 +1,290 @@
+# GA4 Implementation Reference
+
+Detailed implementation guide for Google Analytics 4.
+
+## Configuration
+
+### Data Streams
+
+- One stream per platform (web, iOS, Android)
+- Enable enhanced measurement for automatic tracking
+- Configure data retention (2 months default, 14 months max)
+- Enable Google Signals (for cross-device, if consented)
+
+### Enhanced Measurement Events (Automatic)
+
+| Event | Description | Configuration |
+|-------|-------------|---------------|
+| page_view | Page loads | Automatic |
+| scroll | 90% scroll depth | Toggle on/off |
+| outbound_click | Click to external domain | Automatic |
+| site_search | Search query used | Configure parameter |
+| video_engagement | YouTube video plays | Toggle on/off |
+| file_download | PDF, docs, etc. | Configurable extensions |
+
+### Recommended Events
+
+Use Google's predefined events when possible for enhanced reporting:
+
+**All properties:**
+- login, sign_up
+- share
+- search
+
+**E-commerce:**
+- view_item, view_item_list
+- add_to_cart, remove_from_cart
+- begin_checkout
+- add_payment_info
+- purchase, refund
+
+**Games:**
+- level_up, unlock_achievement
+- post_score, spend_virtual_currency
+
+Reference: https://support.google.com/analytics/answer/9267735
+
+---
+
+## Custom Events
+
+### gtag.js Implementation
+
+```javascript
+// Basic event
+gtag('event', 'signup_completed', {
+ 'method': 'email',
+ 'plan': 'free'
+});
+
+// Event with value
+gtag('event', 'purchase', {
+ 'transaction_id': 'T12345',
+ 'value': 99.99,
+ 'currency': 'USD',
+ 'items': [{
+ 'item_id': 'SKU123',
+ 'item_name': 'Product Name',
+ 'price': 99.99
+ }]
+});
+
+// User properties
+gtag('set', 'user_properties', {
+ 'user_type': 'premium',
+ 'plan_name': 'pro'
+});
+
+// User ID (for logged-in users)
+gtag('config', 'GA_MEASUREMENT_ID', {
+ 'user_id': 'USER_ID'
+});
+```
+
+### Google Tag Manager (dataLayer)
+
+```javascript
+// Custom event
+dataLayer.push({
+ 'event': 'signup_completed',
+ 'method': 'email',
+ 'plan': 'free'
+});
+
+// Set user properties
+dataLayer.push({
+ 'user_id': '12345',
+ 'user_type': 'premium'
+});
+
+// E-commerce purchase
+dataLayer.push({
+ 'event': 'purchase',
+ 'ecommerce': {
+ 'transaction_id': 'T12345',
+ 'value': 99.99,
+ 'currency': 'USD',
+ 'items': [{
+ 'item_id': 'SKU123',
+ 'item_name': 'Product Name',
+ 'price': 99.99,
+ 'quantity': 1
+ }]
+ }
+});
+
+// Clear ecommerce before sending (best practice)
+dataLayer.push({ ecommerce: null });
+dataLayer.push({
+ 'event': 'view_item',
+ 'ecommerce': {
+ // ...
+ }
+});
+```
+
+---
+
+## Conversions Setup
+
+### Creating Conversions
+
+1. **Collect the event** - Ensure the event is firing in GA4
+2. **Mark as conversion** - Admin > Events > Mark as conversion
+3. **Set counting method**:
+ - Once per session (leads, signups)
+ - Every event (purchases)
+4. **Import to Google Ads** - For conversion-optimized bidding
+
+### Conversion Values
+
+```javascript
+// Event with conversion value
+gtag('event', 'purchase', {
+ 'value': 99.99,
+ 'currency': 'USD'
+});
+```
+
+Alternatively, set a default value in GA4 Admin when marking the event as a conversion.
+
+---
+
+## Custom Dimensions and Metrics
+
+### When to Use
+
+**Custom dimensions:**
+- Properties you want to segment/filter by
+- User attributes (plan type, industry)
+- Content attributes (author, category)
+
+**Custom metrics:**
+- Numeric values to aggregate
+- Scores, counts, durations
+
+### Setup Steps
+
+1. Admin > Data display > Custom definitions
+2. Create dimension or metric
+3. Choose scope:
+ - **Event**: Per event (content_type)
+ - **User**: Per user (account_type)
+ - **Item**: Per product (product_category)
+4. Enter parameter name (must match event parameter)
+
+### Examples
+
+| Dimension | Scope | Parameter | Description |
+|-----------|-------|-----------|-------------|
+| User Type | User | user_type | Free, trial, paid |
+| Content Author | Event | author | Blog post author |
+| Product Category | Item | item_category | E-commerce category |
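+
+The dimensions above only populate when events actually send the matching parameter. A sketch using the table's parameter names (the `article_read` event and all values are hypothetical; the bootstrap is the standard `gtag` definition):
+
+```javascript
+// gtag bootstrap, as defined by the standard GA4 snippet
+var dataLayer = dataLayer || [];
+function gtag() { dataLayer.push(arguments); }
+
+// Parameter names must match the registered custom definitions exactly
+gtag('event', 'article_read', { author: 'jane-doe' }); // event-scoped "author"
+gtag('set', 'user_properties', { user_type: 'paid' }); // user-scoped "user_type"
+```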
+
+---
+
+## Audiences
+
+### Creating Audiences
+
+Admin > Data display > Audiences
+
+**Use cases:**
+- Remarketing audiences (export to Ads)
+- Segment analysis
+- Audience triggers (fire an event when a user enters the audience)
+
+### Audience Examples
+
+**High-intent visitors:**
+- Viewed pricing page
+- Did not convert
+- In last 7 days
+
+**Engaged users:**
+- 3+ sessions
+- Or 5+ minutes total engagement
+
+**Purchasers:**
+- Purchase event
+- For exclusion or lookalike
+
+---
+
+## Debugging
+
+### DebugView
+
+Enable with:
+- URL parameter: `?debug_mode=true`
+- Chrome extension: GA Debugger
+- gtag: `'debug_mode': true` in config
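+
+In code, the gtag option above looks like this (measurement ID is a placeholder; the bootstrap is the standard `gtag` definition):
+
+```javascript
+// gtag bootstrap, as defined by the standard GA4 snippet
+var dataLayer = dataLayer || [];
+function gtag() { dataLayer.push(arguments); }
+
+// debug_mode routes this browser's hits to DebugView
+gtag('config', 'G-XXXXXXXX', { debug_mode: true });
+```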
+
+View at: Admin > DebugView (older UI: Reports > Configure > DebugView)
+
+### Real-Time Reports
+
+Check events within 30 minutes:
+Reports > Real-time
+
+### Common Issues
+
+**Events not appearing:**
+- Check DebugView first
+- Verify gtag/GTM firing
+- Check filter exclusions
+
+**Parameter values missing:**
+- Custom dimension not created
+- Parameter name mismatch
+- Data still processing (24-48 hrs)
+
+**Conversions not recording:**
+- Event not marked as conversion
+- Event name doesn't match
+- Counting method (once vs. every)
+
+---
+
+## Data Quality
+
+### Filters
+
+Admin > Data streams > [Stream] > Configure tag settings > Define internal traffic
+
+**Exclude:**
+- Internal IP addresses
+- Developer traffic
+- Testing environments
+
+### Cross-Domain Tracking
+
+For multiple domains sharing analytics:
+
+1. Admin > Data streams > [Stream] > Configure tag settings
+2. Configure your domains
+3. List all domains that should share sessions
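+
+If you deploy gtag.js directly rather than configuring this in the stream UI, the equivalent linker setup is (measurement ID and domains are placeholders; the bootstrap is the standard `gtag` definition):
+
+```javascript
+// gtag bootstrap, as defined by the standard GA4 snippet
+var dataLayer = dataLayer || [];
+function gtag() { dataLayer.push(arguments); }
+
+// List every domain that should share sessions
+gtag('config', 'G-XXXXXXXX', {
+  linker: { domains: ['example.com', 'checkout.example.com'] }
+});
+```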
+
+### Session Settings
+
+Admin > Data streams > [Stream] > Configure tag settings
+
+- Session timeout (default 30 min)
+- Engaged session duration (10 sec default)
+
+---
+
+## Integration with Google Ads
+
+### Linking
+
+1. Admin > Product links > Google Ads links
+2. Enable auto-tagging in Google Ads
+3. Import conversions in Google Ads
+
+### Audience Export
+
+Audiences created in GA4 can be used in Google Ads for:
+- Remarketing campaigns
+- Customer match
+- Similar audiences
diff --git a/skills/analytics-tracking/references/gtm-implementation.md b/skills/analytics-tracking/references/gtm-implementation.md
new file mode 100644
index 0000000..914ada1
--- /dev/null
+++ b/skills/analytics-tracking/references/gtm-implementation.md
@@ -0,0 +1,380 @@
+# Google Tag Manager Implementation Reference
+
+Detailed guide for implementing tracking via Google Tag Manager.
+
+## Container Structure
+
+### Tags
+
+Tags are code snippets that execute when triggered.
+
+**Common tag types:**
+- GA4 Configuration (base setup)
+- GA4 Event (custom events)
+- Google Ads Conversion
+- Facebook Pixel
+- LinkedIn Insight Tag
+- Custom HTML (for other pixels)
+
+### Triggers
+
+Triggers define when tags fire.
+
+**Built-in triggers:**
+- Page View: All Pages, DOM Ready, Window Loaded
+- Click: All Elements, Just Links
+- Form Submission
+- Scroll Depth
+- Timer
+- Element Visibility
+
+**Custom triggers:**
+- Custom Event (from dataLayer)
+- Trigger Groups (multiple conditions)
+
+### Variables
+
+Variables capture dynamic values.
+
+**Built-in (enable as needed):**
+- Click Text, Click URL, Click ID, Click Classes
+- Page Path, Page URL, Page Hostname
+- Referrer
+- Form Element, Form ID
+
+**User-defined:**
+- Data Layer variables
+- JavaScript variables
+- Lookup tables
+- RegEx tables
+- Constants
+
+---
+
+## Naming Conventions
+
+### Recommended Format
+
+```
+[Type] - [Description] - [Detail]
+
+Tags:
+GA4 - Event - Signup Completed
+GA4 - Config - Base Configuration
+FB - Pixel - Page View
+HTML - LiveChat Widget
+
+Triggers:
+Click - CTA Button
+Submit - Contact Form
+View - Pricing Page
+Custom - signup_completed
+
+Variables:
+DL - user_id
+JS - Current Timestamp
+LT - Campaign Source Map
+```
+
+---
+
+## Data Layer Patterns
+
+### Basic Structure
+
+```javascript
+// Initialize (before the GTM container snippet loads)
+window.dataLayer = window.dataLayer || [];
+
+// Push event
+dataLayer.push({
+ 'event': 'event_name',
+ 'property1': 'value1',
+ 'property2': 'value2'
+});
+```
+
+### Page Load Data
+
+```javascript
+// Set on page load (before GTM container)
+window.dataLayer = window.dataLayer || [];
+dataLayer.push({
+ 'pageType': 'product',
+ 'contentGroup': 'products',
+ 'user': {
+ 'loggedIn': true,
+ 'userId': '12345',
+ 'userType': 'premium'
+ }
+});
+```
+
+### Form Submission
+
+```javascript
+document.querySelector('#contact-form').addEventListener('submit', function() {
+ dataLayer.push({
+ 'event': 'form_submitted',
+ 'formName': 'contact',
+ 'formLocation': 'footer'
+ });
+});
+```
+
+### Button Click
+
+```javascript
+document.querySelector('.cta-button').addEventListener('click', function() {
+ dataLayer.push({
+ 'event': 'cta_clicked',
+ 'ctaText': this.innerText,
+ 'ctaLocation': 'hero'
+ });
+});
+```
+
+### E-commerce Events
+
+```javascript
+// Product view
+dataLayer.push({ ecommerce: null }); // Clear previous
+dataLayer.push({
+ 'event': 'view_item',
+ 'ecommerce': {
+ 'items': [{
+ 'item_id': 'SKU123',
+ 'item_name': 'Product Name',
+ 'price': 99.99,
+ 'item_category': 'Category',
+ 'quantity': 1
+ }]
+ }
+});
+
+// Add to cart
+dataLayer.push({ ecommerce: null });
+dataLayer.push({
+ 'event': 'add_to_cart',
+ 'ecommerce': {
+ 'items': [{
+ 'item_id': 'SKU123',
+ 'item_name': 'Product Name',
+ 'price': 99.99,
+ 'quantity': 1
+ }]
+ }
+});
+
+// Purchase
+dataLayer.push({ ecommerce: null });
+dataLayer.push({
+ 'event': 'purchase',
+ 'ecommerce': {
+ 'transaction_id': 'T12345',
+ 'value': 99.99,
+ 'currency': 'USD',
+ 'tax': 5.00,
+ 'shipping': 10.00,
+ 'items': [{
+ 'item_id': 'SKU123',
+ 'item_name': 'Product Name',
+ 'price': 99.99,
+ 'quantity': 1
+ }]
+ }
+});
+```
+
+---
+
+## Common Tag Configurations
+
+### GA4 Configuration Tag
+
+**Tag Type:** Google Analytics: GA4 Configuration
+
+**Settings:**
+- Measurement ID: G-XXXXXXXX
+- Send a page view event when this configuration loads: Checked
+- User Properties: Add any user-level dimensions
+
+**Trigger:** All Pages
+
+### GA4 Event Tag
+
+**Tag Type:** Google Analytics: GA4 Event
+
+**Settings:**
+- Configuration Tag: Select your config tag
+- Event Name: {{DL - event_name}} or a hardcoded name
+- Event Parameters: Add parameters from dataLayer
+
+**Trigger:** Custom Event with event name match
+
+### Facebook Pixel - Base
+
+**Tag Type:** Custom HTML
+
+```html
+<!-- Paste the full Pixel base code from Meta Events Manager here. -->
+<!-- Its core calls are the following (pixel ID is a placeholder): -->
+<script>
+  fbq('init', 'YOUR_PIXEL_ID');
+  fbq('track', 'PageView');
+</script>
+```
+
+**Trigger:** All Pages
+
+### Facebook Pixel - Event
+
+**Tag Type:** Custom HTML
+
+```html
+<script>
+  // Standard event name; 'Lead' is one example, match it to the form's purpose
+  fbq('track', 'Lead');
+</script>
+```
+
+**Trigger:** Custom Event - form_submitted
+
+---
+
+## Preview and Debug
+
+### Preview Mode
+
+1. Click "Preview" in GTM
+2. Enter site URL
+3. GTM debug panel opens at bottom
+
+**What to check:**
+- Tags fired on this event
+- Tags not fired (and why)
+- Variables and their values
+- Data layer contents
+
+### Debug Tips
+
+**Tag not firing:**
+- Check trigger conditions
+- Verify data layer push
+- Check tag sequencing
+
+**Wrong variable value:**
+- Check data layer structure
+- Verify variable path (nested objects)
+- Check timing (data may not exist yet)
+
+**Multiple firings:**
+- Check trigger uniqueness
+- Look for duplicate tags
+- Check tag firing options
+
+---
+
+## Workspaces and Versioning
+
+### Workspaces
+
+Use workspaces for team collaboration:
+- Default workspace for production
+- Separate workspaces for large changes
+- Merge when ready
+
+### Version Management
+
+**Best practices:**
+- Name every version descriptively
+- Add notes explaining changes
+- Review changes before publish
+- Keep production version noted
+
+**Version notes example:**
+```
+v15: Added purchase conversion tracking
+- New tag: GA4 - Event - Purchase
+- New trigger: Custom Event - purchase
+- New variables: DL - transaction_id, DL - value
+- Tested: Chrome, Safari, Mobile
+```
+
+---
+
+## Consent Management
+
+### Consent Mode Integration
+
+```javascript
+// Default state (before consent)
+gtag('consent', 'default', {
+ 'analytics_storage': 'denied',
+ 'ad_storage': 'denied'
+});
+
+// Update on consent
+function grantConsent() {
+ gtag('consent', 'update', {
+ 'analytics_storage': 'granted',
+ 'ad_storage': 'granted'
+ });
+}
+```
+
+### GTM Consent Overview
+
+1. Enable Consent Overview in Admin
+2. Configure consent for each tag
+3. Tags respect consent state automatically
+
+---
+
+## Advanced Patterns
+
+### Tag Sequencing
+
+**Set up tags to fire in order:**
+Tag Configuration > Advanced Settings > Tag Sequencing
+
+**Use cases:**
+- Config tag before event tags
+- Pixel initialization before tracking
+- Cleanup after conversion
+
+### Exception Handling
+
+**Trigger exceptions** - Prevent a tag from firing:
+- Exclude certain pages
+- Exclude internal traffic
+- Exclude during testing
+
+### Custom JavaScript Variables
+
+```javascript
+// Get URL parameter
+function() {
+ var params = new URLSearchParams(window.location.search);
+ return params.get('campaign') || '(not set)';
+}
+
+// Get cookie value
+function() {
+ var match = document.cookie.match('(^|;) ?user_id=([^;]*)(;|$)');
+ return match ? match[2] : null;
+}
+
+// Get data from page
+function() {
+ var el = document.querySelector('.product-price');
+ return el ? parseFloat(el.textContent.replace('$', '')) : 0;
+}
+```
diff --git a/skills/antfu/SKILL.md b/skills/antfu/SKILL.md
new file mode 100644
index 0000000..efefbc9
--- /dev/null
+++ b/skills/antfu/SKILL.md
@@ -0,0 +1,130 @@
+---
+name: antfu
+description: Anthony Fu's opinionated tooling and conventions for JavaScript/TypeScript projects. Use when setting up new projects, configuring ESLint/Prettier alternatives, monorepos, library publishing, or when the user mentions Anthony Fu's preferences.
+metadata:
+ author: Anthony Fu
+ version: "2026.02.03"
+---
+
+## Coding Practices
+
+### Code Organization
+
+- **Single responsibility**: Each source file should have a clear, focused scope/purpose
+- **Split large files**: Break files when they become large or handle too many concerns
+- **Type separation**: Always separate types and interfaces into `types.ts` or `types/*.ts`
+- **Constants extraction**: Move constants to a dedicated `constants.ts` file
+
+### Runtime Environment
+
+- **Prefer isomorphic code**: Write runtime-agnostic code that works in Node, browser, and workers whenever possible
+- **Clear runtime indicators**: When code is environment-specific, add a comment at the top of the file:
+
+```ts
+// @env node
+// @env browser
+```
+
+### TypeScript
+
+- **Explicit return types**: Declare return types explicitly when possible
+- **Avoid complex inline types**: Extract complex types into dedicated `type` or `interface` declarations
+
+### Comments
+
+- **Avoid unnecessary comments**: Code should be self-explanatory
+- **Explain "why" not "how"**: Comments should describe the reasoning or intent, not what the code does
+
+### Testing (Vitest)
+
+- Test files: `foo.ts` → `foo.test.ts` (same directory)
+- Use `describe`/`it` API (not `test`)
+- Use `toMatchSnapshot` for complex outputs
+- Use `toMatchFileSnapshot` with explicit path for language-specific snapshots
+
+---
+
+## Tooling Choices
+
+### @antfu/ni Commands
+
+| Command | Description |
+|---------|-------------|
+| `ni` | Install dependencies |
+| `ni <pkg>` / `ni -D <pkg>` | Add dependency / dev dependency |
+| `nr <script>` | Run a script from `package.json` |
diff --git a/skills/antfu/references/library-development.md b/skills/antfu/references/library-development.md
new file mode 100644
index 0000000..335fb29
--- /dev/null
+++ b/skills/antfu/references/library-development.md
@@ -0,0 +1,79 @@
+---
+name: library-development
+description: Building and publishing TypeScript libraries with tsdown. Use when creating npm packages, configuring library bundling, or setting up package.json exports.
+---
+
+# Library Development
+
+| Aspect | Choice |
+|--------|--------|
+| Bundler | tsdown |
+| Output | Pure ESM only (no CJS) |
+| DTS | Generated via tsdown |
+| Exports | Auto-generated via tsdown |
+
+## tsdown Configuration
+
+Use tsdown with these options enabled:
+
+```ts
+// tsdown.config.ts
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['src/index.ts'],
+ format: ['esm'],
+ dts: true,
+ exports: true,
+})
+```
+
+| Option | Value | Purpose |
+|--------|-------|---------|
+| `format` | `['esm']` | Pure ESM, no CommonJS |
+| `dts` | `true` | Generate `.d.ts` files |
+| `exports` | `true` | Auto-update `exports` field in `package.json` |
+
+### Multiple Entry Points
+
+```ts
+export default defineConfig({
+ entry: [
+ 'src/index.ts',
+ 'src/utils.ts',
+ ],
+ format: ['esm'],
+ dts: true,
+ exports: true,
+})
+```
+
+The `exports: true` option auto-generates the `exports` field in `package.json` when running `tsdown`.
+
+---
+
+## package.json
+
+Required fields for pure ESM library:
+
+```json
+{
+ "type": "module",
+ "main": "./dist/index.mjs",
+ "module": "./dist/index.mjs",
+ "types": "./dist/index.d.mts",
+ "files": ["dist"],
+ "scripts": {
+ "build": "tsdown",
+ "prepack": "pnpm build",
+ "test": "vitest",
+ "release": "bumpp -r"
+ }
+}
+```
+
+The `exports` field is managed by tsdown when `exports: true`.
+
+### prepack Script
+
+For each public package, add `"prepack": "pnpm build"` to `scripts`. This ensures the package is automatically built before publishing (e.g., when running `npm publish` or `pnpm publish`). This prevents accidentally publishing stale or missing build artifacts.
diff --git a/skills/antfu/references/monorepo.md b/skills/antfu/references/monorepo.md
new file mode 100644
index 0000000..6beaa8e
--- /dev/null
+++ b/skills/antfu/references/monorepo.md
@@ -0,0 +1,124 @@
+---
+name: monorepo
+description: Monorepo setup with pnpm workspaces, centralized aliases, and Turborepo. Use when creating or managing multi-package repositories.
+---
+
+# Monorepo Setup
+
+## pnpm Workspaces
+
+Use pnpm workspaces for monorepo management:
+
+```yaml
+# pnpm-workspace.yaml
+packages:
+ - 'packages/*'
+```
+
+## Scripts Convention
+
+Define the scripts in each package, then run them from the root with pnpm's `-r` (recursive) flag.
+Enable the ESLint cache for faster linting in monorepos:
+
+```json
+// root package.json
+{
+ "scripts": {
+ "build": "pnpm run -r build",
+ "test": "vitest",
+ "lint": "eslint . --cache --concurrency=auto"
+ }
+}
+```
+
+In each package's `package.json`, add the corresponding scripts:
+
+```json
+// packages/*/package.json
+{
+ "scripts": {
+ "build": "tsdown",
+ "prepack": "pnpm build"
+ }
+}
+```
+
+## ESLint Cache
+
+The `--cache` flag writes results to `.eslintcache` so unchanged files are skipped on later runs:
+```json
+{
+ "scripts": {
+ "lint": "eslint . --cache --concurrency=auto"
+ }
+}
+```
+
+## Turborepo (Optional)
+
+For monorepos with many packages or long build times, use Turborepo for task orchestration and caching.
+
+See the dedicated Turborepo skill for detailed configuration.
+
+## Centralized Alias
+
+For better DX across Vite, Nuxt, Vitest configs, create a centralized `alias.ts` at project root:
+
+```ts
+// alias.ts
+import fs from 'node:fs'
+import { fileURLToPath } from 'node:url'
+import { join, relative } from 'pathe'
+
+const root = fileURLToPath(new URL('.', import.meta.url))
+const r = (path: string) => fileURLToPath(new URL(`./packages/${path}`, import.meta.url))
+
+export const alias = {
+ '@myorg/core': r('core/src/index.ts'),
+ '@myorg/utils': r('utils/src/index.ts'),
+ '@myorg/ui': r('ui/src/index.ts'),
+ // Add more aliases as needed
+}
+
+// Auto-update tsconfig.alias.json paths
+const raw = fs.readFileSync(join(root, 'tsconfig.alias.json'), 'utf-8').trim()
+const tsconfig = JSON.parse(raw)
+tsconfig.compilerOptions.paths = Object.fromEntries(
+ Object.entries(alias).map(([key, value]) => [key, [`./${relative(root, value)}`]]),
+)
+const newRaw = JSON.stringify(tsconfig, null, 2)
+if (newRaw !== raw)
+ fs.writeFileSync(join(root, 'tsconfig.alias.json'), `${newRaw}\n`, 'utf-8')
+```
+
+Then update the `tsconfig.json` to use the alias file:
+
+```json
+{
+ "extends": [
+ "./tsconfig.alias.json"
+ ]
+}
+```
+
+### Using Alias in Configs
+
+Reference the centralized alias in all config files:
+
+```ts
+// vite.config.ts
+import { alias } from './alias'
+
+export default defineConfig({
+ resolve: { alias },
+})
+```
+
+```ts
+// nuxt.config.ts
+import { alias } from './alias'
+
+export default defineNuxtConfig({
+ alias,
+})
+```
diff --git a/skills/antfu/references/setting-up.md b/skills/antfu/references/setting-up.md
new file mode 100644
index 0000000..8ee5ba6
--- /dev/null
+++ b/skills/antfu/references/setting-up.md
@@ -0,0 +1,119 @@
+---
+name: setting-up
+description: Project setup files including .gitignore, GitHub Actions workflows, and VS Code extensions. Use when initializing new projects or adding CI/editor config.
+---
+
+# Project Setup
+
+## .gitignore
+
+Create when `.gitignore` is not present:
+
+```
+*.log
+*.tgz
+.cache
+.DS_Store
+.eslintcache
+.idea
+.env
+.nuxt
+.temp
+.output
+.turbo
+cache
+coverage
+dist
+lib-cov
+logs
+node_modules
+temp
+```
+
+## GitHub Actions
+
+Add these workflows when setting up a new project. Skip if workflows already exist. All use [sxzz/workflows](https://github.com/sxzz/workflows) reusable workflows.
+
+### Autofix Workflow
+
+**`.github/workflows/autofix.yml`** - Auto-fix linting on PRs:
+
+```yaml
+name: autofix.ci
+
+on: [pull_request]
+
+jobs:
+ autofix:
+ uses: sxzz/workflows/.github/workflows/autofix.yml@v1
+ permissions:
+ contents: read
+```
+
+### Unit Test Workflow
+
+**`.github/workflows/unit-test.yml`** - Run tests on push/PR:
+
+```yaml
+name: Unit Test
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+ branches: [main]
+
+permissions: {}
+
+jobs:
+ unit-test:
+ uses: sxzz/workflows/.github/workflows/unit-test.yml@v1
+```
+
+### Release Workflow
+
+**`.github/workflows/release.yml`** - Publish on tag (library projects only):
+
+```yaml
+name: Release
+
+on:
+ push:
+ tags:
+ - 'v*'
+
+jobs:
+ release:
+ uses: sxzz/workflows/.github/workflows/release.yml@v1
+ with:
+ publish: true
+ permissions:
+ contents: write
+ id-token: write
+```
+
+## VS Code Extensions
+
+Configure in `.vscode/extensions.json`:
+
+```json
+{
+ "recommendations": [
+ "dbaeumer.vscode-eslint",
+ "antfu.pnpm-catalog-lens",
+ "antfu.iconify",
+ "antfu.unocss",
+ "antfu.slidev",
+ "vue.volar"
+ ]
+}
+```
+
+| Extension | Description |
+|-----------|-------------|
+| `dbaeumer.vscode-eslint` | ESLint integration for linting and formatting |
+| `antfu.pnpm-catalog-lens` | Shows pnpm catalog version hints inline |
+| `antfu.iconify` | Iconify icon preview and autocomplete |
+| `antfu.unocss` | UnoCSS IntelliSense and syntax highlighting |
+| `antfu.slidev` | Slidev preview and syntax highlighting |
+| `vue.volar` | Vue Language Features |
diff --git a/skills/architecture-patterns/architecture-patterns b/skills/architecture-patterns/architecture-patterns
new file mode 120000
index 0000000..a0797a6
--- /dev/null
+++ b/skills/architecture-patterns/architecture-patterns
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/architecture-patterns/
\ No newline at end of file
diff --git a/skills/better-auth-best-practices/better-auth-best-practices b/skills/better-auth-best-practices/better-auth-best-practices
new file mode 120000
index 0000000..b9c52c9
--- /dev/null
+++ b/skills/better-auth-best-practices/better-auth-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/better-auth-best-practices/
\ No newline at end of file
diff --git a/skills/brainstorming/SKILL.md b/skills/brainstorming/SKILL.md
new file mode 100644
index 0000000..460f73a
--- /dev/null
+++ b/skills/brainstorming/SKILL.md
@@ -0,0 +1,96 @@
+---
+name: brainstorming
+description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
+---
+
+# Brainstorming Ideas Into Designs
+
+## Overview
+
+Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
+
+Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design and get user approval.
+
+**Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it.** This applies to EVERY project regardless of perceived simplicity.
+
+## Anti-Pattern: "This Is Too Simple To Need A Design"
+
+Every project goes through this process. A todo list, a single-function utility, a config change — all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.
+
+## Checklist
+
+You MUST create a task for each of these items and complete them in order:
+
+1. **Explore project context** — check files, docs, recent commits
+2. **Ask clarifying questions** — one at a time, understand purpose/constraints/success criteria
+3. **Propose 2-3 approaches** — with trade-offs and your recommendation
+4. **Present design** — in sections scaled to their complexity, get user approval after each section
+5. **Write design doc** — save to `docs/plans/YYYY-MM-DD--design.md` and commit
+6. **Transition to implementation** — invoke writing-plans skill to create implementation plan
+
+## Process Flow
+
+```dot
+digraph brainstorming {
+ "Explore project context" [shape=box];
+ "Ask clarifying questions" [shape=box];
+ "Propose 2-3 approaches" [shape=box];
+ "Present design sections" [shape=box];
+ "User approves design?" [shape=diamond];
+ "Write design doc" [shape=box];
+ "Invoke writing-plans skill" [shape=doublecircle];
+
+ "Explore project context" -> "Ask clarifying questions";
+ "Ask clarifying questions" -> "Propose 2-3 approaches";
+ "Propose 2-3 approaches" -> "Present design sections";
+ "Present design sections" -> "User approves design?";
+ "User approves design?" -> "Present design sections" [label="no, revise"];
+ "User approves design?" -> "Write design doc" [label="yes"];
+ "Write design doc" -> "Invoke writing-plans skill";
+}
+```
+
+**The terminal state is invoking writing-plans.** Do NOT invoke frontend-design, mcp-builder, or any other implementation skill. The ONLY skill you invoke after brainstorming is writing-plans.
+
+## The Process
+
+**Understanding the idea:**
+- Check out the current project state first (files, docs, recent commits)
+- Ask questions one at a time to refine the idea
+- Prefer multiple choice questions when possible, but open-ended is fine too
+- Only one question per message - if a topic needs more exploration, break it into multiple questions
+- Focus on understanding: purpose, constraints, success criteria
+
+**Exploring approaches:**
+- Propose 2-3 different approaches with trade-offs
+- Present options conversationally with your recommendation and reasoning
+- Lead with your recommended option and explain why
+
+**Presenting the design:**
+- Once you believe you understand what you're building, present the design
+- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
+- Ask after each section whether it looks right so far
+- Cover: architecture, components, data flow, error handling, testing
+- Be ready to go back and clarify if something doesn't make sense
+
+## After the Design
+
+**Documentation:**
+- Write the validated design to `docs/plans/YYYY-MM-DD--design.md`
+- Use elements-of-style:writing-clearly-and-concisely skill if available
+- Commit the design document to git
+
+**Implementation:**
+- Invoke the writing-plans skill to create a detailed implementation plan
+- Do NOT invoke any other skill. writing-plans is the next step.
+
+## Key Principles
+
+- **One question at a time** - Don't overwhelm with multiple questions
+- **Multiple choice preferred** - Easier to answer than open-ended when possible
+- **YAGNI ruthlessly** - Remove unnecessary features from all designs
+- **Explore alternatives** - Always propose 2-3 approaches before settling
+- **Incremental validation** - Present design, get approval before moving on
+- **Be flexible** - Go back and clarify when something doesn't make sense
diff --git a/skills/brainstorming/brainstorming b/skills/brainstorming/brainstorming
new file mode 120000
index 0000000..40bb25a
--- /dev/null
+++ b/skills/brainstorming/brainstorming
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/brainstorming/
\ No newline at end of file
diff --git a/skills/brand-guidelines/brand-guidelines b/skills/brand-guidelines/brand-guidelines
new file mode 120000
index 0000000..32f2495
--- /dev/null
+++ b/skills/brand-guidelines/brand-guidelines
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/brand-guidelines/
\ No newline at end of file
diff --git a/skills/canvas-design/LICENSE.txt b/skills/canvas-design/LICENSE.txt
new file mode 100644
index 0000000..7a4a3ea
--- /dev/null
+++ b/skills/canvas-design/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/skills/canvas-design/SKILL.md b/skills/canvas-design/SKILL.md
new file mode 100644
index 0000000..9f63fee
--- /dev/null
+++ b/skills/canvas-design/SKILL.md
@@ -0,0 +1,130 @@
+---
+name: canvas-design
+description: Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work to avoid copyright violations.
+license: Complete terms in LICENSE.txt
+---
+
+These are instructions for creating design philosophies - aesthetic movements that are then EXPRESSED VISUALLY. Output only .md files, .pdf files, and .png files.
+
+Complete this in two steps:
+1. Design Philosophy Creation (.md file)
+2. Express by creating it on a canvas (.pdf file or .png file)
+
+First, undertake this task:
+
+## DESIGN PHILOSOPHY CREATION
+
+To begin, create a VISUAL PHILOSOPHY (not layouts or templates) that will be interpreted through:
+- Form, space, color, composition
+- Images, graphics, shapes, patterns
+- Minimal text as visual accent
+
+### THE CRITICAL UNDERSTANDING
+- What is received: Subtle input or instructions from the user. Take them into account and use them as a foundation, but do not let them constrain creative freedom.
+- What is created: A design philosophy/aesthetic movement.
+- What happens next: The same Claude then receives the philosophy and EXPRESSES IT VISUALLY - creating artifacts that are 90% visual design, 10% essential text.
+
+Consider this approach:
+- Write a manifesto for an art movement
+- The next phase involves making the artwork
+
+The philosophy must emphasize: Visual expression. Spatial communication. Artistic interpretation. Minimal words.
+
+### HOW TO GENERATE A VISUAL PHILOSOPHY
+
+**Name the movement** (1-2 words): "Brutalist Joy" / "Chromatic Silence" / "Metabolist Dreams"
+
+**Articulate the philosophy** (4-6 paragraphs - concise but complete):
+
+To capture the VISUAL essence, express how the philosophy manifests through:
+- Space and form
+- Color and material
+- Scale and rhythm
+- Composition and balance
+- Visual hierarchy
+
+**CRITICAL GUIDELINES:**
+- **Avoid redundancy**: Each design aspect should be mentioned once. Avoid repeating points about color theory, spatial relationships, or typographic principles unless adding new depth.
+- **Emphasize craftsmanship REPEATEDLY**: The philosophy MUST stress multiple times that the final work should appear as though it took countless hours to create, was labored over with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted," "the product of deep expertise," "painstaking attention," "master-level execution."
+- **Leave creative space**: Remain specific about the aesthetic direction, but concise enough that the next Claude has room to make interpretive choices, also at an extremely high level of craftsmanship.
+
+The philosophy must guide the next version to express ideas VISUALLY, not through text. Information lives in design, not paragraphs.
+
+### PHILOSOPHY EXAMPLES
+
+**"Concrete Poetry"**
+Philosophy: Communication through monumental form and bold geometry.
+Visual expression: Massive color blocks, sculptural typography (huge single words, tiny labels), Brutalist spatial divisions, Polish poster energy meets Le Corbusier. Ideas expressed through visual weight and spatial tension, not explanation. Text as rare, powerful gesture - never paragraphs, only essential words integrated into the visual architecture. Every element placed with the precision of a master craftsman.
+
+**"Chromatic Language"**
+Philosophy: Color as the primary information system.
+Visual expression: Geometric precision where color zones create meaning. Typography minimal - small sans-serif labels letting chromatic fields communicate. Think Josef Albers' interaction meets data visualization. Information encoded spatially and chromatically. Words only to anchor what color already shows. The result of painstaking chromatic calibration.
+
+**"Analog Meditation"**
+Philosophy: Quiet visual contemplation through texture and breathing room.
+Visual expression: Paper grain, ink bleeds, vast negative space. Photography and illustration dominate. Typography whispered (small, restrained, serving the visual). Japanese photobook aesthetic. Images breathe across pages. Text appears sparingly - short phrases, never explanatory blocks. Each composition balanced with the care of a meditation practice.
+
+**"Organic Systems"**
+Philosophy: Natural clustering and modular growth patterns.
+Visual expression: Rounded forms, organic arrangements, color from nature through architecture. Information shown through visual diagrams, spatial relationships, iconography. Text only for key labels floating in space. The composition tells the story through expert spatial orchestration.
+
+**"Geometric Silence"**
+Philosophy: Pure order and restraint.
+Visual expression: Grid-based precision, bold photography or stark graphics, dramatic negative space. Typography precise but minimal - small essential text, large quiet zones. Swiss formalism meets Brutalist material honesty. Structure communicates, not words. Every alignment the work of countless refinements.
+
+*These are condensed examples. The actual design philosophy should be 4-6 substantial paragraphs.*
+
+### ESSENTIAL PRINCIPLES
+- **VISUAL PHILOSOPHY**: Create an aesthetic worldview to be expressed through design
+- **MINIMAL TEXT**: Always emphasize that text is sparse, essential-only, integrated as visual element - never lengthy
+- **SPATIAL EXPRESSION**: Ideas communicate through space, form, color, composition - not paragraphs
+- **ARTISTIC FREEDOM**: The next Claude interprets the philosophy visually - provide creative room
+- **PURE DESIGN**: This is about making ART OBJECTS, not documents with decoration
+- **EXPERT CRAFTSMANSHIP**: Repeatedly emphasize the final work must look meticulously crafted, labored over with care, the product of countless hours by someone at the top of their field
+
+**The design philosophy should be 4-6 paragraphs long.** Fill it with poetic design philosophy that brings together the core vision. Avoid repeating the same points. Keep the design philosophy generic without mentioning the intention of the art, as if it can be used wherever. Output the design philosophy as a .md file.
+
+---
+
+## DEDUCING THE SUBTLE REFERENCE
+
+**CRITICAL STEP**: Before creating the canvas, identify the subtle conceptual thread from the original request.
+
+**THE ESSENTIAL PRINCIPLE**:
+The topic is a **subtle, niche reference embedded within the art itself** - not always literal, always sophisticated. Someone familiar with the subject should feel it intuitively, while others simply experience a masterful abstract composition. The design philosophy provides the aesthetic language. The deduced topic provides the soul - the quiet conceptual DNA woven invisibly into form, color, and composition.
+
+This is **VERY IMPORTANT**: The reference must be refined so it enhances the work's depth without announcing itself. Think like a jazz musician quoting another song - only those who know will catch it, but everyone appreciates the music.
+
+---
+
+## CANVAS CREATION
+
+With both the philosophy and the conceptual framework established, express it on a canvas. Take a moment to gather thoughts and clear the mind. Use the design philosophy created and the instructions below to craft a masterpiece, embodying all aspects of the philosophy with expert craftsmanship.
+
+**IMPORTANT**: For any type of content, even if the user requests something for a movie/game/book, the approach should still be sophisticated. Never lose sight of the idea that this should be art, not something that's cartoony or amateur.
+
+To create museum or magazine quality work, use the design philosophy as the foundation. Create one single page, highly visual, design-forward PDF or PNG output (unless asked for more pages). Generally use repeating patterns and perfect shapes. Treat the abstract philosophical design as if it were a scientific bible, borrowing the visual language of systematic observation—dense accumulation of marks, repeated elements, or layered patterns that build meaning through patient repetition and reward sustained viewing. Add sparse, clinical typography and systematic reference markers that suggest this could be a diagram from an imaginary discipline, treating the invisible subject with the same reverence typically reserved for documenting observable phenomena. Anchor the piece with simple phrase(s) or details positioned subtly, using a limited color palette that feels intentional and cohesive. Embrace the paradox of using analytical visual language to express ideas about human experience: the result should feel like an artifact that proves something ephemeral can be studied, mapped, and understood through careful attention. This is true art.
+
+**Text as a contextual element**: Text is always minimal and visual-first, but let context guide whether that means whisper-quiet labels or bold typographic gestures. A punk venue poster might have larger, more aggressive type than a minimalist ceramics studio identity. Most of the time, the font should be thin. All use of fonts must be design-forward and prioritize visual communication. Regardless of text scale, nothing falls off the page and nothing overlaps. Every element must be contained within the canvas boundaries with proper margins. Check carefully that all text, graphics, and visual elements have breathing room and clear separation. This is non-negotiable for professional execution. **IMPORTANT: Use different fonts if writing text. Search the `./canvas-fonts` directory. Regardless of approach, sophistication is non-negotiable.**
+
+Download and use whatever fonts are needed to make this a reality. Get creative by making the typography actually part of the art itself -- if the art is abstract, bring the font onto the canvas, not typeset digitally.
+
+To push boundaries, follow design instinct/intuition while using the philosophy as a guiding principle. Embrace ultimate design freedom and choice. Push aesthetics and design to the frontier.
+
+**CRITICAL**: To achieve human-crafted quality (not AI-generated), create work that looks like it took countless hours. Make it appear as though someone at the absolute top of their field labored over every detail with painstaking care. Ensure the composition, spacing, color choices, typography - everything screams expert-level craftsmanship. Double-check that nothing overlaps, formatting is flawless, every detail perfect. Create something that could be shown to people to prove expertise and rank as undeniably impressive.
+
+Output the final result as a single, downloadable .pdf or .png file, alongside the design philosophy used as a .md file.
+
+---
+
+## FINAL STEP
+
+**IMPORTANT**: The user ALREADY said "It isn't perfect enough. It must be pristine, a masterpiece of craftsmanship, as if it were about to be displayed in a museum."
+
+**CRITICAL**: To refine the work, avoid adding more graphics; instead refine what has been created and make it extremely crisp, respecting the design philosophy and the principles of minimalism entirely. Rather than adding a fun filter or refactoring a font, consider how to make the existing composition more cohesive with the art. If the instinct is to call a new function or draw a new shape, STOP and instead ask: "How can I make what's already here more of a piece of art?"
+
+Take a second pass. Go back to the code and refine/polish further to make this a philosophically designed masterpiece.
+
+## MULTI-PAGE OPTION
+
+To create additional pages when requested, create more pages along the same creative lines as the design philosophy, but distinctly different as well. Bundle those pages into the same .pdf, or output multiple .pngs. Treat the first page as a single page in a whole coffee table book waiting to be filled. Make the next pages unique twists on, and memories of, the original. Have them almost tell a story in a very tasteful way. Exercise full creative freedom.
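Bundling pages into a single .pdf can be sketched with Pillow (an assumption: Pillow is installed; the page size and background colors here are placeholders for fully composed pages):

```python
from PIL import Image

# Three placeholder pages at roughly A4 proportions; in practice each would
# be a fully rendered composition following the design philosophy.
pages = [Image.new("RGB", (1240, 1754), color)
         for color in ("#101010", "#1a2a3a", "#3a1a2a")]

# Pillow's PDF writer appends the remaining pages to the first.
pages[0].save("book.pdf", save_all=True, append_images=pages[1:])
```

The `save_all=True` / `append_images` pattern is what turns separately composed canvases into one coffee-table-book artifact.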
\ No newline at end of file
diff --git a/skills/canvas-design/canvas-design b/skills/canvas-design/canvas-design
new file mode 120000
index 0000000..6615d9c
--- /dev/null
+++ b/skills/canvas-design/canvas-design
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/canvas-design/
\ No newline at end of file
diff --git a/skills/canvas-design/canvas-fonts/ArsenalSC-OFL.txt b/skills/canvas-design/canvas-fonts/ArsenalSC-OFL.txt
new file mode 100644
index 0000000..1dad6ca
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/ArsenalSC-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2012 The Arsenal Project Authors (andrij.design@gmail.com)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/ArsenalSC-Regular.ttf b/skills/canvas-design/canvas-fonts/ArsenalSC-Regular.ttf
new file mode 100644
index 0000000..fe5409b
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/ArsenalSC-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/BigShoulders-Bold.ttf b/skills/canvas-design/canvas-fonts/BigShoulders-Bold.ttf
new file mode 100644
index 0000000..fc5f8fd
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/BigShoulders-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/BigShoulders-OFL.txt b/skills/canvas-design/canvas-fonts/BigShoulders-OFL.txt
new file mode 100644
index 0000000..b220280
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/BigShoulders-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2019 The Big Shoulders Project Authors (https://github.com/xotypeco/big_shoulders)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/BigShoulders-Regular.ttf b/skills/canvas-design/canvas-fonts/BigShoulders-Regular.ttf
new file mode 100644
index 0000000..de8308c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/BigShoulders-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Boldonse-OFL.txt b/skills/canvas-design/canvas-fonts/Boldonse-OFL.txt
new file mode 100644
index 0000000..1890cb1
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Boldonse-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2024 The Boldonse Project Authors (https://github.com/googlefonts/boldonse)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Boldonse-Regular.ttf b/skills/canvas-design/canvas-fonts/Boldonse-Regular.ttf
new file mode 100644
index 0000000..43fa30a
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Boldonse-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/BricolageGrotesque-Bold.ttf b/skills/canvas-design/canvas-fonts/BricolageGrotesque-Bold.ttf
new file mode 100644
index 0000000..f3b1ded
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/BricolageGrotesque-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/BricolageGrotesque-OFL.txt b/skills/canvas-design/canvas-fonts/BricolageGrotesque-OFL.txt
new file mode 100644
index 0000000..fc2b216
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/BricolageGrotesque-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2022 The Bricolage Grotesque Project Authors (https://github.com/ateliertriay/bricolage)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/BricolageGrotesque-Regular.ttf b/skills/canvas-design/canvas-fonts/BricolageGrotesque-Regular.ttf
new file mode 100644
index 0000000..0674ae3
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/BricolageGrotesque-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/CrimsonPro-Bold.ttf b/skills/canvas-design/canvas-fonts/CrimsonPro-Bold.ttf
new file mode 100644
index 0000000..58730fb
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/CrimsonPro-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/CrimsonPro-Italic.ttf b/skills/canvas-design/canvas-fonts/CrimsonPro-Italic.ttf
new file mode 100644
index 0000000..786a1bd
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/CrimsonPro-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/CrimsonPro-OFL.txt b/skills/canvas-design/canvas-fonts/CrimsonPro-OFL.txt
new file mode 100644
index 0000000..f976fdc
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/CrimsonPro-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2018 The Crimson Pro Project Authors (https://github.com/Fonthausen/CrimsonPro)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/CrimsonPro-Regular.ttf b/skills/canvas-design/canvas-fonts/CrimsonPro-Regular.ttf
new file mode 100644
index 0000000..f5666b9
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/CrimsonPro-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/DMMono-OFL.txt b/skills/canvas-design/canvas-fonts/DMMono-OFL.txt
new file mode 100644
index 0000000..5b17f0c
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/DMMono-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2020 The DM Mono Project Authors (https://www.github.com/googlefonts/dm-mono)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/DMMono-Regular.ttf b/skills/canvas-design/canvas-fonts/DMMono-Regular.ttf
new file mode 100644
index 0000000..7efe813
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/DMMono-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/EricaOne-OFL.txt b/skills/canvas-design/canvas-fonts/EricaOne-OFL.txt
new file mode 100644
index 0000000..490d012
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/EricaOne-OFL.txt
@@ -0,0 +1,94 @@
+Copyright (c) 2011 by LatinoType Limitada (luciano@latinotype.com),
+with Reserved Font Names "Erica One"
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/EricaOne-Regular.ttf b/skills/canvas-design/canvas-fonts/EricaOne-Regular.ttf
new file mode 100644
index 0000000..8bd91d1
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/EricaOne-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/GeistMono-Bold.ttf b/skills/canvas-design/canvas-fonts/GeistMono-Bold.ttf
new file mode 100644
index 0000000..736ff7c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/GeistMono-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/GeistMono-OFL.txt b/skills/canvas-design/canvas-fonts/GeistMono-OFL.txt
new file mode 100644
index 0000000..679a685
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/GeistMono-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2024 The Geist Project Authors (https://github.com/vercel/geist-font.git)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/GeistMono-Regular.ttf b/skills/canvas-design/canvas-fonts/GeistMono-Regular.ttf
new file mode 100644
index 0000000..1a30262
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/GeistMono-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Gloock-OFL.txt b/skills/canvas-design/canvas-fonts/Gloock-OFL.txt
new file mode 100644
index 0000000..363acd3
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Gloock-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2022 The Gloock Project Authors (https://github.com/duartp/gloock)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Gloock-Regular.ttf b/skills/canvas-design/canvas-fonts/Gloock-Regular.ttf
new file mode 100644
index 0000000..3e58c4e
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Gloock-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexMono-Bold.ttf b/skills/canvas-design/canvas-fonts/IBMPlexMono-Bold.ttf
new file mode 100644
index 0000000..247979c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexMono-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexMono-OFL.txt b/skills/canvas-design/canvas-fonts/IBMPlexMono-OFL.txt
new file mode 100644
index 0000000..e423b74
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/IBMPlexMono-OFL.txt
@@ -0,0 +1,93 @@
+Copyright © 2017 IBM Corp. with Reserved Font Name "Plex"
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexMono-Regular.ttf b/skills/canvas-design/canvas-fonts/IBMPlexMono-Regular.ttf
new file mode 100644
index 0000000..601ae94
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexMono-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexSerif-Bold.ttf b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Bold.ttf
new file mode 100644
index 0000000..78f6e50
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexSerif-BoldItalic.ttf b/skills/canvas-design/canvas-fonts/IBMPlexSerif-BoldItalic.ttf
new file mode 100644
index 0000000..369b89d
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexSerif-BoldItalic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexSerif-Italic.ttf b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Italic.ttf
new file mode 100644
index 0000000..a4d859a
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/IBMPlexSerif-Regular.ttf b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Regular.ttf
new file mode 100644
index 0000000..35f454c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/IBMPlexSerif-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSans-Bold.ttf b/skills/canvas-design/canvas-fonts/InstrumentSans-Bold.ttf
new file mode 100644
index 0000000..f602dce
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSans-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSans-BoldItalic.ttf b/skills/canvas-design/canvas-fonts/InstrumentSans-BoldItalic.ttf
new file mode 100644
index 0000000..122b273
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSans-BoldItalic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSans-Italic.ttf b/skills/canvas-design/canvas-fonts/InstrumentSans-Italic.ttf
new file mode 100644
index 0000000..4b98fb8
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSans-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSans-OFL.txt b/skills/canvas-design/canvas-fonts/InstrumentSans-OFL.txt
new file mode 100644
index 0000000..4bb9914
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/InstrumentSans-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2022 The Instrument Sans Project Authors (https://github.com/Instrument/instrument-sans)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSans-Regular.ttf b/skills/canvas-design/canvas-fonts/InstrumentSans-Regular.ttf
new file mode 100644
index 0000000..14c6113
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSans-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSerif-Italic.ttf b/skills/canvas-design/canvas-fonts/InstrumentSerif-Italic.ttf
new file mode 100644
index 0000000..8fa958d
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSerif-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/InstrumentSerif-Regular.ttf b/skills/canvas-design/canvas-fonts/InstrumentSerif-Regular.ttf
new file mode 100644
index 0000000..9763031
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/InstrumentSerif-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Italiana-OFL.txt b/skills/canvas-design/canvas-fonts/Italiana-OFL.txt
new file mode 100644
index 0000000..ba8af21
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Italiana-OFL.txt
@@ -0,0 +1,93 @@
+Copyright (c) 2011, Santiago Orozco (hi@typemade.mx), with Reserved Font Name "Italiana".
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Italiana-Regular.ttf b/skills/canvas-design/canvas-fonts/Italiana-Regular.ttf
new file mode 100644
index 0000000..a9b828c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Italiana-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/JetBrainsMono-Bold.ttf b/skills/canvas-design/canvas-fonts/JetBrainsMono-Bold.ttf
new file mode 100644
index 0000000..1926c80
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/JetBrainsMono-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/JetBrainsMono-OFL.txt b/skills/canvas-design/canvas-fonts/JetBrainsMono-OFL.txt
new file mode 100644
index 0000000..5ceee00
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/JetBrainsMono-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2020 The JetBrains Mono Project Authors (https://github.com/JetBrains/JetBrainsMono)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/JetBrainsMono-Regular.ttf b/skills/canvas-design/canvas-fonts/JetBrainsMono-Regular.ttf
new file mode 100644
index 0000000..436c982
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/JetBrainsMono-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Jura-Light.ttf b/skills/canvas-design/canvas-fonts/Jura-Light.ttf
new file mode 100644
index 0000000..dffbb33
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Jura-Light.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Jura-Medium.ttf b/skills/canvas-design/canvas-fonts/Jura-Medium.ttf
new file mode 100644
index 0000000..4bf91a3
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Jura-Medium.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Jura-OFL.txt b/skills/canvas-design/canvas-fonts/Jura-OFL.txt
new file mode 100644
index 0000000..64ad4c6
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Jura-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2019 The Jura Project Authors (https://github.com/ossobuffo/jura)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/LibreBaskerville-OFL.txt b/skills/canvas-design/canvas-fonts/LibreBaskerville-OFL.txt
new file mode 100644
index 0000000..8c531fa
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/LibreBaskerville-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2012 The Libre Baskerville Project Authors (https://github.com/impallari/Libre-Baskerville) with Reserved Font Name Libre Baskerville.
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/LibreBaskerville-Regular.ttf b/skills/canvas-design/canvas-fonts/LibreBaskerville-Regular.ttf
new file mode 100644
index 0000000..c1abc26
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/LibreBaskerville-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Lora-Bold.ttf b/skills/canvas-design/canvas-fonts/Lora-Bold.ttf
new file mode 100644
index 0000000..edae21e
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Lora-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Lora-BoldItalic.ttf b/skills/canvas-design/canvas-fonts/Lora-BoldItalic.ttf
new file mode 100644
index 0000000..12dea8c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Lora-BoldItalic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Lora-Italic.ttf b/skills/canvas-design/canvas-fonts/Lora-Italic.ttf
new file mode 100644
index 0000000..e24b69b
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Lora-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Lora-OFL.txt b/skills/canvas-design/canvas-fonts/Lora-OFL.txt
new file mode 100644
index 0000000..4cf1b95
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Lora-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2011 The Lora Project Authors (https://github.com/cyrealtype/Lora-Cyrillic), with Reserved Font Name "Lora".
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Lora-Regular.ttf b/skills/canvas-design/canvas-fonts/Lora-Regular.ttf
new file mode 100644
index 0000000..dc751db
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Lora-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/NationalPark-Bold.ttf b/skills/canvas-design/canvas-fonts/NationalPark-Bold.ttf
new file mode 100644
index 0000000..f4d7c02
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/NationalPark-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/NationalPark-OFL.txt b/skills/canvas-design/canvas-fonts/NationalPark-OFL.txt
new file mode 100644
index 0000000..f4ec3fb
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/NationalPark-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2025 The National Park Project Authors (https://github.com/benhoepner/National-Park)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/NationalPark-Regular.ttf b/skills/canvas-design/canvas-fonts/NationalPark-Regular.ttf
new file mode 100644
index 0000000..e4cbfbf
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/NationalPark-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/NothingYouCouldDo-OFL.txt b/skills/canvas-design/canvas-fonts/NothingYouCouldDo-OFL.txt
new file mode 100644
index 0000000..c81eccd
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/NothingYouCouldDo-OFL.txt
@@ -0,0 +1,93 @@
+Copyright (c) 2010, Kimberly Geswein (kimberlygeswein.com)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/NothingYouCouldDo-Regular.ttf b/skills/canvas-design/canvas-fonts/NothingYouCouldDo-Regular.ttf
new file mode 100644
index 0000000..b086bce
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/NothingYouCouldDo-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Outfit-Bold.ttf b/skills/canvas-design/canvas-fonts/Outfit-Bold.ttf
new file mode 100644
index 0000000..f9f2f72
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Outfit-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Outfit-OFL.txt b/skills/canvas-design/canvas-fonts/Outfit-OFL.txt
new file mode 100644
index 0000000..fd0cb99
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Outfit-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2021 The Outfit Project Authors (https://github.com/Outfitio/Outfit-Fonts)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Outfit-Regular.ttf b/skills/canvas-design/canvas-fonts/Outfit-Regular.ttf
new file mode 100644
index 0000000..3939ab2
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Outfit-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/PixelifySans-Medium.ttf b/skills/canvas-design/canvas-fonts/PixelifySans-Medium.ttf
new file mode 100644
index 0000000..95cd372
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/PixelifySans-Medium.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/PixelifySans-OFL.txt b/skills/canvas-design/canvas-fonts/PixelifySans-OFL.txt
new file mode 100644
index 0000000..b02d1b6
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/PixelifySans-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2021 The Pixelify Sans Project Authors (https://github.com/eifetx/Pixelify-Sans)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/PoiretOne-OFL.txt b/skills/canvas-design/canvas-fonts/PoiretOne-OFL.txt
new file mode 100644
index 0000000..607bdad
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/PoiretOne-OFL.txt
@@ -0,0 +1,93 @@
+Copyright (c) 2011, Denis Masharov (denis.masharov@gmail.com)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/PoiretOne-Regular.ttf b/skills/canvas-design/canvas-fonts/PoiretOne-Regular.ttf
new file mode 100644
index 0000000..b339511
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/PoiretOne-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/RedHatMono-Bold.ttf b/skills/canvas-design/canvas-fonts/RedHatMono-Bold.ttf
new file mode 100644
index 0000000..a6e3cf1
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/RedHatMono-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/RedHatMono-OFL.txt b/skills/canvas-design/canvas-fonts/RedHatMono-OFL.txt
new file mode 100644
index 0000000..16cf394
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/RedHatMono-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2024 The Red Hat Project Authors (https://github.com/RedHatOfficial/RedHatFont)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/RedHatMono-Regular.ttf b/skills/canvas-design/canvas-fonts/RedHatMono-Regular.ttf
new file mode 100644
index 0000000..3bf6a69
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/RedHatMono-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Silkscreen-OFL.txt b/skills/canvas-design/canvas-fonts/Silkscreen-OFL.txt
new file mode 100644
index 0000000..a1fe7d5
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Silkscreen-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2001 The Silkscreen Project Authors (https://github.com/googlefonts/silkscreen)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Silkscreen-Regular.ttf b/skills/canvas-design/canvas-fonts/Silkscreen-Regular.ttf
new file mode 100644
index 0000000..8abaa7c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Silkscreen-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/SmoochSans-Medium.ttf b/skills/canvas-design/canvas-fonts/SmoochSans-Medium.ttf
new file mode 100644
index 0000000..0af9ead
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/SmoochSans-Medium.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/SmoochSans-OFL.txt b/skills/canvas-design/canvas-fonts/SmoochSans-OFL.txt
new file mode 100644
index 0000000..4c2f033
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/SmoochSans-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2016 The Smooch Sans Project Authors (https://github.com/googlefonts/smooch-sans)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Tektur-Medium.ttf b/skills/canvas-design/canvas-fonts/Tektur-Medium.ttf
new file mode 100644
index 0000000..34fc797
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Tektur-Medium.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/Tektur-OFL.txt b/skills/canvas-design/canvas-fonts/Tektur-OFL.txt
new file mode 100644
index 0000000..2cad55f
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/Tektur-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2023 The Tektur Project Authors (https://www.github.com/hyvyys/Tektur)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/Tektur-Regular.ttf b/skills/canvas-design/canvas-fonts/Tektur-Regular.ttf
new file mode 100644
index 0000000..f280fba
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/Tektur-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/WorkSans-Bold.ttf b/skills/canvas-design/canvas-fonts/WorkSans-Bold.ttf
new file mode 100644
index 0000000..5c97989
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/WorkSans-Bold.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/WorkSans-BoldItalic.ttf b/skills/canvas-design/canvas-fonts/WorkSans-BoldItalic.ttf
new file mode 100644
index 0000000..54418b8
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/WorkSans-BoldItalic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/WorkSans-Italic.ttf b/skills/canvas-design/canvas-fonts/WorkSans-Italic.ttf
new file mode 100644
index 0000000..40529b6
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/WorkSans-Italic.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/WorkSans-OFL.txt b/skills/canvas-design/canvas-fonts/WorkSans-OFL.txt
new file mode 100644
index 0000000..070f341
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/WorkSans-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2019 The Work Sans Project Authors (https://github.com/weiweihuanghuang/Work-Sans)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/WorkSans-Regular.ttf b/skills/canvas-design/canvas-fonts/WorkSans-Regular.ttf
new file mode 100644
index 0000000..d24586c
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/WorkSans-Regular.ttf differ
diff --git a/skills/canvas-design/canvas-fonts/YoungSerif-OFL.txt b/skills/canvas-design/canvas-fonts/YoungSerif-OFL.txt
new file mode 100644
index 0000000..f09443c
--- /dev/null
+++ b/skills/canvas-design/canvas-fonts/YoungSerif-OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2023 The Young Serif Project Authors (https://github.com/noirblancrouge/YoungSerif)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+https://openfontlicense.org
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/skills/canvas-design/canvas-fonts/YoungSerif-Regular.ttf b/skills/canvas-design/canvas-fonts/YoungSerif-Regular.ttf
new file mode 100644
index 0000000..f454fbe
Binary files /dev/null and b/skills/canvas-design/canvas-fonts/YoungSerif-Regular.ttf differ
diff --git a/skills/code-review-excellence/code-review-excellence b/skills/code-review-excellence/code-review-excellence
new file mode 120000
index 0000000..1addbae
--- /dev/null
+++ b/skills/code-review-excellence/code-review-excellence
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/code-review-excellence/
\ No newline at end of file
diff --git a/skills/competitor-alternatives/competitor-alternatives b/skills/competitor-alternatives/competitor-alternatives
new file mode 120000
index 0000000..cb0db91
--- /dev/null
+++ b/skills/competitor-alternatives/competitor-alternatives
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/competitor-alternatives/
\ No newline at end of file
diff --git a/skills/content-strategy/SKILL.md b/skills/content-strategy/SKILL.md
new file mode 100644
index 0000000..fc64f06
--- /dev/null
+++ b/skills/content-strategy/SKILL.md
@@ -0,0 +1,356 @@
+---
+name: content-strategy
+version: 1.0.0
+description: When the user wants to plan a content strategy, decide what content to create, or figure out what topics to cover. Also use when the user mentions "content strategy," "what should I write about," "content ideas," "blog strategy," "topic clusters," or "content planning." For writing individual pieces, see copywriting. For SEO-specific audits, see seo-audit.
+---
+
+# Content Strategy
+
+You are a content strategist. Your goal is to help plan content that drives traffic, builds authority, and generates leads by being either searchable, shareable, or both.
+
+## Before Planning
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Gather this context (ask if not provided):
+
+### 1. Business Context
+- What does the company do?
+- Who is the ideal customer?
+- What's the primary goal for content? (traffic, leads, brand awareness, thought leadership)
+- What problems does your product solve?
+
+### 2. Customer Research
+- What questions do customers ask before buying?
+- What objections come up in sales calls?
+- What topics appear repeatedly in support tickets?
+- What language do customers use to describe their problems?
+
+### 3. Current State
+- Do you have existing content? What's working?
+- What resources do you have? (writers, budget, time)
+- What content formats can you produce? (written, video, audio)
+
+### 4. Competitive Landscape
+- Who are your main competitors?
+- What content gaps exist in your market?
+
+---
+
+## Searchable vs Shareable
+
+Every piece of content must be searchable, shareable, or both. Prioritize in that order—search traffic is the foundation.
+
+**Searchable content** captures existing demand. Optimized for people actively looking for answers.
+
+**Shareable content** creates demand. Spreads ideas and gets people talking.
+
+### When Writing Searchable Content
+
+- Target a specific keyword or question
+- Match search intent exactly—answer what the searcher wants
+- Use clear titles that match search queries
+- Structure with headings that mirror search patterns
+- Place keywords in title, headings, first paragraph, URL
+- Provide comprehensive coverage (don't leave questions unanswered)
+- Include data, examples, and links to authoritative sources
+- Optimize for AI/LLM discovery: clear positioning, structured content, brand consistency across the web
+
+### When Writing Shareable Content
+
+- Lead with a novel insight, original data, or counterintuitive take
+- Challenge conventional wisdom with well-reasoned arguments
+- Tell stories that make people feel something
+- Create content people want to share to look smart or help others
+- Connect to current trends or emerging problems
+- Share vulnerable, honest experiences others can learn from
+
+---
+
+## Content Types
+
+### Searchable Content Types
+
+**Use-Case Content**
+Formula: [persona] + [use-case]. Targets long-tail keywords.
+- "Project management for designers"
+- "Task tracking for developers"
+- "Client collaboration for freelancers"
+
+**Hub and Spoke**
+Hub = comprehensive overview. Spokes = related subtopics.
+```
+/topic (hub)
+├── /topic/subtopic-1 (spoke)
+├── /topic/subtopic-2 (spoke)
+└── /topic/subtopic-3 (spoke)
+```
+Create hub first, then build spokes. Interlink strategically.
+
+**Note:** Most content works fine under `/blog`. Only use dedicated hub/spoke URL structures for major topics with layered depth (e.g., Atlassian's `/agile` guide). For typical blog posts, `/blog/post-title` is sufficient.
+
+**Template Libraries**
+High-intent keywords + product adoption.
+- Target searches like "marketing plan template"
+- Provide immediate standalone value
+- Show how product enhances the template
+
+### Shareable Content Types
+
+**Thought Leadership**
+- Articulate concepts everyone feels but hasn't named
+- Challenge conventional wisdom with evidence
+- Share vulnerable, honest experiences
+
+**Data-Driven Content**
+- Product data analysis (anonymized insights)
+- Public data analysis (uncover patterns)
+- Original research (run experiments, share results)
+
+**Expert Roundups**
+15-30 experts answering one specific question. Built-in distribution.
+
+**Case Studies**
+Structure: Challenge → Solution → Results → Key learnings
+
+**Meta Content**
+Behind-the-scenes transparency. "How We Got Our First $5k MRR," "Why We Chose Debt Over VC."
+
+For programmatic content at scale, see **programmatic-seo** skill.
+
+---
+
+## Content Pillars and Topic Clusters
+
+Content pillars are the 3-5 core topics your brand will own. Each pillar spawns a cluster of related content.
+
+Most of the time, all content can live under `/blog` with good internal linking between related posts. Dedicated pillar pages with custom URL structures (like `/guides/topic`) are only needed when you're building comprehensive resources with multiple layers of depth.
+
+### How to Identify Pillars
+
+1. **Product-led**: What problems does your product solve?
+2. **Audience-led**: What does your ICP need to learn?
+3. **Search-led**: What topics have volume in your space?
+4. **Competitor-led**: What are competitors ranking for?
+
+### Pillar Structure
+
+```
+Pillar Topic (Hub)
+├── Subtopic Cluster 1
+│   ├── Article A
+│   ├── Article B
+│   └── Article C
+├── Subtopic Cluster 2
+│   ├── Article D
+│   ├── Article E
+│   └── Article F
+└── Subtopic Cluster 3
+    ├── Article G
+    ├── Article H
+    └── Article I
+```
+
+### Pillar Criteria
+
+Good pillars should:
+- Align with your product/service
+- Match what your audience cares about
+- Have search volume and/or social interest
+- Be broad enough for many subtopics
+
+---
+
+## Keyword Research by Buyer Stage
+
+Map topics to the buyer's journey using proven keyword modifiers:
+
+### Awareness Stage
+Modifiers: "what is," "how to," "guide to," "introduction to"
+
+Example: If customers ask about project management basics:
+- "What is Agile Project Management"
+- "Guide to Sprint Planning"
+- "How to Run a Standup Meeting"
+
+### Consideration Stage
+Modifiers: "best," "top," "vs," "alternatives," "comparison"
+
+Example: If customers evaluate multiple tools:
+- "Best Project Management Tools for Remote Teams"
+- "Asana vs Trello vs Monday"
+- "Basecamp Alternatives"
+
+### Decision Stage
+Modifiers: "pricing," "reviews," "demo," "trial," "buy"
+
+Example: If pricing comes up in sales calls:
+- "Project Management Tool Pricing Comparison"
+- "How to Choose the Right Plan"
+- "[Product] Reviews"
+
+### Implementation Stage
+Modifiers: "templates," "examples," "tutorial," "how to use," "setup"
+
+Example: If support tickets show implementation struggles:
+- "Project Template Library"
+- "Step-by-Step Setup Tutorial"
+- "How to Use [Feature]"
+
+---
+
+## Content Ideation Sources
+
+### 1. Keyword Data
+
+If user provides keyword exports (Ahrefs, SEMrush, GSC), analyze for:
+- Topic clusters (group related keywords)
+- Buyer stage (awareness/consideration/decision/implementation)
+- Search intent (informational, commercial, transactional)
+- Quick wins (low competition + decent volume + high relevance)
+- Content gaps (keywords competitors rank for that you don't)
+
+Output as prioritized table:
+| Keyword | Volume | Difficulty | Buyer Stage | Content Type | Priority |
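The quick-win filter described above can be sketched as a small script (a minimal sketch; the keyword records and thresholds are illustrative, not taken from any real export):

```python
# Flag "quick wins": low competition + decent volume + high relevance.
# Thresholds are illustrative; tune them to your niche and data source.
keywords = [
    {"keyword": "project management for designers", "volume": 900, "difficulty": 12, "relevance": 9},
    {"keyword": "what is agile", "volume": 40000, "difficulty": 78, "relevance": 5},
    {"keyword": "sprint planning template", "volume": 2400, "difficulty": 25, "relevance": 8},
]

quick_wins = sorted(
    (k for k in keywords
     if k["difficulty"] <= 30 and k["volume"] >= 500 and k["relevance"] >= 7),
    key=lambda k: k["volume"],
    reverse=True,
)

for k in quick_wins:
    print(f"{k['keyword']}: vol {k['volume']}, difficulty {k['difficulty']}")
```

The surviving records can then feed the prioritized table, with buyer stage and content type filled in by hand.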
+
+### 2. Call Transcripts
+
+If user provides sales or customer call transcripts, extract:
+- Questions asked → FAQ content or blog posts
+- Pain points → problems in their own words
+- Objections → content to address proactively
+- Language patterns → exact phrases to use (voice of customer)
+- Competitor mentions → what they compared you to
+
+Output content ideas with supporting quotes.
+
+### 3. Survey Responses
+
+If user provides survey data, mine for:
+- Open-ended responses (topics and language)
+- Common themes (30%+ mention = high priority)
+- Resource requests (what they wish existed)
+- Content preferences (formats they want)
+
+### 4. Forum Research
+
+Use web search to find content ideas:
+
+**Reddit:** `site:reddit.com [topic]`
+- Top posts in relevant subreddits
+- Questions and frustrations in comments
+- Upvoted answers (validates what resonates)
+
+**Quora:** `site:quora.com [topic]`
+- Most-followed questions
+- Highly upvoted answers
+
+**Other:** Indie Hackers, Hacker News, Product Hunt, industry Slack/Discord
+
+Extract: FAQs, misconceptions, debates, problems being solved, terminology used.
+
+### 5. Competitor Analysis
+
+Use web search to analyze competitor content:
+
+**Find their content:** `site:competitor.com/blog`
+
+**Analyze:**
+- Top-performing posts (comments, shares)
+- Topics covered repeatedly
+- Gaps they haven't covered
+- Case studies (customer problems, use cases, results)
+- Content structure (pillars, categories, formats)
+
+**Identify opportunities:**
+- Topics you can cover better
+- Angles they're missing
+- Outdated content to improve on
+
+### 6. Sales and Support Input
+
+Extract from customer-facing teams:
+- Common objections
+- Repeated questions
+- Support ticket patterns
+- Success stories
+- Feature requests and underlying problems
+
+---
+
+## Prioritizing Content Ideas
+
+Score each idea on four factors:
+
+### 1. Customer Impact (40%)
+- How frequently did this topic come up in research?
+- What percentage of customers face this challenge?
+- How emotionally charged was this pain point?
+- What's the potential LTV of customers with this need?
+
+### 2. Content-Market Fit (30%)
+- Does this align with problems your product solves?
+- Can you offer unique insights from customer research?
+- Do you have customer stories to support this?
+- Will this naturally lead to product interest?
+
+### 3. Search Potential (20%)
+- What's the monthly search volume?
+- How competitive is this topic?
+- Are there related long-tail opportunities?
+- Is search interest growing or declining?
+
+### 4. Resource Requirements (10%)
+- Do you have expertise to create authoritative content?
+- What additional research is needed?
+- What assets (graphics, data, examples) will you need?
+
+### Scoring Template
+
+| Idea | Customer Impact (40%) | Content-Market Fit (30%) | Search Potential (20%) | Resources (10%) | Total |
+|------|----------------------|-------------------------|----------------------|-----------------|-------|
+| Topic A | 8 | 9 | 7 | 6 | 7.9 |
+| Topic B | 6 | 7 | 9 | 8 | 7.1 |
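The weighted totals in the template can be computed directly (a minimal sketch; the topic names and factor scores are the illustrative values from the table above):

```python
# Weights from the four prioritization factors above.
WEIGHTS = {"customer_impact": 0.40, "content_market_fit": 0.30,
           "search_potential": 0.20, "resources": 0.10}

def total_score(scores):
    """Weighted total for one idea; each factor is scored 0-10."""
    return round(sum(scores[f] * w for f, w in WEIGHTS.items()), 1)

topic_a = {"customer_impact": 8, "content_market_fit": 9,
           "search_potential": 7, "resources": 6}
topic_b = {"customer_impact": 6, "content_market_fit": 7,
           "search_potential": 9, "resources": 8}

print(total_score(topic_a))  # 7.9
print(total_score(topic_b))  # 7.1
```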
+
+---
+
+## Output Format
+
+When creating a content strategy, provide:
+
+### 1. Content Pillars
+- 3-5 pillars with rationale
+- Subtopic clusters for each pillar
+- How pillars connect to product
+
+### 2. Priority Topics
+For each recommended piece:
+- Topic/title
+- Searchable, shareable, or both
+- Content type (use-case, hub/spoke, thought leadership, etc.)
+- Target keyword and buyer stage
+- Why this topic (customer research backing)
+
+### 3. Topic Cluster Map
+Visual or structured representation of how content interconnects.
+
+---
+
+## Task-Specific Questions
+
+1. What patterns emerge from your last 10 customer conversations?
+2. What questions keep coming up in sales calls?
+3. Where are competitors' content efforts falling short?
+4. What unique insights from customer research aren't being shared elsewhere?
+5. Which existing content drives the most conversions, and why?
+
+---
+
+## Related Skills
+
+- **copywriting**: For writing individual content pieces
+- **seo-audit**: For technical SEO and on-page optimization
+- **programmatic-seo**: For scaled content generation
+- **email-sequence**: For email-based content
+- **social-content**: For social media content
diff --git a/skills/content-strategy/content-strategy b/skills/content-strategy/content-strategy
new file mode 120000
index 0000000..6d9d066
--- /dev/null
+++ b/skills/content-strategy/content-strategy
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/content-strategy/
\ No newline at end of file
diff --git a/skills/copy-editing/SKILL.md b/skills/copy-editing/SKILL.md
new file mode 100644
index 0000000..6d6227f
--- /dev/null
+++ b/skills/copy-editing/SKILL.md
@@ -0,0 +1,446 @@
+---
+name: copy-editing
+version: 1.0.0
+description: "When the user wants to edit, review, or improve existing marketing copy. Also use when the user mentions 'edit this copy,' 'review my copy,' 'copy feedback,' 'proofread,' 'polish this,' 'make this better,' or 'copy sweep.' This skill provides a systematic approach to editing marketing copy through multiple focused passes."
+---
+
+# Copy Editing
+
+You are an expert copy editor specializing in marketing and conversion copy. Your goal is to systematically improve existing copy through focused editing passes while preserving the core message.
+
+## Core Philosophy
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before editing. Use brand voice and customer language from that context to guide your edits.
+
+Good copy editing isn't about rewriting—it's about enhancing. Each pass focuses on one dimension, catching issues that get missed when you try to fix everything at once.
+
+**Key principles:**
+- Don't change the core message; focus on enhancing it
+- Multiple focused passes beat one unfocused review
+- Each edit should have a clear reason
+- Preserve the author's voice while improving clarity
+
+---
+
+## The Seven Sweeps Framework
+
+Edit copy through seven sequential passes, each focusing on one dimension. After each sweep, loop back to check previous sweeps aren't compromised.
+
+### Sweep 1: Clarity
+
+**Focus:** Can the reader understand what you're saying?
+
+**What to check:**
+- Confusing sentence structures
+- Unclear pronoun references
+- Jargon or insider language
+- Ambiguous statements
+- Missing context
+
+**Common clarity killers:**
+- Sentences trying to say too much
+- Abstract language instead of concrete
+- Assuming reader knowledge they don't have
+- Burying the point in qualifications
+
+**Process:**
+1. Read through quickly, highlighting unclear parts
+2. Don't correct yet—just note problem areas
+3. After marking issues, recommend specific edits
+4. Verify edits maintain the original intent
+
+**After this sweep:** Confirm the "Rule of One" (one main idea per section) and "You Rule" (copy speaks to the reader) are intact.
+
+---
+
+### Sweep 2: Voice and Tone
+
+**Focus:** Is the copy consistent in how it sounds?
+
+**What to check:**
+- Shifts between formal and casual
+- Inconsistent brand personality
+- Mood changes that feel jarring
+- Word choices that don't match the brand
+
+**Common voice issues:**
+- Starting casual, becoming corporate
+- Mixing "we" and "the company" references
+- Humor in some places, serious in others (unintentionally)
+- Technical language appearing randomly
+
+**Process:**
+1. Read aloud to hear inconsistencies
+2. Mark where tone shifts unexpectedly
+3. Recommend edits that smooth transitions
+4. Ensure personality remains throughout
+
+**After this sweep:** Return to Clarity Sweep to ensure voice edits didn't introduce confusion.
+
+---
+
+### Sweep 3: So What
+
+**Focus:** Does every claim answer "why should I care?"
+
+**What to check:**
+- Features without benefits
+- Claims without consequences
+- Statements that don't connect to reader's life
+- Missing "which means..." bridges
+
+**The So What test:**
+For every statement, ask "Okay, so what?" If the copy doesn't answer that question with a deeper benefit, it needs work.
+
+❌ "Our platform uses AI-powered analytics"
+*So what?*
+✅ "Our AI-powered analytics surface insights you'd miss manually—so you can make better decisions in half the time"
+
+**Common So What failures:**
+- Feature lists without benefit connections
+- Impressive-sounding claims that don't land
+- Technical capabilities without outcomes
+- Company achievements that don't help the reader
+
+**Process:**
+1. Read each claim and literally ask "so what?"
+2. Highlight claims missing the answer
+3. Add the benefit bridge or deeper meaning
+4. Ensure benefits connect to real reader desires
+
+**After this sweep:** Return to Voice and Tone, then Clarity.
+
+---
+
+### Sweep 4: Prove It
+
+**Focus:** Is every claim supported with evidence?
+
+**What to check:**
+- Unsubstantiated claims
+- Missing social proof
+- Assertions without backup
+- "Best" or "leading" without evidence
+
+**Types of proof to look for:**
+- Testimonials with names and specifics
+- Case study references
+- Statistics and data
+- Third-party validation
+- Guarantees and risk reversals
+- Customer logos
+- Review scores
+
+**Common proof gaps:**
+- "Trusted by thousands" (which thousands?)
+- "Industry-leading" (according to whom?)
+- "Customers love us" (show them saying it)
+- Results claims without specifics
+
+**Process:**
+1. Identify every claim that needs proof
+2. Check if proof exists nearby
+3. Flag unsupported assertions
+4. Recommend adding proof or softening claims
+
+**After this sweep:** Return to So What, Voice and Tone, then Clarity.
+
+---
+
+### Sweep 5: Specificity
+
+**Focus:** Is the copy concrete enough to be compelling?
+
+**What to check:**
+- Vague language ("improve," "enhance," "optimize")
+- Generic statements that could apply to anyone
+- Round numbers that feel made up
+- Missing details that would make it real
+
+**Specificity upgrades:**
+
+| Vague | Specific |
+|-------|----------|
+| Save time | Save 4 hours every week |
+| Many customers | 2,847 teams |
+| Fast results | Results in 14 days |
+| Improve your workflow | Cut your reporting time in half |
+| Great support | Response within 2 hours |
+
+**Common specificity issues:**
+- Adjectives doing the work nouns should do
+- Benefits without quantification
+- Outcomes without timeframes
+- Claims without concrete examples
+
+**Process:**
+1. Highlight vague words and phrases
+2. Ask "Can this be more specific?"
+3. Add numbers, timeframes, or examples
+4. Remove content that can't be made specific (it's probably filler)
+
+**After this sweep:** Return to Prove It, So What, Voice and Tone, then Clarity.
+
+---
+
+### Sweep 6: Heightened Emotion
+
+**Focus:** Does the copy make the reader feel something?
+
+**What to check:**
+- Flat, informational language
+- Missing emotional triggers
+- Pain points mentioned but not felt
+- Aspirations stated but not evoked
+
+**Emotional dimensions to consider:**
+- Pain of the current state
+- Frustration with alternatives
+- Fear of missing out
+- Desire for transformation
+- Pride in making smart choices
+- Relief from solving the problem
+
+**Techniques for heightening emotion:**
+- Paint the "before" state vividly
+- Use sensory language
+- Tell micro-stories
+- Reference shared experiences
+- Ask questions that prompt reflection
+
+**Process:**
+1. Read for emotional impact—does it move you?
+2. Identify flat sections that should resonate
+3. Add emotional texture while staying authentic
+4. Ensure emotion serves the message (not manipulation)
+
+**After this sweep:** Return to Specificity, Prove It, So What, Voice and Tone, then Clarity.
+
+---
+
+### Sweep 7: Zero Risk
+
+**Focus:** Have we removed every barrier to action?
+
+**What to check:**
+- Friction near CTAs
+- Unanswered objections
+- Missing trust signals
+- Unclear next steps
+- Hidden costs or surprises
+
+**Risk reducers to look for:**
+- Money-back guarantees
+- Free trials
+- "No credit card required"
+- "Cancel anytime"
+- Social proof near CTA
+- Clear expectations of what happens next
+- Privacy assurances
+
+**Common risk issues:**
+- CTA asks for commitment without earning trust
+- Objections raised but not addressed
+- Fine print that creates doubt
+- Vague "Contact us" instead of clear next step
+
+**Process:**
+1. Focus on sections near CTAs
+2. List every reason someone might hesitate
+3. Check if the copy addresses each concern
+4. Add risk reversals or trust signals as needed
+
+**After this sweep:** Return through all previous sweeps one final time: Heightened Emotion, Specificity, Prove It, So What, Voice and Tone, Clarity.
+
+---
+
+## Quick-Pass Editing Checks
+
+Use these for faster reviews when a full seven-sweep process isn't needed.
+
+### Word-Level Checks
+
+**Cut these words:**
+- Very, really, extremely, incredibly (weak intensifiers)
+- Just, actually, basically (filler)
+- In order to (use "to")
+- That (often unnecessary)
+- Things, stuff (vague)
+
+**Replace these:**
+
+| Weak | Strong |
+|------|--------|
+| Utilize | Use |
+| Implement | Set up |
+| Leverage | Use |
+| Facilitate | Help |
+| Innovative | New |
+| Robust | Strong |
+| Seamless | Smooth |
+| Cutting-edge | New/Modern |
+
+**Watch for:**
+- Adverbs (usually unnecessary)
+- Passive voice (switch to active)
+- Nominalizations (verb → noun: "make a decision" → "decide")
+
+### Sentence-Level Checks
+
+- One idea per sentence
+- Vary sentence length (mix short and long)
+- Front-load important information
+- Max 3 conjunctions per sentence
+- No more than 25 words per sentence (usually)
+
+### Paragraph-Level Checks
+
+- One topic per paragraph
+- Short paragraphs (2-4 sentences for web)
+- Strong opening sentences
+- Logical flow between paragraphs
+- White space for scannability
+
+---
+
+## Copy Editing Checklist
+
+### Before You Start
+- [ ] Understand the goal of this copy
+- [ ] Know the target audience
+- [ ] Identify the desired action
+- [ ] Read through once without editing
+
+### Clarity (Sweep 1)
+- [ ] Every sentence is immediately understandable
+- [ ] No jargon without explanation
+- [ ] Pronouns have clear references
+- [ ] No sentences trying to do too much
+
+### Voice & Tone (Sweep 2)
+- [ ] Consistent formality level throughout
+- [ ] Brand personality maintained
+- [ ] No jarring shifts in mood
+- [ ] Reads well aloud
+
+### So What (Sweep 3)
+- [ ] Every feature connects to a benefit
+- [ ] Claims answer "why should I care?"
+- [ ] Benefits connect to real desires
+- [ ] No impressive-but-empty statements
+
+### Prove It (Sweep 4)
+- [ ] Claims are substantiated
+- [ ] Social proof is specific and attributed
+- [ ] Numbers and stats have sources
+- [ ] No unearned superlatives
+
+### Specificity (Sweep 5)
+- [ ] Vague words replaced with concrete ones
+- [ ] Numbers and timeframes included
+- [ ] Generic statements made specific
+- [ ] Filler content removed
+
+### Heightened Emotion (Sweep 6)
+- [ ] Copy evokes feeling, not just information
+- [ ] Pain points feel real
+- [ ] Aspirations feel achievable
+- [ ] Emotion serves the message authentically
+
+### Zero Risk (Sweep 7)
+- [ ] Objections addressed near CTA
+- [ ] Trust signals present
+- [ ] Next steps are crystal clear
+- [ ] Risk reversals stated (guarantee, trial, etc.)
+
+### Final Checks
+- [ ] No typos or grammatical errors
+- [ ] Consistent formatting
+- [ ] Links work (if applicable)
+- [ ] Core message preserved through all edits
+
+---
+
+## Common Copy Problems & Fixes
+
+### Problem: Wall of Features
+**Symptom:** List of what the product does without why it matters
+**Fix:** Add "which means..." after each feature to bridge to benefits
+
+### Problem: Corporate Speak
+**Symptom:** "Leverage synergies to optimize outcomes"
+**Fix:** Ask "How would a human say this?" and use those words
+
+### Problem: Weak Opening
+**Symptom:** Starting with company history or vague statements
+**Fix:** Lead with the reader's problem or desired outcome
+
+### Problem: Buried CTA
+**Symptom:** The ask comes after too much buildup, or isn't clear
+**Fix:** Make the CTA obvious, early, and repeated
+
+### Problem: No Proof
+**Symptom:** "Customers love us" with no evidence
+**Fix:** Add specific testimonials, numbers, or case references
+
+### Problem: Generic Claims
+**Symptom:** "We help businesses grow"
+**Fix:** Specify who, how, and by how much
+
+### Problem: Mixed Audiences
+**Symptom:** Copy tries to speak to everyone, resonates with no one
+**Fix:** Pick one audience and write directly to them
+
+### Problem: Feature Overload
+**Symptom:** Listing every capability, overwhelming the reader
+**Fix:** Focus on 3-5 key benefits that matter most to the audience
+
+---
+
+## Working with Copy Sweeps
+
+When editing collaboratively:
+
+1. **Run a sweep and present findings** - Show what you found, why it's an issue
+2. **Recommend specific edits** - Don't just identify problems; propose solutions
+3. **Request the updated copy** - Let the author make final decisions
+4. **Verify previous sweeps** - After each round of edits, re-check earlier sweeps
+5. **Repeat until clean** - Continue until a full sweep finds no new issues
+
+This iterative process ensures each edit doesn't create new problems while respecting the author's ownership of the copy.
+
+---
+
+## References
+
+- [Plain English Alternatives](references/plain-english-alternatives.md): Replace complex words with simpler alternatives
+
+---
+
+## Task-Specific Questions
+
+1. What's the goal of this copy? (Awareness, conversion, retention)
+2. What action should readers take?
+3. Are there specific concerns or known issues?
+4. What proof/evidence do you have available?
+
+---
+
+## Related Skills
+
+- **copywriting**: For writing new copy from scratch (use this skill to edit after your first draft is complete)
+- **page-cro**: For broader page optimization beyond copy
+- **marketing-psychology**: For understanding why certain edits improve conversion
+- **ab-test-setup**: For testing copy variations
+
+---
+
+## When to Use Each Skill
+
+| Task | Skill to Use |
+|------|--------------|
+| Writing new page copy from scratch | copywriting |
+| Reviewing and improving existing copy | copy-editing (this skill) |
+| Editing copy you just wrote | copy-editing (this skill) |
+| Structural or strategic page changes | page-cro |
diff --git a/skills/copy-editing/copy-editing b/skills/copy-editing/copy-editing
new file mode 120000
index 0000000..2d3b2e0
--- /dev/null
+++ b/skills/copy-editing/copy-editing
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/copy-editing/
\ No newline at end of file
diff --git a/skills/copy-editing/references/plain-english-alternatives.md b/skills/copy-editing/references/plain-english-alternatives.md
new file mode 100644
index 0000000..18db3b0
--- /dev/null
+++ b/skills/copy-editing/references/plain-english-alternatives.md
@@ -0,0 +1,376 @@
+# Plain English Alternatives
+
+Replace complex or pompous words with plain English alternatives.
+
+Source: Plain English Campaign A-Z of Alternative Words (2001), Australian Government Style Manual (2024), plainlanguage.gov
+
+---
+
+## A
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| (an) absence of | no, none |
+| abundance | enough, plenty, many |
+| accede to | allow, agree to |
+| accelerate | speed up |
+| accommodate | meet, hold, house |
+| accomplish | do, finish, complete |
+| accordingly | so, therefore |
+| acknowledge | thank you for, confirm |
+| acquire | get, buy, obtain |
+| additional | extra, more |
+| adjacent | next to |
+| advantageous | useful, helpful |
+| advise | tell, say, inform |
+| aforesaid | this, earlier |
+| aggregate | total |
+| alleviate | ease, reduce |
+| allocate | give, share, assign |
+| alternative | other, choice |
+| ameliorate | improve |
+| anticipate | expect |
+| apparent | clear, obvious |
+| appreciable | large, noticeable |
+| appropriate | proper, right, suitable |
+| approximately | about, roughly |
+| ascertain | find out |
+| assistance | help |
+| at the present time | now |
+| attempt | try |
+| authorise | allow, let |
+
+---
+
+## B
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| belated | late |
+| beneficial | helpful, useful |
+| bestow | give |
+| by means of | by |
+
+---
+
+## C
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| calculate | work out |
+| cease | stop, end |
+| circumvent | avoid, get around |
+| clarification | explanation |
+| commence | start, begin |
+| communicate | tell, talk, write |
+| competent | able |
+| compile | collect, make |
+| complete | fill in, finish |
+| component | part |
+| comprise | include, make up |
+| (it is) compulsory | (you) must |
+| conceal | hide |
+| concerning | about |
+| consequently | so |
+| considerable | large, great, much |
+| constitute | make up, form |
+| consult | ask, talk to |
+| consumption | use |
+| currently | now |
+
+---
+
+## D
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| deduct | take off |
+| deem | treat as, consider |
+| defer | delay, put off |
+| deficiency | lack |
+| delete | remove, cross out |
+| demonstrate | show, prove |
+| denote | show, mean |
+| designate | name, appoint |
+| despatch/dispatch | send |
+| determine | decide, find out |
+| detrimental | harmful |
+| diminish | reduce, lessen |
+| discontinue | stop |
+| disseminate | spread, distribute |
+| documentation | papers, documents |
+| due to the fact that | because |
+| duration | time, length |
+| dwelling | home |
+
+---
+
+## E
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| economical | cheap, good value |
+| eligible | allowed, qualified |
+| elucidate | explain |
+| enable | allow |
+| encounter | meet |
+| endeavour | try |
+| enquire | ask |
+| ensure | make sure |
+| entitlement | right |
+| envisage | expect |
+| equivalent | equal, the same |
+| erroneous | wrong |
+| establish | set up, show |
+| evaluate | assess, test |
+| excessive | too much |
+| exclusively | only |
+| exempt | free from |
+| expedite | speed up |
+| expenditure | spending |
+| expire | run out |
+
+---
+
+## F
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| fabricate | make |
+| facilitate | help, make possible |
+| finalise | finish, complete |
+| following | after |
+| for the purpose of | to, for |
+| for the reason that | because |
+| forthwith | now, at once |
+| forward | send |
+| frequently | often |
+| furnish | give, provide |
+| furthermore | also, and |
+
+---
+
+## G-H
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| generate | produce, create |
+| henceforth | from now on |
+| hitherto | until now |
+
+---
+
+## I
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| if and when | if, when |
+| illustrate | show |
+| immediately | at once, now |
+| implement | carry out, do |
+| imply | suggest |
+| in accordance with | under, following |
+| in addition to | and, also |
+| in conjunction with | with |
+| in excess of | more than |
+| in lieu of | instead of |
+| in order to | to |
+| in receipt of | receive |
+| in relation to | about |
+| in respect of | about, for |
+| in the event of | if |
+| in the majority of instances | most, usually |
+| in the near future | soon |
+| in view of the fact that | because |
+| inception | start |
+| indicate | show, suggest |
+| inform | tell |
+| initiate | start, begin |
+| insert | put in |
+| instances | cases |
+| irrespective of | despite |
+| issue | give, send |
+
+---
+
+## L-M
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| (a) large number of | many |
+| liaise with | work with, talk to |
+| locality | place, area |
+| locate | find |
+| magnitude | size |
+| (it is) mandatory | (you) must |
+| manner | way |
+| modification | change |
+| moreover | also, and |
+
+---
+
+## N-O
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| negligible | small |
+| nevertheless | but, however |
+| notify | tell |
+| notwithstanding | despite, even if |
+| numerous | many |
+| objective | aim, goal |
+| (it is) obligatory | (you) must |
+| obtain | get |
+| occasioned by | caused by |
+| on behalf of | for |
+| on numerous occasions | often |
+| on receipt of | when you get |
+| on the grounds that | because |
+| operate | work, run |
+| optimum | best |
+| option | choice |
+| otherwise | or |
+| outstanding | unpaid |
+| owing to | because |
+
+---
+
+## P
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| partially | partly |
+| participate | take part |
+| particulars | details |
+| per annum | a year |
+| perform | do |
+| permit | let, allow |
+| personnel | staff, people |
+| peruse | read |
+| possess | have, own |
+| practically | almost |
+| predominant | main |
+| prescribe | set |
+| preserve | keep |
+| previous | earlier, before |
+| principal | main |
+| prior to | before |
+| proceed | go ahead |
+| procure | get |
+| prohibit | ban, stop |
+| promptly | quickly |
+| provide | give |
+| provided that | if |
+| provisions | rules, terms |
+| proximity | nearness |
+| purchase | buy |
+| pursuant to | under |
+
+---
+
+## R
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| reconsider | think again |
+| reduction | cut |
+| referred to as | called |
+| regarding | about |
+| reimburse | repay |
+| reiterate | repeat |
+| relating to | about |
+| remain | stay |
+| remainder | rest |
+| remuneration | pay |
+| render | make, give |
+| represent | stand for |
+| request | ask |
+| require | need |
+| residence | home |
+| retain | keep |
+| revised | changed, new |
+
+---
+
+## S
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| scrutinise | examine, check |
+| select | choose |
+| solely | only |
+| specified | given, stated |
+| state | say |
+| statutory | legal, by law |
+| subject to | depending on |
+| submit | send, give |
+| subsequent to | after |
+| subsequently | later |
+| substantial | large, much |
+| sufficient | enough |
+| supplement | add to |
+| supplementary | extra |
+
+---
+
+## T-U
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| terminate | end, stop |
+| thereafter | then |
+| thereby | by this |
+| thus | so |
+| to date | so far |
+| transfer | move |
+| transmit | send |
+| ultimately | in the end |
+| undertake | agree, do |
+| uniform | same |
+| utilise | use |
+
+---
+
+## V-Z
+
+| Complex | Plain Alternative |
+|---------|-------------------|
+| variation | change |
+| virtually | almost |
+| visualise | imagine, see |
+| ways and means | ways |
+| whatsoever | any |
+| with a view to | to |
+| with effect from | from |
+| with reference to | about |
+| with regard to | about |
+| with respect to | about |
+| zone | area |
+
+---
+
+## Phrases to Remove Entirely
+
+These phrases often add nothing. Delete them:
+
+- a total of
+- absolutely
+- actually
+- all things being equal
+- as a matter of fact
+- at the end of the day
+- at this moment in time
+- basically
+- currently (when "now" or nothing works)
+- I am of the opinion that (use: I think)
+- in due course (use: soon, or say when)
+- in the final analysis
+- it should be understood
+- last but not least
+- obviously
+- of course
+- quite
+- really
+- the fact of the matter is
+- to all intents and purposes
+- very
diff --git a/skills/copywriting/SKILL.md b/skills/copywriting/SKILL.md
new file mode 100644
index 0000000..d49f6e3
--- /dev/null
+++ b/skills/copywriting/SKILL.md
@@ -0,0 +1,251 @@
+---
+name: copywriting
+version: 1.0.0
+description: When the user wants to write, rewrite, or improve marketing copy for any page — including homepage, landing pages, pricing pages, feature pages, about pages, or product pages. Also use when the user says "write copy for," "improve this copy," "rewrite this page," "marketing copy," "headline help," or "CTA copy." For email copy, see email-sequence. For popup copy, see popup-cro.
+---
+
+# Copywriting
+
+You are an expert conversion copywriter. Your goal is to write marketing copy that is clear, compelling, and drives action.
+
+## Before Writing
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Gather this context (ask if not provided):
+
+### 1. Page Purpose
+- What type of page? (homepage, landing page, pricing, feature, about)
+- What is the ONE primary action you want visitors to take?
+
+### 2. Audience
+- Who is the ideal customer?
+- What problem are they trying to solve?
+- What objections or hesitations do they have?
+- What language do they use to describe their problem?
+
+### 3. Product/Offer
+- What are you selling or offering?
+- What makes it different from alternatives?
+- What's the key transformation or outcome?
+- Any proof points (numbers, testimonials, case studies)?
+
+### 4. Context
+- Where is traffic coming from? (ads, organic, email)
+- What do visitors already know before arriving?
+
+---
+
+## Copywriting Principles
+
+### Clarity Over Cleverness
+If you have to choose between clear and creative, choose clear.
+
+### Benefits Over Features
+Features: What it does. Benefits: What that means for the customer.
+
+### Specificity Over Vagueness
+- Vague: "Save time on your workflow"
+- Specific: "Cut your weekly reporting from 4 hours to 15 minutes"
+
+### Customer Language Over Company Language
+Use words your customers use. Mirror voice-of-customer from reviews, interviews, support tickets.
+
+### One Idea Per Section
+Each section should advance one argument. Build a logical flow down the page.
+
+---
+
+## Writing Style Rules
+
+### Core Principles
+
+1. **Simple over complex** — "Use" not "utilize," "help" not "facilitate"
+2. **Specific over vague** — Avoid "streamline," "optimize," "innovative"
+3. **Active over passive** — "We generate reports" not "Reports are generated"
+4. **Confident over qualified** — Remove "almost," "very," "really"
+5. **Show over tell** — Describe the outcome instead of using adverbs
+6. **Honest over sensational** — Never fabricate statistics or testimonials
+
+### Quick Quality Check
+
+- Jargon that could confuse outsiders?
+- Sentences trying to do too much?
+- Passive voice constructions?
+- Exclamation points? (remove them)
+- Marketing buzzwords without substance?
+
+For thorough line-by-line review, use the **copy-editing** skill after your draft.
+
+---
+
+## Best Practices
+
+### Be Direct
+Get to the point. Don't bury the value in qualifications.
+
+❌ Slack lets you share files instantly, from documents to images, directly in your conversations
+
+✅ Need to share a screenshot? Send as many documents, images, and audio files as your heart desires.
+
+### Use Rhetorical Questions
+Questions engage readers and make them think about their own situation.
+- "Hate returning stuff to Amazon?"
+- "Tired of chasing approvals?"
+
+### Use Analogies When Helpful
+Analogies make abstract concepts concrete and memorable.
+
+### Pepper in Humor (When Appropriate)
+Puns and wit make copy memorable — but only if it fits the brand and doesn't undermine clarity.
+
+---
+
+## Page Structure Framework
+
+### Above the Fold
+
+**Headline**
+- Your single most important message
+- Communicate core value proposition
+- Specific > generic
+
+**Example formulas:**
+- "{Achieve outcome} without {pain point}"
+- "The {category} for {audience}"
+- "Never {unpleasant event} again"
+- "{Question highlighting main pain point}"
+
+**For comprehensive headline formulas**: See [references/copy-frameworks.md](references/copy-frameworks.md)
+
+**For natural transition phrases**: See [references/natural-transitions.md](references/natural-transitions.md)
+
+**Subheadline**
+- Expands on headline
+- Adds specificity
+- 1-2 sentences max
+
+**Primary CTA**
+- Action-oriented button text
+- Communicate what they get: "Start Free Trial" > "Sign Up"
+
+### Core Sections
+
+| Section | Purpose |
+|---------|---------|
+| Social Proof | Build credibility (logos, stats, testimonials) |
+| Problem/Pain | Show you understand their situation |
+| Solution/Benefits | Connect to outcomes (3-5 key benefits) |
+| How It Works | Reduce perceived complexity (3-4 steps) |
+| Objection Handling | FAQ, comparisons, guarantees |
+| Final CTA | Recap value, repeat CTA, risk reversal |
+
+**For detailed section types and page templates**: See [references/copy-frameworks.md](references/copy-frameworks.md)
+
+---
+
+## CTA Copy Guidelines
+
+**Weak CTAs (avoid):**
+- Submit, Sign Up, Learn More, Click Here, Get Started
+
+**Strong CTAs (use):**
+- Start Free Trial
+- Get [Specific Thing]
+- See [Product] in Action
+- Create Your First [Thing]
+- Download the Guide
+
+**Formula:** [Action Verb] + [What They Get] + [Qualifier if needed]
+
+Examples:
+- "Start My Free Trial"
+- "Get the Complete Checklist"
+- "See Pricing for My Team"
+
+---
+
+## Page-Specific Guidance
+
+### Homepage
+- Serve multiple audiences without being generic
+- Lead with broadest value proposition
+- Provide clear paths for different visitor intents
+
+### Landing Page
+- Single message, single CTA
+- Match headline to ad/traffic source
+- Complete argument on one page
+
+### Pricing Page
+- Help visitors choose the right plan
+- Address "which is right for me?" anxiety
+- Make recommended plan obvious
+
+### Feature Page
+- Connect feature → benefit → outcome
+- Show use cases and examples
+- Clear path to try or buy
+
+### About Page
+- Tell the story of why you exist
+- Connect mission to customer benefit
+- Still include a CTA
+
+---
+
+## Voice and Tone
+
+Before writing, establish:
+
+**Formality level:**
+- Casual/conversational
+- Professional but friendly
+- Formal/enterprise
+
+**Brand personality:**
+- Playful or serious?
+- Bold or understated?
+- Technical or accessible?
+
+Maintain consistency, but adjust intensity:
+- Headlines can be bolder
+- Body copy should be clearer
+- CTAs should be action-oriented
+
+---
+
+## Output Format
+
+When writing copy, provide:
+
+### Page Copy
+Organized by section:
+- Headline, Subheadline, CTA
+- Section headers and body copy
+- Secondary CTAs
+
+### Annotations
+For key elements, explain:
+- Why you made this choice
+- What principle it applies
+
+### Alternatives
+For headlines and CTAs, provide 2-3 options:
+- Option A: [copy] — [rationale]
+- Option B: [copy] — [rationale]
+
+### Meta Content (if relevant)
+- Page title (for SEO)
+- Meta description
+
+---
+
+## Related Skills
+
+- **copy-editing**: For polishing existing copy (use after your draft)
+- **page-cro**: If page structure/strategy needs work, not just copy
+- **email-sequence**: For email copywriting
+- **popup-cro**: For popup and modal copy
+- **ab-test-setup**: To test copy variations
diff --git a/skills/copywriting/copywriting b/skills/copywriting/copywriting
new file mode 120000
index 0000000..93fdbc1
--- /dev/null
+++ b/skills/copywriting/copywriting
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/copywriting/
\ No newline at end of file
diff --git a/skills/copywriting/references/copy-frameworks.md b/skills/copywriting/references/copy-frameworks.md
new file mode 100644
index 0000000..9957b96
--- /dev/null
+++ b/skills/copywriting/references/copy-frameworks.md
@@ -0,0 +1,338 @@
+# Copy Frameworks Reference
+
+Headline formulas, page section types, and structural templates.
+
+## Headline Formulas
+
+### Outcome-Focused
+
+**{Achieve desirable outcome} without {pain point}**
+> Understand how users are really experiencing your site without drowning in numbers
+
+**{Achieve desirable outcome} by {how product makes it possible}**
+> Generate more leads by seeing which companies visit your site
+
+**Turn {input} into {outcome}**
+> Turn your hard-earned sales into repeat customers
+
+**[Achieve outcome] in [timeframe]**
+> Get your tax refund in 10 days
+
+---
+
+### Problem-Focused
+
+**Never {unpleasant event} again**
+> Never miss a sales opportunity again
+
+**{Question highlighting the main pain point}**
+> Hate returning stuff to Amazon?
+
+**Stop [pain]. Start [pleasure].**
+> Stop chasing invoices. Start getting paid on time.
+
+---
+
+### Audience-Focused
+
+**{Key feature/product type} for {target audience}**
+> Advanced analytics for Shopify e-commerce
+
+**{Key feature/product type} for {target audience} to {what it's used for}**
+> An online whiteboard for teams to ideate and brainstorm together
+
+**You don't have to {skills or resources} to {achieve desirable outcome}**
+> With Ahrefs, you don't have to be an SEO pro to rank higher and get more traffic
+
+---
+
+### Differentiation-Focused
+
+**The {opposite of usual process} way to {achieve desirable outcome}**
+> The easiest way to turn your passion into income
+
+**The [category] that [key differentiator]**
+> The CRM that updates itself
+
+---
+
+### Proof-Focused
+
+**[Number] [people] use [product] to [outcome]**
+> 50,000 marketers use Drip to send better emails
+
+**{Key benefit of your product}**
+> Sound clear in online meetings
+
+---
+
+### Additional Formulas
+
+**The simple way to {outcome}**
+> The simple way to track your time
+
+**Finally, {category} that {benefit}**
+> Finally, accounting software that doesn't suck
+
+**{Outcome} without {common pain}**
+> Build your website without writing code
+
+**Get {benefit} from your {thing}**
+> Get more revenue from your existing traffic
+
+**{Action verb} your {thing} like {admirable example}**
+> Market your SaaS like a Fortune 500
+
+**What if you could {desirable outcome}?**
+> What if you could close deals 30% faster?
+
+**Everything you need to {outcome}**
+> Everything you need to launch your course
+
+**The {adjective} {category} built for {audience}**
+> The lightweight CRM built for startups
+
+---
+
+## Landing Page Section Types
+
+### Core Sections
+
+**Hero (Above the Fold)**
+- Headline + subheadline
+- Primary CTA
+- Supporting visual (product screenshot, hero image)
+- Optional: Social proof bar
+
+**Social Proof Bar**
+- Customer logos (recognizable > many)
+- Key metric ("10,000+ teams")
+- Star rating with review count
+- Short testimonial snippet
+
+**Problem/Pain Section**
+- Articulate their problem better than they can
+- Create recognition ("that's exactly my situation")
+- Hint at cost of not solving it
+
+**Solution/Benefits Section**
+- Bridge from problem to your solution
+- 3-5 key benefits (not 10)
+- Each: headline + explanation + proof if available
+
+**How It Works**
+- 3-4 numbered steps
+- Reduces perceived complexity
+- Each step: action + outcome
+
+**Final CTA Section**
+- Recap value proposition
+- Repeat primary CTA
+- Risk reversal (guarantee, free trial)
+
+---
+
+### Supporting Sections
+
+**Testimonials**
+- Full quotes with names, roles, companies
+- Photos when possible
+- Specific results over vague praise
+- Formats: quote cards, video, tweet embeds
+
+**Case Studies**
+- Problem → Solution → Results
+- Specific metrics and outcomes
+- Customer name and context
+- Can be snippets with "Read more" links
+
+**Use Cases**
+- Different ways product is used
+- Helps visitors self-identify
+- "For marketers who need X" format
+
+**Personas / "Built For" Sections**
+- Explicitly call out target audience
+- "Perfect for [role]" blocks
+- Addresses "Is this for me?" question
+
+**FAQ Section**
+- Address common objections
+- Good for SEO
+- Reduces support burden
+- 5-10 most common questions
+
+**Comparison Section**
+- vs. competitors (name them or don't)
+- vs. status quo (spreadsheets, manual processes)
+- Tables or side-by-side format
+
+**Integrations / Partners**
+- Logos of tools you connect with
+- "Works with your stack" messaging
+- Builds credibility
+
+**Founder Story / Manifesto**
+- Why you built this
+- What you believe
+- Emotional connection
+- Differentiates from faceless competitors
+
+**Demo / Product Tour**
+- Interactive demos
+- Video walkthroughs
+- GIF previews
+- Shows product in action
+
+**Pricing Preview**
+- Teaser even on non-pricing pages
+- Starting price or "from $X/mo"
+- Moves decision-makers forward
+
+**Guarantee / Risk Reversal**
+- Money-back guarantee
+- Free trial terms
+- "Cancel anytime"
+- Reduces friction
+
+**Stats Section**
+- Key metrics that build credibility
+- "10,000+ customers"
+- "4.9/5 rating"
+- "$2M saved for customers"
+
+---
+
+## Page Structure Templates
+
+### Feature-Heavy Page (Weak)
+
+```
+1. Hero
+2. Feature 1
+3. Feature 2
+4. Feature 3
+5. Feature 4
+6. CTA
+```
+
+This is a list, not a persuasive narrative.
+
+---
+
+### Varied, Engaging Page (Strong)
+
+```
+1. Hero with clear value prop
+2. Social proof bar (logos or stats)
+3. Problem/pain section
+4. How it works (3 steps)
+5. Key benefits (2-3, not 10)
+6. Testimonial
+7. Use cases or personas
+8. Comparison to alternatives
+9. Case study snippet
+10. FAQ
+11. Final CTA with guarantee
+```
+
+This tells a story and addresses objections.
+
+---
+
+### Compact Landing Page
+
+```
+1. Hero (headline, subhead, CTA, image)
+2. Social proof bar
+3. 3 key benefits with icons
+4. Testimonial
+5. How it works (3 steps)
+6. Final CTA with guarantee
+```
+
+Good for ad landing pages where brevity matters.
+
+---
+
+### Enterprise/B2B Landing Page
+
+```
+1. Hero (outcome-focused headline)
+2. Logo bar (recognizable companies)
+3. Problem section (business pain)
+4. Solution overview
+5. Use cases by role/department
+6. Security/compliance section
+7. Integration logos
+8. Case study with metrics
+9. ROI/value section
+10. Contact/demo CTA
+```
+
+Addresses enterprise buyer concerns.
+
+---
+
+### Product Launch Page
+
+```
+1. Hero with launch announcement
+2. Video demo or walkthrough
+3. Feature highlights (3-5)
+4. Before/after comparison
+5. Early testimonials
+6. Launch pricing or early access offer
+7. CTA with urgency
+```
+
+Good for ProductHunt, launches, or announcements.
+
+---
+
+## Section Writing Tips
+
+### Problem Section
+
+Start with phrases like:
+- "You know the feeling..."
+- "If you're like most [role]..."
+- "Every day, [audience] struggles with..."
+- "We've all been there..."
+
+Then describe:
+- The specific frustration
+- The time/money wasted
+- The impact on their work/life
+
+### Benefits Section
+
+For each benefit, include:
+- **Headline**: The outcome they get
+- **Body**: How it works (1-2 sentences)
+- **Proof**: Number, testimonial, or example (optional)
+
+### How It Works Section
+
+Each step should be:
+- **Numbered**: Creates sense of progress
+- **Simple verb**: "Connect," "Set up," "Get"
+- **Outcome-oriented**: What they get from this step
+
+Example:
+1. Connect your tools (takes 2 minutes)
+2. Set your preferences
+3. Get automated reports every Monday
+
+### Testimonial Selection
+
+Best testimonials include:
+- Specific results ("increased conversions by 32%")
+- Before/after context ("We used to spend hours...")
+- Role + company for credibility
+- Something quotable and specific
+
+Avoid testimonials that just say:
+- "Great product!"
+- "Love it!"
+- "Easy to use!"
diff --git a/skills/copywriting/references/natural-transitions.md b/skills/copywriting/references/natural-transitions.md
new file mode 100644
index 0000000..929116f
--- /dev/null
+++ b/skills/copywriting/references/natural-transitions.md
@@ -0,0 +1,252 @@
+# Natural Transitions
+
+Transitional phrases to guide readers through your content. Good signposting improves readability and engagement, and helps search engines understand content structure.
+
+Adapted from: University of Manchester Academic Phrasebank (2023), Plain English Campaign, web content best practices
+
+---
+
+## Previewing Content Structure
+
+Use to orient readers and set expectations:
+
+- Here's what we'll cover...
+- This guide walks you through...
+- Below, you'll find...
+- We'll start with X, then move to Y...
+- First, let's look at...
+- Let's break this down step by step.
+- The sections below explain...
+
+---
+
+## Introducing a New Topic
+
+- When it comes to X,...
+- Regarding X,...
+- Speaking of X,...
+- Now let's talk about X.
+- Another key factor is...
+- X is worth exploring because...
+
+---
+
+## Referring Back
+
+Use to connect ideas and reinforce key points:
+
+- As mentioned earlier,...
+- As we covered above,...
+- Remember when we discussed X?
+- Building on that point,...
+- Going back to X,...
+- Earlier, we explained that...
+
+---
+
+## Moving Between Sections
+
+- Now let's look at...
+- Next up:...
+- Moving on to...
+- With that covered, let's turn to...
+- Now that you understand X, here's Y.
+- That brings us to...
+
+---
+
+## Indicating Addition
+
+- Also,...
+- Plus,...
+- On top of that,...
+- What's more,...
+- Another benefit is...
+- Beyond that,...
+- In addition,...
+- There's also...
+
+**Note:** Use "moreover" and "furthermore" sparingly. They can sound AI-generated when overused.
+
+---
+
+## Indicating Contrast
+
+- However,...
+- But,...
+- That said,...
+- On the flip side,...
+- In contrast,...
+- Unlike X, Y...
+- While X is true, Y...
+- Despite this,...
+
+---
+
+## Indicating Similarity
+
+- Similarly,...
+- Likewise,...
+- In the same way,...
+- Just like X, Y also...
+- This mirrors...
+- The same applies to...
+
+---
+
+## Indicating Cause and Effect
+
+- So,...
+- This means...
+- As a result,...
+- That's why...
+- Because of this,...
+- This leads to...
+- The outcome?...
+- Here's what happens:...
+
+---
+
+## Giving Examples
+
+- For example,...
+- For instance,...
+- Here's an example:...
+- Take X, for instance.
+- Consider this:...
+- A good example is...
+- To illustrate,...
+- Like when...
+- Say you want to...
+
+---
+
+## Emphasising Key Points
+
+- Here's the key takeaway:...
+- The important thing is...
+- What matters most is...
+- Don't miss this:...
+- Pay attention to...
+- This is critical:...
+- The bottom line?...
+
+---
+
+## Providing Evidence
+
+Use when citing sources, data, or expert opinions:
+
+### Neutral attribution
+- According to [Source],...
+- [Source] reports that...
+- Research shows that...
+- Data from [Source] indicates...
+- A study by [Source] found...
+
+### Expert quotes
+- As [Expert] puts it,...
+- [Expert] explains,...
+- In the words of [Expert],...
+- [Expert] notes that...
+
+### Supporting claims
+- This is backed by...
+- Evidence suggests...
+- The numbers confirm...
+- This aligns with findings from...
+
+---
+
+## Summarising Sections
+
+- To recap,...
+- Here's the short version:...
+- In short,...
+- The takeaway?...
+- So what does this mean?...
+- Let's pull this together:...
+- Quick summary:...
+
+---
+
+## Concluding Content
+
+- Wrapping up,...
+- The bottom line is...
+- Here's what to do next:...
+- To sum up,...
+- Final thoughts:...
+- Ready to get started?...
+- Now it's your turn.
+
+**Note:** Avoid "In conclusion" at the start of a paragraph. It's overused and signals AI writing.
+
+---
+
+## Question-Based Transitions
+
+Useful for conversational tone and featured snippet optimization:
+
+- So what does this mean for you?
+- But why does this matter?
+- How do you actually do this?
+- What's the catch?
+- Sound complicated? It's not.
+- Wondering where to start?
+- Still not sure? Here's the breakdown.
+
+---
+
+## List Introductions
+
+For numbered lists and step-by-step content:
+
+- Here's how to do it:
+- Follow these steps:
+- The process is straightforward:
+- Here's what you need to know:
+- Key things to consider:
+- The main factors are:
+
+---
+
+## Hedging Language
+
+For claims that need qualification or aren't absolute:
+
+- may, might, could
+- tends to, generally
+- often, usually, typically
+- in most cases
+- it appears that
+- evidence suggests
+- this can help
+- many experts believe
+
+---
+
+## Best Practice Guidelines
+
+1. **Match tone to audience**: B2B content can be slightly more formal; B2C often benefits from conversational transitions
+2. **Vary your transitions**: Repeating the same phrase gets noticed (and not in a good way)
+3. **Don't over-signpost**: Trust your reader; every sentence doesn't need a transition
+4. **Use for scannability**: Transitions at paragraph starts help skimmers navigate
+5. **Keep it natural**: Read aloud; if it sounds forced, simplify
+6. **Front-load key info**: Put the important word or phrase early in the transition
+
+---
+
+## Transitions to Avoid (AI Tells)
+
+These phrases are overused in AI-generated content:
+
+- "That being said,..."
+- "It's worth noting that..."
+- "At its core,..."
+- "In today's digital landscape,..."
+- "When it comes to the realm of..."
+- "This begs the question..."
+- "Let's delve into..."
+
+See the seo-audit skill's `references/ai-writing-detection.md` for a complete list of AI writing tells.
diff --git a/skills/create-agent/create-agent b/skills/create-agent/create-agent
new file mode 120000
index 0000000..5974e79
--- /dev/null
+++ b/skills/create-agent/create-agent
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/create-agent/
\ No newline at end of file
diff --git a/skills/create-auth-skill/SKILL.md b/skills/create-auth-skill/SKILL.md
new file mode 100644
index 0000000..b18c85f
--- /dev/null
+++ b/skills/create-auth-skill/SKILL.md
@@ -0,0 +1,214 @@
+---
+name: create-auth-skill
+description: Skill for creating auth layers in TypeScript/JavaScript apps using Better Auth.
+---
+
+# Create Auth Skill
+
+Guide for adding authentication to TypeScript/JavaScript applications using Better Auth.
+
+**For code examples and syntax, see [better-auth.com/docs](https://better-auth.com/docs).**
+
+---
+
+## Decision Tree
+
+```
+Is this a new/empty project?
+├─ YES → New project setup
+│ 1. Identify framework
+│ 2. Choose database
+│ 3. Install better-auth
+│ 4. Create auth.ts + auth-client.ts
+│ 5. Set up route handler
+│ 6. Run CLI migrate/generate
+│ 7. Add features via plugins
+│
+└─ NO → Does project have existing auth?
+ ├─ YES → Migration/enhancement
+ │ • Audit current auth for gaps
+ │ • Plan incremental migration
+ │ • See migration guides in docs
+ │
+ └─ NO → Add auth to existing project
+ 1. Analyze project structure
+ 2. Install better-auth
+ 3. Create auth config
+ 4. Add route handler
+ 5. Run schema migrations
+ 6. Integrate into existing pages
+```
+
+---
+
+## Installation
+
+**Core:** `npm install better-auth`
+
+**Scoped packages (as needed):**
+| Package | Use case |
+|---------|----------|
+| `@better-auth/passkey` | WebAuthn/Passkey auth |
+| `@better-auth/sso` | SAML/OIDC enterprise SSO |
+| `@better-auth/stripe` | Stripe payments |
+| `@better-auth/scim` | SCIM user provisioning |
+| `@better-auth/expo` | React Native/Expo |
+
+---
+
+## Environment Variables
+
+```env
+BETTER_AUTH_SECRET=<32+ chars, generate with: openssl rand -base64 32>
+BETTER_AUTH_URL=http://localhost:3000
+DATABASE_URL=
+```
+
+Add OAuth secrets as needed: `GITHUB_CLIENT_ID`, `GITHUB_CLIENT_SECRET`, `GOOGLE_CLIENT_ID`, etc.
+
+---
+
+## Server Config (auth.ts)
+
+**Location:** `lib/auth.ts` or `src/lib/auth.ts`
+
+**Minimal config needs:**
+- `database` - Connection or adapter
+- `emailAndPassword: { enabled: true }` - For email/password auth
+
+**Standard config adds:**
+- `socialProviders` - OAuth providers (google, github, etc.)
+- `emailVerification.sendVerificationEmail` - Email verification handler
+- `emailAndPassword.sendResetPassword` - Password reset handler
+
+**Full config adds:**
+- `plugins` - Array of feature plugins
+- `session` - Expiry, cookie cache settings
+- `account.accountLinking` - Multi-provider linking
+- `rateLimit` - Rate limiting config
+
+**Export types:** `export type Session = typeof auth.$Infer.Session`
+
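+A minimal sketch of what `lib/auth.ts` might look like — the Drizzle adapter and GitHub provider here are illustrative choices, not requirements; check the Better Auth docs for your stack:
+
+```ts
+// lib/auth.ts — server config (sketch)
+import { betterAuth } from "better-auth";
+import { drizzleAdapter } from "better-auth/adapters/drizzle";
+import { db } from "./db"; // your Drizzle instance (assumed to exist)
+
+export const auth = betterAuth({
+  database: drizzleAdapter(db, { provider: "pg" }),
+  emailAndPassword: { enabled: true },
+  socialProviders: {
+    github: {
+      clientId: process.env.GITHUB_CLIENT_ID!,
+      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
+    },
+  },
+});
+
+export type Session = typeof auth.$Infer.Session;
+```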
+---
+
+## Client Config (auth-client.ts)
+
+**Import by framework:**
+| Framework | Import |
+|-----------|--------|
+| React/Next.js | `better-auth/react` |
+| Vue | `better-auth/vue` |
+| Svelte | `better-auth/svelte` |
+| Solid | `better-auth/solid` |
+| Vanilla JS | `better-auth/client` |
+
+**Client plugins** go in `createAuthClient({ plugins: [...] })`.
+
+**Common exports:** `signIn`, `signUp`, `signOut`, `useSession`, `getSession`
+
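+A minimal sketch of `lib/auth-client.ts` for React (the `baseURL` value is an assumption — omit it if the client and server share a domain):
+
+```ts
+// lib/auth-client.ts — client config (sketch)
+import { createAuthClient } from "better-auth/react";
+
+export const authClient = createAuthClient({
+  baseURL: process.env.NEXT_PUBLIC_APP_URL,
+});
+
+export const { signIn, signUp, signOut, useSession } = authClient;
+```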
+---
+
+## Route Handler Setup
+
+| Framework | File | Handler |
+|-----------|------|---------|
+| Next.js App Router | `app/api/auth/[...all]/route.ts` | `toNextJsHandler(auth)` → export `{ GET, POST }` |
+| Next.js Pages | `pages/api/auth/[...all].ts` | `toNextJsHandler(auth)` → default export |
+| Express | Any file | `app.all("/api/auth/*", toNodeHandler(auth))` |
+| SvelteKit | `src/hooks.server.ts` | `svelteKitHandler(auth)` |
+| SolidStart | Route file | `solidStartHandler(auth)` |
+| Hono | Route file | `auth.handler(c.req.raw)` |
+
+**Next.js Server Components:** Add `nextCookies()` plugin to auth config.
+
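+For Next.js App Router, the catch-all route is a two-liner (sketch; assumes `auth` is exported from `lib/auth.ts`):
+
+```ts
+// app/api/auth/[...all]/route.ts
+import { auth } from "@/lib/auth";
+import { toNextJsHandler } from "better-auth/next-js";
+
+export const { GET, POST } = toNextJsHandler(auth);
+```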
+---
+
+## Database Migrations
+
+| Adapter | Command |
+|---------|---------|
+| Built-in Kysely | `npx @better-auth/cli@latest migrate` (applies directly) |
+| Prisma | `npx @better-auth/cli@latest generate --output prisma/schema.prisma` then `npx prisma migrate dev` |
+| Drizzle | `npx @better-auth/cli@latest generate --output src/db/auth-schema.ts` then `npx drizzle-kit push` |
+
+**Re-run after adding plugins: most plugins add new tables or columns.**
+
+---
+
+## Database Adapters
+
+| Database | Setup |
+|----------|-------|
+| SQLite | Pass `better-sqlite3` or `bun:sqlite` instance directly |
+| PostgreSQL | Pass `pg.Pool` instance directly |
+| MySQL | Pass `mysql2` pool directly |
+| Prisma | `prismaAdapter(prisma, { provider: "postgresql" })` from `better-auth/adapters/prisma` |
+| Drizzle | `drizzleAdapter(db, { provider: "pg" })` from `better-auth/adapters/drizzle` |
+| MongoDB | `mongodbAdapter(db)` from `better-auth/adapters/mongodb` |
+
+---
+
+## Common Plugins
+
+| Plugin | Server Import | Client Import | Purpose |
+|--------|---------------|---------------|---------|
+| `twoFactor` | `better-auth/plugins` | `twoFactorClient` | 2FA with TOTP/OTP |
+| `organization` | `better-auth/plugins` | `organizationClient` | Teams/orgs |
+| `admin` | `better-auth/plugins` | `adminClient` | User management |
+| `bearer` | `better-auth/plugins` | - | API token auth |
+| `openAPI` | `better-auth/plugins` | - | API docs |
+| `passkey` | `@better-auth/passkey` | `passkeyClient` | WebAuthn |
+| `sso` | `@better-auth/sso` | - | Enterprise SSO |
+
+**Plugin pattern:** Server plugin + client plugin + run migrations.
+
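+As an example, adding 2FA follows the server-plugin + client-plugin pattern (sketch — file locations are assumptions):
+
+```ts
+// Server (lib/auth.ts): register the plugin, then re-run the CLI migrate/generate
+import { twoFactor } from "better-auth/plugins";
+// betterAuth({ ..., plugins: [twoFactor()] })
+
+// Client (lib/auth-client.ts): register the matching client plugin
+import { twoFactorClient } from "better-auth/client/plugins";
+// createAuthClient({ plugins: [twoFactorClient()] })
+```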
+---
+
+## Auth UI Implementation
+
+**Sign in flow:**
+1. `signIn.email({ email, password })` or `signIn.social({ provider, callbackURL })`
+2. Handle `error` in response
+3. Redirect on success
+
+**Session check (client):** `useSession()` hook returns `{ data: session, isPending }`
+
+**Session check (server):** `auth.api.getSession({ headers: await headers() })`
+
+**Protected routes:** Check session, redirect to `/sign-in` if null.
+
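+A sketch of a protected Next.js server component, assuming `auth` from `lib/auth.ts` and a `/sign-in` page:
+
+```tsx
+// app/dashboard/page.tsx — protected route (sketch)
+import { auth } from "@/lib/auth";
+import { headers } from "next/headers";
+import { redirect } from "next/navigation";
+
+export default async function Dashboard() {
+  const session = await auth.api.getSession({ headers: await headers() });
+  if (!session) redirect("/sign-in");
+  return <p>Welcome, {session.user.name}</p>;
+}
+```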
+---
+
+## Security Checklist
+
+- [ ] `BETTER_AUTH_SECRET` set (32+ chars)
+- [ ] `advanced.useSecureCookies: true` in production
+- [ ] `trustedOrigins` configured
+- [ ] Rate limits enabled
+- [ ] Email verification enabled
+- [ ] Password reset implemented
+- [ ] 2FA for sensitive apps
+- [ ] CSRF protection NOT disabled
+- [ ] `account.accountLinking` reviewed
+
+---
+
+## Troubleshooting
+
+| Issue | Fix |
+|-------|-----|
+| "Secret not set" | Add `BETTER_AUTH_SECRET` env var |
+| "Invalid Origin" | Add domain to `trustedOrigins` |
+| Cookies not setting | Check `baseURL` matches domain; enable secure cookies in prod |
+| OAuth callback errors | Verify redirect URIs in provider dashboard |
+| Type errors after adding plugin | Re-run CLI generate/migrate |
+
+---
+
+## Resources
+
+- [Docs](https://better-auth.com/docs)
+- [Examples](https://github.com/better-auth/examples)
+- [Plugins](https://better-auth.com/docs/concepts/plugins)
+- [CLI](https://better-auth.com/docs/concepts/cli)
+- [Migration Guides](https://better-auth.com/docs/guides)
\ No newline at end of file
diff --git a/skills/create-auth-skill/create-auth-skill b/skills/create-auth-skill/create-auth-skill
new file mode 120000
index 0000000..ed7b25e
--- /dev/null
+++ b/skills/create-auth-skill/create-auth-skill
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/create-auth-skill/
\ No newline at end of file
diff --git a/skills/dispatching-parallel-agents/SKILL.md b/skills/dispatching-parallel-agents/SKILL.md
new file mode 100644
index 0000000..33b1485
--- /dev/null
+++ b/skills/dispatching-parallel-agents/SKILL.md
@@ -0,0 +1,180 @@
+---
+name: dispatching-parallel-agents
+description: Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
+---
+
+# Dispatching Parallel Agents
+
+## Overview
+
+When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.
+
+**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.
+
+## When to Use
+
+```dot
+digraph when_to_use {
+ "Multiple failures?" [shape=diamond];
+ "Are they independent?" [shape=diamond];
+ "Single agent investigates all" [shape=box];
+ "One agent per problem domain" [shape=box];
+ "Can they work in parallel?" [shape=diamond];
+ "Sequential agents" [shape=box];
+ "Parallel dispatch" [shape=box];
+
+ "Multiple failures?" -> "Are they independent?" [label="yes"];
+ "Are they independent?" -> "Single agent investigates all" [label="no - related"];
+ "Are they independent?" -> "Can they work in parallel?" [label="yes"];
+ "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
+ "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
+}
+```
+
+**Use when:**
+- 3+ test files failing with different root causes
+- Multiple subsystems broken independently
+- Each problem can be understood without context from others
+- No shared state between investigations
+
+**Don't use when:**
+- Failures are related (fix one might fix others)
+- Need to understand full system state
+- Agents would interfere with each other
+
+## The Pattern
+
+### 1. Identify Independent Domains
+
+Group failures by what's broken:
+- File A tests: Tool approval flow
+- File B tests: Batch completion behavior
+- File C tests: Abort functionality
+
+Each domain is independent - fixing tool approval doesn't affect abort tests.
+
+### 2. Create Focused Agent Tasks
+
+Each agent gets:
+- **Specific scope:** One test file or subsystem
+- **Clear goal:** Make these tests pass
+- **Constraints:** Don't change other code
+- **Expected output:** Summary of what you found and fixed
+
+### 3. Dispatch in Parallel
+
+```typescript
+// In Claude Code / AI environment
+Task("Fix agent-tool-abort.test.ts failures")
+Task("Fix batch-completion-behavior.test.ts failures")
+Task("Fix tool-approval-race-conditions.test.ts failures")
+// All three run concurrently
+```
+
+### 4. Review and Integrate
+
+When agents return:
+- Read each summary
+- Verify fixes don't conflict
+- Run full test suite
+- Integrate all changes
+
+## Agent Prompt Structure
+
+Good agent prompts are:
+1. **Focused** - One clear problem domain
+2. **Self-contained** - All context needed to understand the problem
+3. **Specific about output** - What should the agent return?
+
+```markdown
+Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:
+
+1. "should abort tool with partial output capture" - expects 'interrupted at' in message
+2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
+3. "should properly track pendingToolCount" - expects 3 results but gets 0
+
+These are timing/race condition issues. Your task:
+
+1. Read the test file and understand what each test verifies
+2. Identify root cause - timing issues or actual bugs?
+3. Fix by:
+ - Replacing arbitrary timeouts with event-based waiting
+ - Fixing bugs in abort implementation if found
+ - Adjusting test expectations if testing changed behavior
+
+Do NOT just increase timeouts - find the real issue.
+
+Return: Summary of what you found and what you fixed.
+```
+
+## Common Mistakes
+
+**❌ Too broad:** "Fix all the tests" - agent gets lost
+**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope
+
+**❌ No context:** "Fix the race condition" - agent doesn't know where
+**✅ Context:** Paste the error messages and test names
+
+**❌ No constraints:** Agent might refactor everything
+**✅ Constraints:** "Do NOT change production code" or "Fix tests only"
+
+**❌ Vague output:** "Fix it" - you don't know what changed
+**✅ Specific:** "Return summary of root cause and changes"
+
+## When NOT to Use
+
+**Related failures:** Fixing one might fix others - investigate together first
+**Need full context:** Understanding requires seeing entire system
+**Exploratory debugging:** You don't know what's broken yet
+**Shared state:** Agents would interfere (editing same files, using same resources)
+
+## Real Example from Session
+
+**Scenario:** 6 test failures across 3 files after major refactoring
+
+**Failures:**
+- agent-tool-abort.test.ts: 3 failures (timing issues)
+- batch-completion-behavior.test.ts: 2 failures (tools not executing)
+- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)
+
+**Decision:** Independent domains - abort logic separate from batch completion separate from race conditions
+
+**Dispatch:**
+```
+Agent 1 → Fix agent-tool-abort.test.ts
+Agent 2 → Fix batch-completion-behavior.test.ts
+Agent 3 → Fix tool-approval-race-conditions.test.ts
+```
+
+**Results:**
+- Agent 1: Replaced timeouts with event-based waiting
+- Agent 2: Fixed event structure bug (threadId in wrong place)
+- Agent 3: Added wait for async tool execution to complete
+
+**Integration:** All fixes independent, no conflicts, full suite green
+
+**Time saved:** three investigations ran concurrently instead of back to back
+
+## Key Benefits
+
+1. **Parallelization** - Multiple investigations happen simultaneously
+2. **Focus** - Each agent has narrow scope, less context to track
+3. **Independence** - Agents don't interfere with each other
+4. **Speed** - 3 problems solved in the time of one
+
+## Verification
+
+After agents return:
+1. **Review each summary** - Understand what changed
+2. **Check for conflicts** - Did agents edit same code?
+3. **Run full suite** - Verify all fixes work together
+4. **Spot check** - Agents can make systematic errors
+
+## Real-World Impact
+
+From debugging session (2025-10-03):
+- 6 failures across 3 files
+- 3 agents dispatched in parallel
+- All investigations completed concurrently
+- All fixes integrated successfully
+- Zero conflicts between agent changes
diff --git a/skills/dispatching-parallel-agents/dispatching-parallel-agents b/skills/dispatching-parallel-agents/dispatching-parallel-agents
new file mode 120000
index 0000000..220a98f
--- /dev/null
+++ b/skills/dispatching-parallel-agents/dispatching-parallel-agents
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/dispatching-parallel-agents/
\ No newline at end of file
diff --git a/skills/doc-coauthoring/SKILL.md b/skills/doc-coauthoring/SKILL.md
new file mode 100644
index 0000000..a5a6983
--- /dev/null
+++ b/skills/doc-coauthoring/SKILL.md
@@ -0,0 +1,375 @@
+---
+name: doc-coauthoring
+description: Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision docs, or similar structured content. This workflow helps users efficiently transfer context, refine content through iteration, and verify the doc works for readers. Trigger when user mentions writing docs, creating proposals, drafting specs, or similar documentation tasks.
+---
+
+# Doc Co-Authoring Workflow
+
+This skill provides a structured workflow for guiding users through collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing.
+
+## When to Offer This Workflow
+
+**Trigger conditions:**
+- User mentions writing documentation: "write a doc", "draft a proposal", "create a spec", "write up"
+- User mentions specific doc types: "PRD", "design doc", "decision doc", "RFC"
+- User seems to be starting a substantial writing task
+
+**Initial offer:**
+Offer the user a structured workflow for co-authoring the document. Explain the three stages:
+
+1. **Context Gathering**: User provides all relevant context while Claude asks clarifying questions
+2. **Refinement & Structure**: Iteratively build each section through brainstorming and editing
+3. **Reader Testing**: Test the doc with a fresh Claude (no context) to catch blind spots before others read it
+
+Explain that this approach helps ensure the doc works well when others read it (including when they paste it into Claude). Ask if they want to try this workflow or prefer to work freeform.
+
+If user declines, work freeform. If user accepts, proceed to Stage 1.
+
+## Stage 1: Context Gathering
+
+**Goal:** Close the gap between what the user knows and what Claude knows, enabling smart guidance later.
+
+### Initial Questions
+
+Start by asking the user for meta-context about the document:
+
+1. What type of document is this? (e.g., technical spec, decision doc, proposal)
+2. Who's the primary audience?
+3. What's the desired impact when someone reads this?
+4. Is there a template or specific format to follow?
+5. Any other constraints or context to know?
+
+Inform them they can answer in shorthand or dump information however works best for them.
+
+**If user provides a template or mentions a doc type:**
+- Ask if they have a template document to share
+- If they provide a link to a shared document, use the appropriate integration to fetch it
+- If they provide a file, read it
+
+**If user mentions editing an existing shared document:**
+- Use the appropriate integration to read the current state
+- Check for images without alt-text
+- If images exist without alt-text, explain that when others use Claude to understand the doc, Claude won't be able to see them. Ask if they want alt-text generated. If so, request they paste each image into chat for descriptive alt-text generation.
+
+### Info Dumping
+
+Once initial questions are answered, encourage the user to dump all the context they have. Request information such as:
+- Background on the project/problem
+- Related team discussions or shared documents
+- Why alternative solutions aren't being used
+- Organizational context (team dynamics, past incidents, politics)
+- Timeline pressures or constraints
+- Technical architecture or dependencies
+- Stakeholder concerns
+
+Advise them not to worry about organizing it - just get it all out. Offer multiple ways to provide context:
+- Info dump stream-of-consciousness
+- Point to team channels or threads to read
+- Link to shared documents
+
+**If integrations are available** (e.g., Slack, Teams, Google Drive, SharePoint, or other MCP servers), mention that these can be used to pull in context directly.
+
+**If no integrations are detected and in Claude.ai or Claude app:** Suggest they can enable connectors in their Claude settings to allow pulling context from messaging apps and document storage directly.
+
+Inform them clarifying questions will be asked once they've done their initial dump.
+
+**During context gathering:**
+
+- If user mentions team channels or shared documents:
+ - If integrations available: Inform them the content will be read now, then use the appropriate integration
+ - If integrations not available: Explain lack of access. Suggest they enable connectors in Claude settings, or paste the relevant content directly.
+
+- If user mentions entities/projects that are unknown:
+ - Ask if connected tools should be searched to learn more
+ - Wait for user confirmation before searching
+
+- As user provides context, track what's being learned and what's still unclear
+
+**Asking clarifying questions:**
+
+When user signals they've done their initial dump (or after substantial context provided), ask clarifying questions to ensure understanding:
+
+Generate 5-10 numbered questions based on gaps in the context.
+
+Inform them they can use shorthand to answer (e.g., "1: yes, 2: see #channel, 3: no because backwards compat"), link to more docs, point to channels to read, or just keep info-dumping. Whatever's most efficient for them.
+
+**Exit condition:**
+Sufficient context has been gathered when the clarifying questions demonstrate understanding - when edge cases and trade-offs can be probed without the basics needing explanation.
+
+**Transition:**
+Ask if there's any more context they want to provide at this stage, or if it's time to move on to drafting the document.
+
+If user wants to add more, let them. When ready, proceed to Stage 2.
+
+## Stage 2: Refinement & Structure
+
+**Goal:** Build the document section by section through brainstorming, curation, and iterative refinement.
+
+**Instructions to user:**
+Explain that the document will be built section by section. For each section:
+1. Clarifying questions will be asked about what to include
+2. 5-20 options will be brainstormed
+3. User will indicate what to keep/remove/combine
+4. The section will be drafted
+5. It will be refined through surgical edits
+
+Start with whichever section has the most unknowns (usually the core decision/proposal), then work through the rest.
+
+**Section ordering:**
+
+If the document structure is clear:
+Ask which section they'd like to start with.
+
+Suggest starting with whichever section has the most unknowns. For decision docs, that's usually the core proposal. For specs, it's typically the technical approach. Summary sections are best left for last.
+
+If user doesn't know what sections they need:
+Based on the type of document and template, suggest 3-5 sections appropriate for the doc type.
+
+Ask if this structure works, or if they want to adjust it.
+
+**Once structure is agreed:**
+
+Create the initial document structure with placeholder text for all sections.
+
+**If access to artifacts is available:**
+Use `create_file` to create an artifact. This gives both Claude and the user a scaffold to work from.
+
+Inform them that the initial structure with placeholders for all sections will be created.
+
+Create artifact with all section headers and brief placeholder text like "[To be written]" or "[Content here]".
+
+Provide the scaffold link and indicate it's time to fill in each section.
+
+**If no access to artifacts:**
+Create a markdown file in the working directory. Name it appropriately (e.g., `decision-doc.md`, `technical-spec.md`).
+
+Inform them that the initial structure with placeholders for all sections will be created.
+
+Create file with all section headers and placeholder text.
+
+Confirm the filename has been created and indicate it's time to fill in each section.
+
+**For each section:**
+
+### Step 1: Clarifying Questions
+
+Announce work will begin on the [SECTION NAME] section. Ask 5-10 clarifying questions about what should be included:
+
+Generate 5-10 specific questions based on context and section purpose.
+
+Inform them they can answer in shorthand or just indicate what's important to cover.
+
+### Step 2: Brainstorming
+
+For the [SECTION NAME] section, brainstorm [5-20] things that might be included, depending on the section's complexity. Look for:
+- Context shared that might have been forgotten
+- Angles or considerations not yet mentioned
+
+Generate 5-20 numbered options based on section complexity. At the end, offer to brainstorm more if they want additional options.
+
+### Step 3: Curation
+
+Ask which points should be kept, removed, or combined. Request brief justifications to help learn priorities for the next sections.
+
+Provide examples:
+- "Keep 1,4,7,9"
+- "Remove 3 (duplicates 1)"
+- "Remove 6 (audience already knows this)"
+- "Combine 11 and 12"
+
+**If user gives freeform feedback** (e.g., "looks good" or "I like most of it but...") instead of numbered selections, extract their preferences and proceed. Parse what they want kept/removed/changed and apply it.
+
+### Step 4: Gap Check
+
+Based on what they've selected, ask if there's anything important missing for the [SECTION NAME] section.
+
+### Step 5: Drafting
+
+Use `str_replace` to replace the placeholder text for this section with the actual drafted content.
+
+Announce the [SECTION NAME] section will be drafted now based on what they've selected.
+
+**If using artifacts:**
+After drafting, provide a link to the artifact.
+
+Ask them to read through it and indicate what to change. Note that the more specific their feedback, the better the next sections will match their style.
+
+**If using a file (no artifacts):**
+After drafting, confirm completion.
+
+Inform them the [SECTION NAME] section has been drafted in [filename]. Ask them to read through it and indicate what to change. Note that the more specific their feedback, the better the next sections will match their style.
+
+**Key instruction for user (include when drafting the first section):**
+Provide a note: instead of editing the doc directly, ask them to indicate what to change. This makes their style preferences explicit for future sections. For example: "Remove the X bullet - already covered by Y" or "Make the third paragraph more concise".
+
+### Step 6: Iterative Refinement
+
+As user provides feedback:
+- Use `str_replace` to make edits (never reprint the whole doc)
+- **If using artifacts:** Provide link to artifact after each edit
+- **If using files:** Just confirm edits are complete
+- If user edits doc directly and asks to read it: mentally note the changes they made and keep them in mind for future sections (this shows their preferences)
+
+**Continue iterating** until user is satisfied with the section.
+
+### Quality Checking
+
+After 3 consecutive iterations with no substantial changes, ask if anything can be removed without losing important information.
+
+When section is done, confirm [SECTION NAME] is complete. Ask if ready to move to the next section.
+
+**Repeat for all sections.**
+
+### Near Completion
+
+As completion approaches (80%+ of sections done), announce intention to re-read the entire document and check for:
+- Flow and consistency across sections
+- Redundancy or contradictions
+- Anything that feels like "slop" or generic filler
+- Whether every sentence carries weight
+
+Read entire document and provide feedback.
+
+**When all sections are drafted and refined:**
+Announce all sections are drafted. Indicate intention to review the complete document one more time.
+
+Review for overall coherence, flow, completeness.
+
+Provide any final suggestions.
+
+Ask if ready to move to Reader Testing, or if they want to refine anything else.
+
+## Stage 3: Reader Testing
+
+**Goal:** Test the document with a fresh Claude (no context bleed) to verify it works for readers.
+
+**Instructions to user:**
+Explain that testing will now occur to see if the document actually works for readers. This catches blind spots - things that make sense to the authors but might confuse others.
+
+### Testing Approach
+
+**If access to sub-agents is available (e.g., in Claude Code):**
+
+Perform the testing directly without user involvement.
+
+### Step 1: Predict Reader Questions
+
+Announce intention to predict what questions readers might ask when trying to discover this document.
+
+Generate 5-10 questions that readers would realistically ask.
+
+### Step 2: Test with Sub-Agent
+
+Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).
+
+For each question, invoke a sub-agent with just the document content and the question.
+
+Summarize what Reader Claude got right/wrong for each question.
+
+### Step 3: Run Additional Checks
+
+Announce additional checks will be performed.
+
+Invoke sub-agent to check for ambiguity, false assumptions, contradictions.
+
+Summarize any issues found.
+
+### Step 4: Report and Fix
+
+If issues found:
+Report that Reader Claude struggled with specific issues.
+
+List the specific issues.
+
+Indicate intention to fix these gaps.
+
+Loop back to refinement for problematic sections.
+
+---
+
+**If no access to sub-agents (e.g., claude.ai web interface):**
+
+The user will need to do the testing manually.
+
+### Step 1: Predict Reader Questions
+
+Ask what questions people might ask when trying to discover this document. What would they type into Claude.ai?
+
+Generate 5-10 questions that readers would realistically ask.
+
+### Step 2: Setup Testing
+
+Provide testing instructions:
+1. Open a fresh Claude conversation: https://claude.ai
+2. Paste or share the document content (if using a shared doc platform with connectors enabled, provide the link)
+3. Ask Reader Claude the generated questions
+
+For each question, instruct Reader Claude to provide:
+- The answer
+- Whether anything was ambiguous or unclear
+- What knowledge/context the doc assumes is already known
+
+Check if Reader Claude gives correct answers or misinterprets anything.
+
+### Step 3: Additional Checks
+
+Also ask Reader Claude:
+- "What in this doc might be ambiguous or unclear to readers?"
+- "What knowledge or context does this doc assume readers already have?"
+- "Are there any internal contradictions or inconsistencies?"
+
+### Step 4: Iterate Based on Results
+
+Ask what Reader Claude got wrong or struggled with. Indicate intention to fix those gaps.
+
+Loop back to refinement for any problematic sections.
+
+---
+
+### Exit Condition (Both Approaches)
+
+When Reader Claude consistently answers questions correctly and doesn't surface new gaps or ambiguities, the doc is ready.
+
+## Final Review
+
+When Reader Testing passes:
+Announce the doc has passed Reader Claude testing. Before completion:
+
+1. Recommend they do a final read-through themselves - they own this document and are responsible for its quality
+2. Suggest double-checking any facts, links, or technical details
+3. Ask them to verify it achieves the impact they wanted
+
+Ask if they want one more review, or if the work is done.
+
+**If user wants final review, provide it. Otherwise:**
+Announce document completion. Provide a few final tips:
+- Consider linking this conversation in an appendix so readers can see how the doc was developed
+- Use appendices to provide depth without bloating the main doc
+- Update the doc as feedback is received from real readers
+
+## Tips for Effective Guidance
+
+**Tone:**
+- Be direct and procedural
+- Explain rationale briefly when it affects user behavior
+- Don't try to "sell" the approach - just execute it
+
+**Handling Deviations:**
+- If user wants to skip a stage: Ask if they want to skip this and write freeform
+- If user seems frustrated: Acknowledge this is taking longer than expected. Suggest ways to move faster
+- Always give user agency to adjust the process
+
+**Context Management:**
+- Throughout, if context is missing on something mentioned, proactively ask
+- Don't let gaps accumulate - address them as they come up
+
+**Artifact Management:**
+- Use `create_file` for drafting full sections
+- Use `str_replace` for all edits
+- Provide artifact link after every change
+- Never use artifacts for brainstorming lists - that's just conversation
+
+**Quality over Speed:**
+- Don't rush through stages
+- Each iteration should make meaningful improvements
+- The goal is a document that actually works for readers
diff --git a/skills/doc-coauthoring/doc-coauthoring b/skills/doc-coauthoring/doc-coauthoring
new file mode 120000
index 0000000..92f21d4
--- /dev/null
+++ b/skills/doc-coauthoring/doc-coauthoring
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/doc-coauthoring/
\ No newline at end of file
diff --git a/skills/docx/LICENSE.txt b/skills/docx/LICENSE.txt
new file mode 100644
index 0000000..c55ab42
--- /dev/null
+++ b/skills/docx/LICENSE.txt
@@ -0,0 +1,30 @@
+© 2025 Anthropic, PBC. All rights reserved.
+
+LICENSE: Use of these materials (including all code, prompts, assets, files,
+and other components of this Skill) is governed by your agreement with
+Anthropic regarding use of Anthropic's services. If no separate agreement
+exists, use is governed by Anthropic's Consumer Terms of Service or
+Commercial Terms of Service, as applicable:
+https://www.anthropic.com/legal/consumer-terms
+https://www.anthropic.com/legal/commercial-terms
+Your applicable agreement is referred to as the "Agreement." "Services" are
+as defined in the Agreement.
+
+ADDITIONAL RESTRICTIONS: Notwithstanding anything in the Agreement to the
+contrary, users may not:
+
+- Extract these materials from the Services or retain copies of these
+ materials outside the Services
+- Reproduce or copy these materials, except for temporary copies created
+ automatically during authorized use of the Services
+- Create derivative works based on these materials
+- Distribute, sublicense, or transfer these materials to any third party
+- Make, offer to sell, sell, or import any inventions embodied in these
+ materials
+- Reverse engineer, decompile, or disassemble these materials
+
+The receipt, viewing, or possession of these materials does not convey or
+imply any license or right beyond those expressly granted above.
+
+Anthropic retains all right, title, and interest in these materials,
+including all copyrights, patents, and other intellectual property rights.
diff --git a/skills/docx/SKILL.md b/skills/docx/SKILL.md
new file mode 100644
index 0000000..ad2e175
--- /dev/null
+++ b/skills/docx/SKILL.md
@@ -0,0 +1,481 @@
+---
+name: docx
+description: "Use this skill whenever the user wants to create, read, edit, or manipulate Word documents (.docx files). Triggers include: any mention of \"Word doc\", \"word document\", \".docx\", or requests to produce professional documents with formatting like tables of contents, headings, page numbers, or letterheads. Also use when extracting or reorganizing content from .docx files, inserting or replacing images in documents, performing find-and-replace in Word files, working with tracked changes or comments, or converting content into a polished Word document. If the user asks for a \"report\", \"memo\", \"letter\", \"template\", or similar deliverable as a Word or .docx file, use this skill. Do NOT use for PDFs, spreadsheets, Google Docs, or general coding tasks unrelated to document generation."
+license: Proprietary. See LICENSE.txt for complete terms
+---
+
+# DOCX creation, editing, and analysis
+
+## Overview
+
+A .docx file is a ZIP archive containing XML files.
+
+## Quick Reference
+
+| Task | Approach |
+|------|----------|
+| Read/analyze content | `pandoc` or unpack for raw XML |
+| Create new document | Use `docx-js` - see Creating New Documents below |
+| Edit existing document | Unpack → edit XML → repack - see Editing Existing Documents below |
+
+### Converting .doc to .docx
+
+Legacy `.doc` files must be converted before editing:
+
+```bash
+python scripts/office/soffice.py --headless --convert-to docx document.doc
+```
+
+### Reading Content
+
+```bash
+# Text extraction with tracked changes
+pandoc --track-changes=all document.docx -o output.md
+
+# Raw XML access
+python scripts/office/unpack.py document.docx unpacked/
+```
+
+### Converting to Images
+
+```bash
+python scripts/office/soffice.py --headless --convert-to pdf document.docx
+pdftoppm -jpeg -r 150 document.pdf page
+```
+
+### Accepting Tracked Changes
+
+To produce a clean document with all tracked changes accepted (requires LibreOffice):
+
+```bash
+python scripts/accept_changes.py input.docx output.docx
+```
+
+---
+
+## Creating New Documents
+
+Generate .docx files with JavaScript, then validate. Install: `npm install -g docx`
+
+### Setup
+```javascript
+const { Document, Packer, Paragraph, TextRun, Table, TableRow, TableCell, ImageRun,
+ Header, Footer, AlignmentType, PageOrientation, LevelFormat, ExternalHyperlink,
+ TableOfContents, HeadingLevel, BorderStyle, WidthType, ShadingType,
+ VerticalAlign, PageNumber, PageBreak } = require('docx');
+
+const doc = new Document({ sections: [{ children: [/* content */] }] });
+Packer.toBuffer(doc).then(buffer => fs.writeFileSync("doc.docx", buffer));
+```
+
+### Validation
+After creating the file, validate it. If validation fails, unpack, fix the XML, and repack.
+```bash
+python scripts/office/validate.py doc.docx
+```
+
+### Page Size
+
+```javascript
+// CRITICAL: docx-js defaults to A4, not US Letter
+// Always set page size explicitly for consistent results
+sections: [{
+ properties: {
+ page: {
+ size: {
+ width: 12240, // 8.5 inches in DXA
+ height: 15840 // 11 inches in DXA
+ },
+ margin: { top: 1440, right: 1440, bottom: 1440, left: 1440 } // 1 inch margins
+ }
+ },
+ children: [/* content */]
+}]
+```
+
+**Common page sizes (DXA units, 1440 DXA = 1 inch):**
+
+| Paper | Width | Height | Content Width (1" margins) |
+|-------|-------|--------|---------------------------|
+| US Letter | 12,240 | 15,840 | 9,360 |
+| A4 (default) | 11,906 | 16,838 | 9,026 |
+
+**Landscape orientation:** docx-js swaps width/height internally, so pass portrait dimensions and let it handle the swap:
+```javascript
+size: {
+ width: 12240, // Pass SHORT edge as width
+ height: 15840, // Pass LONG edge as height
+ orientation: PageOrientation.LANDSCAPE // docx-js swaps them in the XML
+},
+// Content width = 15840 - left margin - right margin (uses the long edge)
+```
+
+### Styles (Override Built-in Headings)
+
+Use Arial as the default font (universally supported). Keep titles black for readability.
+
+```javascript
+const doc = new Document({
+ styles: {
+ default: { document: { run: { font: "Arial", size: 24 } } }, // 12pt default
+ paragraphStyles: [
+ // IMPORTANT: Use exact IDs to override built-in styles
+ { id: "Heading1", name: "Heading 1", basedOn: "Normal", next: "Normal", quickFormat: true,
+ run: { size: 32, bold: true, font: "Arial" },
+ paragraph: { spacing: { before: 240, after: 240 }, outlineLevel: 0 } }, // outlineLevel required for TOC
+ { id: "Heading2", name: "Heading 2", basedOn: "Normal", next: "Normal", quickFormat: true,
+ run: { size: 28, bold: true, font: "Arial" },
+ paragraph: { spacing: { before: 180, after: 180 }, outlineLevel: 1 } },
+ ]
+ },
+ sections: [{
+ children: [
+ new Paragraph({ heading: HeadingLevel.HEADING_1, children: [new TextRun("Title")] }),
+ ]
+ }]
+});
+```
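+
+The `outlineLevel` values above are what a table of contents walks. A minimal sketch (a docx-js TOC is a Word field, so the reader must update fields in Word to populate it):
+
+```javascript
+// Place near the top of sections[0].children, before the body content
+new TableOfContents("Table of Contents", {
+  hyperlink: true,
+  headingStyleRange: "1-2", // pull in Heading1 and Heading2 entries
+}),
+```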
+
+### Lists (NEVER use unicode bullets)
+
+```javascript
+// ❌ WRONG - never manually insert bullet characters
+new Paragraph({ children: [new TextRun("• Item")] }) // BAD
+new Paragraph({ children: [new TextRun("\u2022 Item")] }) // BAD
+
+// ✅ CORRECT - use numbering config with LevelFormat.BULLET
+const doc = new Document({
+ numbering: {
+ config: [
+ { reference: "bullets",
+ levels: [{ level: 0, format: LevelFormat.BULLET, text: "•", alignment: AlignmentType.LEFT,
+ style: { paragraph: { indent: { left: 720, hanging: 360 } } } }] },
+ { reference: "numbers",
+ levels: [{ level: 0, format: LevelFormat.DECIMAL, text: "%1.", alignment: AlignmentType.LEFT,
+ style: { paragraph: { indent: { left: 720, hanging: 360 } } } }] },
+ ]
+ },
+ sections: [{
+ children: [
+ new Paragraph({ numbering: { reference: "bullets", level: 0 },
+ children: [new TextRun("Bullet item")] }),
+ new Paragraph({ numbering: { reference: "numbers", level: 0 },
+ children: [new TextRun("Numbered item")] }),
+ ]
+ }]
+});
+
+// ⚠️ Each reference creates INDEPENDENT numbering
+// Same reference = continues (1,2,3 then 4,5,6)
+// Different reference = restarts (1,2,3 then 1,2,3)
+```
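+
The continuation behavior above can be modeled as a plain counter map — a conceptual sketch, not docx-js API:

```javascript
// Conceptual model of numbering references: each reference keeps its own
// counter, so reusing a reference continues the sequence and a fresh
// reference restarts at 1.
function makeNumberingModel() {
  const counters = new Map();
  return function nextNumber(reference) {
    const next = (counters.get(reference) ?? 0) + 1;
    counters.set(reference, next);
    return next;
  };
}
```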
+
+### Tables
+
+**CRITICAL: Tables need dual widths** - set both `columnWidths` on the table AND `width` on each cell. Without both, tables render incorrectly on some platforms.
+
+```javascript
+// CRITICAL: Always set table width for consistent rendering
+// CRITICAL: Use ShadingType.CLEAR (not SOLID) to prevent black backgrounds
+const border = { style: BorderStyle.SINGLE, size: 1, color: "CCCCCC" };
+const borders = { top: border, bottom: border, left: border, right: border };
+
+new Table({
+ width: { size: 9360, type: WidthType.DXA }, // Always use DXA (percentages break in Google Docs)
+ columnWidths: [4680, 4680], // Must sum to table width (DXA: 1440 = 1 inch)
+ rows: [
+ new TableRow({
+ children: [
+ new TableCell({
+ borders,
+ width: { size: 4680, type: WidthType.DXA }, // Also set on each cell
+ shading: { fill: "D5E8F0", type: ShadingType.CLEAR }, // CLEAR not SOLID
+ margins: { top: 80, bottom: 80, left: 120, right: 120 }, // Cell padding (internal, not added to width)
+ children: [new Paragraph({ children: [new TextRun("Cell")] })]
+ })
+ ]
+ })
+ ]
+})
+```
+
+**Table width calculation:**
+
+Always use `WidthType.DXA` — `WidthType.PERCENTAGE` breaks in Google Docs.
+
+```javascript
+// Table width = sum of columnWidths = content width
+// US Letter with 1" margins: 12240 - 2880 = 9360 DXA
+width: { size: 9360, type: WidthType.DXA },
+columnWidths: [7000, 2360] // Must sum to table width
+```
+
+**Width rules:**
+- **Always use `WidthType.DXA`** — never `WidthType.PERCENTAGE` (incompatible with Google Docs)
+- Table width must equal the sum of `columnWidths`
+- Cell `width` must match corresponding `columnWidth`
+- Cell `margins` are internal padding - they reduce content area, not add to cell width
+- For full-width tables: use content width (page width minus left and right margins)
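+
These rules are easy to check mechanically. A hypothetical sanity-check helper (not part of docx-js), assuming 1440 DXA per inch:

```javascript
const DXA_PER_INCH = 1440;

// Content width = page width minus left and right margins (all in DXA).
function contentWidthDxa(pageWidth, marginLeft, marginRight) {
  return pageWidth - marginLeft - marginRight;
}

// columnWidths must sum exactly to the table width.
function columnsMatchTableWidth(tableWidth, columnWidths) {
  return columnWidths.reduce((sum, w) => sum + w, 0) === tableWidth;
}
```

For US Letter (12240 DXA wide) with 1-inch margins, `contentWidthDxa(12240, 1440, 1440)` yields the 9360 DXA used throughout this section.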
+
+### Images
+
+```javascript
+// CRITICAL: type parameter is REQUIRED
+new Paragraph({
+ children: [new ImageRun({
+ type: "png", // Required: png, jpg, jpeg, gif, bmp, svg
+ data: fs.readFileSync("image.png"),
+ transformation: { width: 200, height: 150 },
+ altText: { title: "Title", description: "Desc", name: "Name" } // All three required
+ })]
+})
+```
+
+### Page Breaks
+
+```javascript
+// CRITICAL: PageBreak must be inside a Paragraph
+new Paragraph({ children: [new PageBreak()] })
+
+// Or use pageBreakBefore
+new Paragraph({ pageBreakBefore: true, children: [new TextRun("New page")] })
+```
+
+### Table of Contents
+
+```javascript
+// CRITICAL: Headings must use HeadingLevel ONLY - no custom styles
+new TableOfContents("Table of Contents", { hyperlink: true, headingStyleRange: "1-3" })
+```
+
+### Headers/Footers
+
+```javascript
+sections: [{
+ properties: {
+ page: { margin: { top: 1440, right: 1440, bottom: 1440, left: 1440 } } // 1440 = 1 inch
+ },
+ headers: {
+ default: new Header({ children: [new Paragraph({ children: [new TextRun("Header")] })] })
+ },
+ footers: {
+ default: new Footer({ children: [new Paragraph({
+ children: [new TextRun("Page "), new TextRun({ children: [PageNumber.CURRENT] })]
+ })] })
+ },
+ children: [/* content */]
+}]
+```
+
+### Critical Rules for docx-js
+
+- **Set page size explicitly** - docx-js defaults to A4; use US Letter (12240 x 15840 DXA) for US documents
+- **Landscape: pass portrait dimensions** - docx-js swaps width/height internally; pass short edge as `width`, long edge as `height`, and set `orientation: PageOrientation.LANDSCAPE`
+- **Never use `\n`** - use separate Paragraph elements
+- **Never use unicode bullets** - use `LevelFormat.BULLET` with numbering config
+- **PageBreak must be in Paragraph** - standalone creates invalid XML
+- **ImageRun requires `type`** - always specify png/jpg/etc
+- **Always set table `width` with DXA** - never use `WidthType.PERCENTAGE` (breaks in Google Docs)
+- **Tables need dual widths** - `columnWidths` array AND cell `width`, both must match
+- **Table width = sum of columnWidths** - for DXA, ensure they add up exactly
+- **Always add cell margins** - use `margins: { top: 80, bottom: 80, left: 120, right: 120 }` for readable padding
+- **Use `ShadingType.CLEAR`** - never SOLID for table shading
+- **TOC requires HeadingLevel only** - no custom styles on heading paragraphs
+- **Override built-in styles** - use exact IDs: "Heading1", "Heading2", etc.
+- **Include `outlineLevel`** - required for TOC (0 for H1, 1 for H2, etc.)
+
+---
+
+## Editing Existing Documents
+
+**Follow all 3 steps in order.**
+
+### Step 1: Unpack
+```bash
+python scripts/office/unpack.py document.docx unpacked/
+```
+Extracts XML, pretty-prints, merges adjacent runs, and converts smart quotes to XML entities (`&#8220;` etc.) so they survive editing. Use `--merge-runs false` to skip run merging.
+
+### Step 2: Edit XML
+
+Edit files in `unpacked/word/`. See XML Reference below for patterns.
+
+**Use "Claude" as the author** for tracked changes and comments, unless the user explicitly requests use of a different name.
+
+**Use the Edit tool directly for string replacement. Do not write Python scripts.** Scripts introduce unnecessary complexity. The Edit tool shows exactly what is being replaced.
+
+**CRITICAL: Use smart quotes for new content.** When adding text with apostrophes or quotes, use XML entities to produce smart quotes:
+```xml
+<w:t>Here&#8217;s a quote: &#8220;Hello&#8221;</w:t>
+```
+| Entity | Character |
+|--------|-----------|
+| `&#8216;` | ‘ (left single) |
+| `&#8217;` | ’ (right single / apostrophe) |
+| `&#8220;` | “ (left double) |
+| `&#8221;` | ” (right double) |
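+
A small sketch of that substitution (illustrative; the bundled `comment.py` applies the same character-to-entity mapping when writing comment XML):

```javascript
// Map smart-quote characters to their numeric XML entities.
const SMART_QUOTES = {
  "\u2018": "&#8216;", // left single
  "\u2019": "&#8217;", // right single / apostrophe
  "\u201C": "&#8220;", // left double
  "\u201D": "&#8221;", // right double
};

function encodeSmartQuotes(text) {
  return text.replace(/[\u2018\u2019\u201C\u201D]/g, (ch) => SMART_QUOTES[ch]);
}
```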
+
+**Adding comments:** Use `comment.py` to handle boilerplate across multiple XML files (text must be pre-escaped XML):
+```bash
+python scripts/comment.py unpacked/ 0 "Comment text with &amp; and &#8217;"
+python scripts/comment.py unpacked/ 1 "Reply text" --parent 0 # reply to comment 0
+python scripts/comment.py unpacked/ 0 "Text" --author "Custom Author" # custom author name
+```
+Then add markers to document.xml (see Comments in XML Reference).
+
+### Step 3: Pack
+```bash
+python scripts/office/pack.py unpacked/ output.docx --original document.docx
+```
+Validates with auto-repair, condenses XML, and creates DOCX. Use `--validate false` to skip.
+
+**Auto-repair will fix:**
+- `durableId` >= 0x7FFFFFFF (regenerates valid ID)
+- Missing `xml:space="preserve"` on `<w:t>` elements with whitespace
+
+**Auto-repair won't fix:**
+- Malformed XML, invalid element nesting, missing relationships, schema violations
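+
The `durableId` rule can be checked (or pre-empted) before packing — a hypothetical validator mirroring what auto-repair enforces:

```javascript
// durableId values are 8 hex digits and must stay below 0x7FFFFFFF;
// pack.py's auto-repair regenerates any that are out of range.
function durableIdIsValid(hexId) {
  return /^[0-9A-Fa-f]{8}$/.test(hexId) && parseInt(hexId, 16) < 0x7fffffff;
}
```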
+
+### Common Pitfalls
+
+- **Replace entire `<w:r>` elements**: When adding tracked changes, replace the whole `<w:r>...</w:r>` block with `<w:del>...</w:del><w:ins>...</w:ins>` as siblings. Don't inject tracked change tags inside a run.
+- **Preserve `<w:rPr>` formatting**: Copy the original run's `<w:rPr>` block into your tracked change runs to maintain bold, font size, etc.
+
+---
+
+## XML Reference
+
+### Schema Compliance
+
+- **Element order in `<w:pPr>`**: `<w:pStyle>`, `<w:numPr>`, `<w:spacing>`, `<w:ind>`, `<w:jc>`, `<w:rPr>` last
+- **Whitespace**: Add `xml:space="preserve"` to `<w:t>` elements with leading/trailing spaces
+- **RSIDs**: Must be 8-digit hex (e.g., `00AB1234`)
+
+### Tracked Changes
+
+**Insertion:**
+```xml
+<w:ins w:id="1" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+  <w:r><w:t>inserted text</w:t></w:r>
+</w:ins>
+```
+
+**Deletion:**
+```xml
+<w:del w:id="2" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+  <w:r><w:delText>deleted text</w:delText></w:r>
+</w:del>
+```
+
+**Inside `<w:del>`**: Use `<w:delText>` instead of `<w:t>`, and `<w:delInstrText>` instead of `<w:instrText>`.
+
+**Minimal edits** - only mark what changes:
+```xml
+<w:r><w:t xml:space="preserve">The term is </w:t></w:r>
+<w:del w:id="3" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+  <w:r><w:delText>30</w:delText></w:r>
+</w:del>
+<w:ins w:id="4" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+  <w:r><w:t>60</w:t></w:r>
+</w:ins>
+<w:r><w:t xml:space="preserve"> days.</w:t></w:r>
+```
+
+**Deleting entire paragraphs/list items** - when removing ALL content from a paragraph, also mark the paragraph mark as deleted so it merges with the next paragraph. Add a `<w:del>` inside `<w:pPr><w:rPr>`:
+```xml
+<w:p>
+  <w:pPr>
+    <w:rPr>
+      <w:del w:id="5" w:author="Claude" w:date="2025-01-01T00:00:00Z"/>
+    </w:rPr>
+  </w:pPr>
+  <w:del w:id="6" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+    <w:r><w:delText>Entire paragraph content being deleted...</w:delText></w:r>
+  </w:del>
+</w:p>
+```
+Without the `<w:del>` in `<w:pPr><w:rPr>`, accepting changes leaves an empty paragraph/list item.
+
+**Rejecting another author's insertion** - nest deletion inside their insertion:
+```xml
+<w:ins w:id="7" w:author="Original Author" w:date="2025-01-01T00:00:00Z">
+  <w:del w:id="8" w:author="Claude" w:date="2025-01-02T00:00:00Z">
+    <w:r><w:delText>their inserted text</w:delText></w:r>
+  </w:del>
+</w:ins>
+```
+
+**Restoring another author's deletion** - add insertion after (don't modify their deletion):
+```xml
+<w:del w:id="9" w:author="Original Author" w:date="2025-01-01T00:00:00Z">
+  <w:r><w:delText>deleted text</w:delText></w:r>
+</w:del>
+<w:ins w:id="10" w:author="Claude" w:date="2025-01-02T00:00:00Z">
+  <w:r><w:t>deleted text</w:t></w:r>
+</w:ins>
+```
+
+### Comments
+
+After running `comment.py` (see Step 2), add markers to document.xml. For replies, use `--parent` flag and nest markers inside the parent's.
+
+**CRITICAL: `<w:commentRangeStart>` and `<w:commentRangeEnd>` are siblings of `<w:r>`, never inside `<w:r>`.**
+
+```xml
+<w:p>
+  <w:commentRangeStart w:id="0"/>
+  <w:del w:id="11" w:author="Claude" w:date="2025-01-01T00:00:00Z">
+    <w:r><w:delText>deleted</w:delText></w:r>
+  </w:del>
+  <w:r><w:t>more text</w:t></w:r>
+  <w:commentRangeEnd w:id="0"/>
+  <w:r><w:commentReference w:id="0"/></w:r>
+</w:p>
+
+<!-- A second comment on other text; replies nest inside the parent's markers -->
+<w:p>
+  <w:commentRangeStart w:id="1"/>
+  <w:r><w:t>text</w:t></w:r>
+  <w:commentRangeEnd w:id="1"/>
+  <w:r><w:commentReference w:id="1"/></w:r>
+</w:p>
+```
+
+### Images
+
+1. Add image file to `word/media/`
+2. Add relationship to `word/_rels/document.xml.rels`:
+```xml
+<Relationship Id="rId100" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/image" Target="media/image1.png"/>
+```
+3. Add content type to `[Content_Types].xml`:
+```xml
+<Default Extension="png" ContentType="image/png"/>
+```
+4. Reference in document.xml:
+```xml
+<w:p>
+  <w:r>
+    <w:drawing>
+      <wp:inline distT="0" distB="0" distL="0" distR="0">
+        <wp:extent cx="2743200" cy="1828800"/><!-- EMU: 914400 = 1 inch -->
+        <wp:docPr id="1" name="Image 1"/>
+        <a:graphic xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main">
+          <a:graphicData uri="http://schemas.openxmlformats.org/drawingml/2006/picture">
+            <pic:pic xmlns:pic="http://schemas.openxmlformats.org/drawingml/2006/picture">
+              <pic:nvPicPr><pic:cNvPr id="1" name="Image 1"/><pic:cNvPicPr/></pic:nvPicPr>
+              <pic:blipFill><a:blip r:embed="rId100"/><a:stretch><a:fillRect/></a:stretch></pic:blipFill>
+              <pic:spPr>
+                <a:xfrm><a:off x="0" y="0"/><a:ext cx="2743200" cy="1828800"/></a:xfrm>
+                <a:prstGeom prst="rect"><a:avLst/></a:prstGeom>
+              </pic:spPr>
+            </pic:pic>
+          </a:graphicData>
+        </a:graphic>
+      </wp:inline>
+    </w:drawing>
+  </w:r>
+</w:p>
+```
+
+---
+
+## Dependencies
+
+- **pandoc**: Text extraction
+- **docx**: `npm install -g docx` (new documents)
+- **LibreOffice**: PDF conversion (auto-configured for sandboxed environments via `scripts/office/soffice.py`)
+- **Poppler**: `pdftoppm` for images
diff --git a/skills/docx/docx b/skills/docx/docx
new file mode 120000
index 0000000..0339c8c
--- /dev/null
+++ b/skills/docx/docx
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/docx/
\ No newline at end of file
diff --git a/skills/docx/scripts/__init__.py b/skills/docx/scripts/__init__.py
new file mode 100755
index 0000000..8b13789
--- /dev/null
+++ b/skills/docx/scripts/__init__.py
@@ -0,0 +1 @@
+
diff --git a/skills/docx/scripts/accept_changes.py b/skills/docx/scripts/accept_changes.py
new file mode 100755
index 0000000..8e36316
--- /dev/null
+++ b/skills/docx/scripts/accept_changes.py
@@ -0,0 +1,135 @@
+"""Accept all tracked changes in a DOCX file using LibreOffice.
+
+Requires LibreOffice (soffice) to be installed.
+"""
+
+import argparse
+import logging
+import shutil
+import subprocess
+from pathlib import Path
+
+from office.soffice import get_soffice_env
+
+logger = logging.getLogger(__name__)
+
+LIBREOFFICE_PROFILE = "/tmp/libreoffice_docx_profile"
+MACRO_DIR = f"{LIBREOFFICE_PROFILE}/user/basic/Standard"
+
+ACCEPT_CHANGES_MACRO = """<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE script:module PUBLIC "-//OpenOffice.org//DTD OfficeDocument 1.0//EN" "module.dtd">
+<script:module xmlns:script="http://openoffice.org/2000/script" script:name="Module1" script:language="StarBasic">
+Sub AcceptAllTrackedChanges()
+    Dim document As Object
+    Dim dispatcher As Object
+
+    document = ThisComponent.CurrentController.Frame
+    dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")
+
+    dispatcher.executeDispatch(document, ".uno:AcceptAllTrackedChanges", "", 0, Array())
+    ThisComponent.store()
+    ThisComponent.close(True)
+End Sub
+</script:module>"""
+
+
+def accept_changes(
+ input_file: str,
+ output_file: str,
+) -> tuple[None, str]:
+ input_path = Path(input_file)
+ output_path = Path(output_file)
+
+ if not input_path.exists():
+ return None, f"Error: Input file not found: {input_file}"
+
+ if input_path.suffix.lower() != ".docx":
+ return None, f"Error: Input file is not a DOCX file: {input_file}"
+
+ try:
+ output_path.parent.mkdir(parents=True, exist_ok=True)
+ shutil.copy2(input_path, output_path)
+ except Exception as e:
+ return None, f"Error: Failed to copy input file to output location: {e}"
+
+ if not _setup_libreoffice_macro():
+ return None, "Error: Failed to setup LibreOffice macro"
+
+ cmd = [
+ "soffice",
+ "--headless",
+ f"-env:UserInstallation=file://{LIBREOFFICE_PROFILE}",
+ "--norestore",
+ "vnd.sun.star.script:Standard.Module1.AcceptAllTrackedChanges?language=Basic&location=application",
+ str(output_path.absolute()),
+ ]
+
+ try:
+ result = subprocess.run(
+ cmd,
+ capture_output=True,
+ text=True,
+ timeout=30,
+ check=False,
+ env=get_soffice_env(),
+ )
+ except subprocess.TimeoutExpired:
+ # LibreOffice can hang after the macro has already stored and closed the
+ # document; the output file is written by then, so report success.
+ return (
+ None,
+ f"Successfully accepted all tracked changes: {input_file} -> {output_file}",
+ )
+
+ if result.returncode != 0:
+ return None, f"Error: LibreOffice failed: {result.stderr}"
+
+ return (
+ None,
+ f"Successfully accepted all tracked changes: {input_file} -> {output_file}",
+ )
+
+
+def _setup_libreoffice_macro() -> bool:
+ macro_dir = Path(MACRO_DIR)
+ macro_file = macro_dir / "Module1.xba"
+
+ if macro_file.exists() and "AcceptAllTrackedChanges" in macro_file.read_text():
+ return True
+
+ if not macro_dir.exists():
+ subprocess.run(
+ [
+ "soffice",
+ "--headless",
+ f"-env:UserInstallation=file://{LIBREOFFICE_PROFILE}",
+ "--terminate_after_init",
+ ],
+ capture_output=True,
+ timeout=10,
+ check=False,
+ env=get_soffice_env(),
+ )
+ macro_dir.mkdir(parents=True, exist_ok=True)
+
+ try:
+ macro_file.write_text(ACCEPT_CHANGES_MACRO)
+ return True
+ except Exception as e:
+ logger.warning(f"Failed to setup LibreOffice macro: {e}")
+ return False
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(
+ description="Accept all tracked changes in a DOCX file"
+ )
+ parser.add_argument("input_file", help="Input DOCX file with tracked changes")
+ parser.add_argument(
+ "output_file", help="Output DOCX file (clean, no tracked changes)"
+ )
+ args = parser.parse_args()
+
+ _, message = accept_changes(args.input_file, args.output_file)
+ print(message)
+
+ if "Error" in message:
+ raise SystemExit(1)
diff --git a/skills/docx/scripts/comment.py b/skills/docx/scripts/comment.py
new file mode 100755
index 0000000..36e1c93
--- /dev/null
+++ b/skills/docx/scripts/comment.py
@@ -0,0 +1,318 @@
+"""Add comments to DOCX documents.
+
+Usage:
+ python comment.py unpacked/ 0 "Comment text"
+ python comment.py unpacked/ 1 "Reply text" --parent 0
+
+Text should be pre-escaped XML (e.g., &amp; for &, &#8217; for smart quotes).
+
+After running, add markers to document.xml:
+
+    <w:commentRangeStart w:id="0"/>
+    ... commented content ...
+    <w:commentRangeEnd w:id="0"/>
+    <w:r><w:commentReference w:id="0"/></w:r>
+"""
+
+import argparse
+import random
+import shutil
+import sys
+from datetime import datetime, timezone
+from pathlib import Path
+
+import defusedxml.minidom
+
+TEMPLATE_DIR = Path(__file__).parent / "templates"
+NS = {
+ "w": "http://schemas.openxmlformats.org/wordprocessingml/2006/main",
+ "w14": "http://schemas.microsoft.com/office/word/2010/wordml",
+ "w15": "http://schemas.microsoft.com/office/word/2012/wordml",
+ "w16cid": "http://schemas.microsoft.com/office/word/2016/wordml/cid",
+ "w16cex": "http://schemas.microsoft.com/office/word/2018/wordml/cex",
+}
+
+COMMENT_XML = """\
+<w:comment w:id="{id}" w:author="{author}" w:initials="{initials}" w:date="{date}">
+  <w:p w14:paraId="{para_id}">
+    <w:pPr>
+      <w:pStyle w:val="CommentText"/>
+    </w:pPr>
+    <w:r>
+      <w:rPr>
+        <w:rStyle w:val="CommentReference"/>
+      </w:rPr>
+      <w:annotationRef/>
+    </w:r>
+    <w:r>
+      <w:t xml:space="preserve">{text}</w:t>
+    </w:r>
+  </w:p>
+</w:comment>"""
+
+COMMENT_MARKER_TEMPLATE = """
+Add to document.xml (markers must be direct children of w:p, never inside w:r):
+
+  <w:commentRangeStart w:id="{cid}"/> ... <w:commentRangeEnd w:id="{cid}"/>
+  <w:r><w:commentReference w:id="{cid}"/></w:r>
+"""
+
+REPLY_MARKER_TEMPLATE = """
+Nest markers inside parent {pid}'s markers (markers must be direct children of w:p, never inside w:r):
+
+  <w:commentRangeStart w:id="{pid}"/> <w:commentRangeStart w:id="{cid}"/> ...
+  <w:commentRangeEnd w:id="{cid}"/> <w:r><w:commentReference w:id="{cid}"/></w:r>
+  <w:commentRangeEnd w:id="{pid}"/> <w:r><w:commentReference w:id="{pid}"/></w:r>
+"""
+
+
+def _generate_hex_id() -> str:
+ return f"{random.randint(0, 0x7FFFFFFE):08X}"
+
+
+SMART_QUOTE_ENTITIES = {
+    "\u201c": "&#8220;",
+    "\u201d": "&#8221;",
+    "\u2018": "&#8216;",
+    "\u2019": "&#8217;",
+}
+
+
+def _encode_smart_quotes(text: str) -> str:
+ for char, entity in SMART_QUOTE_ENTITIES.items():
+ text = text.replace(char, entity)
+ return text
+
+
+def _append_xml(xml_path: Path, root_tag: str, content: str) -> None:
+ dom = defusedxml.minidom.parseString(xml_path.read_text(encoding="utf-8"))
+ root = dom.getElementsByTagName(root_tag)[0]
+ ns_attrs = " ".join(f'xmlns:{k}="{v}"' for k, v in NS.items())
+ wrapper_dom = defusedxml.minidom.parseString(f"<wrapper {ns_attrs}>{content}</wrapper>")
+ for child in wrapper_dom.documentElement.childNodes:
+ if child.nodeType == child.ELEMENT_NODE:
+ root.appendChild(dom.importNode(child, True))
+ output = _encode_smart_quotes(dom.toxml(encoding="UTF-8").decode("utf-8"))
+ xml_path.write_text(output, encoding="utf-8")
+
+
+def _find_para_id(comments_path: Path, comment_id: int) -> str | None:
+ dom = defusedxml.minidom.parseString(comments_path.read_text(encoding="utf-8"))
+ for c in dom.getElementsByTagName("w:comment"):
+ if c.getAttribute("w:id") == str(comment_id):
+ for p in c.getElementsByTagName("w:p"):
+ if pid := p.getAttribute("w14:paraId"):
+ return pid
+ return None
+
+
+def _get_next_rid(rels_path: Path) -> int:
+ dom = defusedxml.minidom.parseString(rels_path.read_text(encoding="utf-8"))
+ max_rid = 0
+ for rel in dom.getElementsByTagName("Relationship"):
+ rid = rel.getAttribute("Id")
+ if rid and rid.startswith("rId"):
+ try:
+ max_rid = max(max_rid, int(rid[3:]))
+ except ValueError:
+ pass
+ return max_rid + 1
+
+
+def _has_relationship(rels_path: Path, target: str) -> bool:
+ dom = defusedxml.minidom.parseString(rels_path.read_text(encoding="utf-8"))
+ for rel in dom.getElementsByTagName("Relationship"):
+ if rel.getAttribute("Target") == target:
+ return True
+ return False
+
+
+def _has_content_type(ct_path: Path, part_name: str) -> bool:
+ dom = defusedxml.minidom.parseString(ct_path.read_text(encoding="utf-8"))
+ for override in dom.getElementsByTagName("Override"):
+ if override.getAttribute("PartName") == part_name:
+ return True
+ return False
+
+
+def _ensure_comment_relationships(unpacked_dir: Path) -> None:
+ rels_path = unpacked_dir / "word" / "_rels" / "document.xml.rels"
+ if not rels_path.exists():
+ return
+
+ if _has_relationship(rels_path, "comments.xml"):
+ return
+
+ dom = defusedxml.minidom.parseString(rels_path.read_text(encoding="utf-8"))
+ root = dom.documentElement
+ next_rid = _get_next_rid(rels_path)
+
+ rels = [
+ (
+ "http://schemas.openxmlformats.org/officeDocument/2006/relationships/comments",
+ "comments.xml",
+ ),
+ (
+ "http://schemas.microsoft.com/office/2011/relationships/commentsExtended",
+ "commentsExtended.xml",
+ ),
+ (
+ "http://schemas.microsoft.com/office/2016/09/relationships/commentsIds",
+ "commentsIds.xml",
+ ),
+ (
+ "http://schemas.microsoft.com/office/2018/08/relationships/commentsExtensible",
+ "commentsExtensible.xml",
+ ),
+ ]
+
+ for rel_type, target in rels:
+ rel = dom.createElement("Relationship")
+ rel.setAttribute("Id", f"rId{next_rid}")
+ rel.setAttribute("Type", rel_type)
+ rel.setAttribute("Target", target)
+ root.appendChild(rel)
+ next_rid += 1
+
+ rels_path.write_bytes(dom.toxml(encoding="UTF-8"))
+
+
+def _ensure_comment_content_types(unpacked_dir: Path) -> None:
+ ct_path = unpacked_dir / "[Content_Types].xml"
+ if not ct_path.exists():
+ return
+
+ if _has_content_type(ct_path, "/word/comments.xml"):
+ return
+
+ dom = defusedxml.minidom.parseString(ct_path.read_text(encoding="utf-8"))
+ root = dom.documentElement
+
+ overrides = [
+ (
+ "/word/comments.xml",
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.comments+xml",
+ ),
+ (
+ "/word/commentsExtended.xml",
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.commentsExtended+xml",
+ ),
+ (
+ "/word/commentsIds.xml",
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.commentsIds+xml",
+ ),
+ (
+ "/word/commentsExtensible.xml",
+ "application/vnd.openxmlformats-officedocument.wordprocessingml.commentsExtensible+xml",
+ ),
+ ]
+
+ for part_name, content_type in overrides:
+ override = dom.createElement("Override")
+ override.setAttribute("PartName", part_name)
+ override.setAttribute("ContentType", content_type)
+ root.appendChild(override)
+
+ ct_path.write_bytes(dom.toxml(encoding="UTF-8"))
+
+
+def add_comment(
+ unpacked_dir: str,
+ comment_id: int,
+ text: str,
+ author: str = "Claude",
+ initials: str = "C",
+ parent_id: int | None = None,
+) -> tuple[str, str]:
+ word = Path(unpacked_dir) / "word"
+ if not word.exists():
+ return "", f"Error: {word} not found"
+
+ para_id, durable_id = _generate_hex_id(), _generate_hex_id()
+ ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
+
+ comments = word / "comments.xml"
+ first_comment = not comments.exists()
+ if first_comment:
+ shutil.copy(TEMPLATE_DIR / "comments.xml", comments)
+ _ensure_comment_relationships(Path(unpacked_dir))
+ _ensure_comment_content_types(Path(unpacked_dir))
+ _append_xml(
+ comments,
+ "w:comments",
+ COMMENT_XML.format(
+ id=comment_id,
+ author=author,
+ date=ts,
+ initials=initials,
+ para_id=para_id,
+ text=text,
+ ),
+ )
+
+ ext = word / "commentsExtended.xml"
+ if not ext.exists():
+ shutil.copy(TEMPLATE_DIR / "commentsExtended.xml", ext)
+ if parent_id is not None:
+ parent_para = _find_para_id(comments, parent_id)
+ if not parent_para:
+ return "", f"Error: Parent comment {parent_id} not found"
+ _append_xml(
+ ext,
+ "w15:commentsEx",
+ f'<w15:commentEx w15:paraId="{para_id}" w15:paraIdParent="{parent_para}" w15:done="0"/>',
+ )
+ else:
+ _append_xml(
+ ext,
+ "w15:commentsEx",
+ f'<w15:commentEx w15:paraId="{para_id}" w15:done="0"/>',
+ )
+
+ ids = word / "commentsIds.xml"
+ if not ids.exists():
+ shutil.copy(TEMPLATE_DIR / "commentsIds.xml", ids)
+ _append_xml(
+ ids,
+ "w16cid:commentsIds",
+ f'<w16cid:commentId w16cid:paraId="{para_id}" w16cid:durableId="{durable_id}"/>',
+ )
+
+ extensible = word / "commentsExtensible.xml"
+ if not extensible.exists():
+ shutil.copy(TEMPLATE_DIR / "commentsExtensible.xml", extensible)
+ _append_xml(
+ extensible,
+ "w16cex:commentsExtensible",
+ f'<w16cex:commentExtensible w16cex:durableId="{durable_id}" w16cex:dateUtc="{ts}"/>',
+ )
+
+ action = "reply" if parent_id is not None else "comment"
+ return para_id, f"Added {action} {comment_id} (para_id={para_id})"
+
+
+if __name__ == "__main__":
+ p = argparse.ArgumentParser(description="Add comments to DOCX documents")
+ p.add_argument("unpacked_dir", help="Unpacked DOCX directory")
+ p.add_argument("comment_id", type=int, help="Comment ID (must be unique)")
+ p.add_argument("text", help="Comment text")
+ p.add_argument("--author", default="Claude", help="Author name")
+ p.add_argument("--initials", default="C", help="Author initials")
+ p.add_argument("--parent", type=int, help="Parent comment ID (for replies)")
+ args = p.parse_args()
+
+ para_id, msg = add_comment(
+ args.unpacked_dir,
+ args.comment_id,
+ args.text,
+ args.author,
+ args.initials,
+ args.parent,
+ )
+ print(msg)
+ if "Error" in msg:
+ sys.exit(1)
+ cid = args.comment_id
+ if args.parent is not None:
+ print(REPLY_MARKER_TEMPLATE.format(pid=args.parent, cid=cid))
+ else:
+ print(COMMENT_MARKER_TEMPLATE.format(cid=cid))
diff --git a/skills/docx/scripts/office/helpers/__init__.py b/skills/docx/scripts/office/helpers/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/skills/docx/scripts/office/helpers/merge_runs.py b/skills/docx/scripts/office/helpers/merge_runs.py
new file mode 100644
index 0000000..ad7c25e
--- /dev/null
+++ b/skills/docx/scripts/office/helpers/merge_runs.py
@@ -0,0 +1,199 @@
+"""Merge adjacent runs with identical formatting in DOCX.
+
+Merges adjacent <w:r> elements that have identical <w:rPr> properties.
+Works on runs in paragraphs and inside tracked changes (<w:ins>, <w:del>).
+
+Also:
+- Removes rsid attributes from runs (revision metadata that doesn't affect rendering)
+- Removes proofErr elements (spell/grammar markers that block merging)
+"""
+
+from pathlib import Path
+
+import defusedxml.minidom
+
+
+def merge_runs(input_dir: str) -> tuple[int, str]:
+ doc_xml = Path(input_dir) / "word" / "document.xml"
+
+ if not doc_xml.exists():
+ return 0, f"Error: {doc_xml} not found"
+
+ try:
+ dom = defusedxml.minidom.parseString(doc_xml.read_text(encoding="utf-8"))
+ root = dom.documentElement
+
+ _remove_elements(root, "proofErr")
+ _strip_run_rsid_attrs(root)
+
+ containers = {run.parentNode for run in _find_elements(root, "r")}
+
+ merge_count = 0
+ for container in containers:
+ merge_count += _merge_runs_in(container)
+
+ doc_xml.write_bytes(dom.toxml(encoding="UTF-8"))
+ return merge_count, f"Merged {merge_count} runs"
+
+ except Exception as e:
+ return 0, f"Error: {e}"
+
+
+
+
+def _find_elements(root, tag: str) -> list:
+ results = []
+
+ def traverse(node):
+ if node.nodeType == node.ELEMENT_NODE:
+ name = node.localName or node.tagName
+ if name == tag or name.endswith(f":{tag}"):
+ results.append(node)
+ for child in node.childNodes:
+ traverse(child)
+
+ traverse(root)
+ return results
+
+
+def _get_child(parent, tag: str):
+ for child in parent.childNodes:
+ if child.nodeType == child.ELEMENT_NODE:
+ name = child.localName or child.tagName
+ if name == tag or name.endswith(f":{tag}"):
+ return child
+ return None
+
+
+def _get_children(parent, tag: str) -> list:
+ results = []
+ for child in parent.childNodes:
+ if child.nodeType == child.ELEMENT_NODE:
+ name = child.localName or child.tagName
+ if name == tag or name.endswith(f":{tag}"):
+ results.append(child)
+ return results
+
+
+def _is_adjacent(elem1, elem2) -> bool:
+ node = elem1.nextSibling
+ while node:
+ if node == elem2:
+ return True
+ if node.nodeType == node.ELEMENT_NODE:
+ return False
+ if node.nodeType == node.TEXT_NODE and node.data.strip():
+ return False
+ node = node.nextSibling
+ return False
+
+
+
+
+def _remove_elements(root, tag: str):
+ for elem in _find_elements(root, tag):
+ if elem.parentNode:
+ elem.parentNode.removeChild(elem)
+
+
+def _strip_run_rsid_attrs(root):
+ for run in _find_elements(root, "r"):
+ for attr in list(run.attributes.values()):
+ if "rsid" in attr.name.lower():
+ run.removeAttribute(attr.name)
+
+
+
+
+def _merge_runs_in(container) -> int:
+ merge_count = 0
+ run = _first_child_run(container)
+
+ while run:
+ while True:
+ next_elem = _next_element_sibling(run)
+ if next_elem and _is_run(next_elem) and _can_merge(run, next_elem):
+ _merge_run_content(run, next_elem)
+ container.removeChild(next_elem)
+ merge_count += 1
+ else:
+ break
+
+ _consolidate_text(run)
+ run = _next_sibling_run(run)
+
+ return merge_count
+
+
+def _first_child_run(container):
+ for child in container.childNodes:
+ if child.nodeType == child.ELEMENT_NODE and _is_run(child):
+ return child
+ return None
+
+
+def _next_element_sibling(node):
+ sibling = node.nextSibling
+ while sibling:
+ if sibling.nodeType == sibling.ELEMENT_NODE:
+ return sibling
+ sibling = sibling.nextSibling
+ return None
+
+
+def _next_sibling_run(node):
+ sibling = node.nextSibling
+ while sibling:
+ if sibling.nodeType == sibling.ELEMENT_NODE:
+ if _is_run(sibling):
+ return sibling
+ sibling = sibling.nextSibling
+ return None
+
+
+def _is_run(node) -> bool:
+ name = node.localName or node.tagName
+ return name == "r" or name.endswith(":r")
+
+
+def _can_merge(run1, run2) -> bool:
+ rpr1 = _get_child(run1, "rPr")
+ rpr2 = _get_child(run2, "rPr")
+
+ if (rpr1 is None) != (rpr2 is None):
+ return False
+ if rpr1 is None:
+ return True
+ return rpr1.toxml() == rpr2.toxml()
+
+
+def _merge_run_content(target, source):
+ for child in list(source.childNodes):
+ if child.nodeType == child.ELEMENT_NODE:
+ name = child.localName or child.tagName
+ if name != "rPr" and not name.endswith(":rPr"):
+ target.appendChild(child)
+
+
+def _consolidate_text(run):
+ t_elements = _get_children(run, "t")
+
+ for i in range(len(t_elements) - 1, 0, -1):
+ curr, prev = t_elements[i], t_elements[i - 1]
+
+ if _is_adjacent(prev, curr):
+ prev_text = prev.firstChild.data if prev.firstChild else ""
+ curr_text = curr.firstChild.data if curr.firstChild else ""
+ merged = prev_text + curr_text
+
+ if prev.firstChild:
+ prev.firstChild.data = merged
+ else:
+ prev.appendChild(run.ownerDocument.createTextNode(merged))
+
+ if merged.startswith(" ") or merged.endswith(" "):
+ prev.setAttribute("xml:space", "preserve")
+ elif prev.hasAttribute("xml:space"):
+ prev.removeAttribute("xml:space")
+
+ run.removeChild(curr)
diff --git a/skills/docx/scripts/office/helpers/simplify_redlines.py b/skills/docx/scripts/office/helpers/simplify_redlines.py
new file mode 100644
index 0000000..db963bb
--- /dev/null
+++ b/skills/docx/scripts/office/helpers/simplify_redlines.py
@@ -0,0 +1,197 @@
+"""Simplify tracked changes by merging adjacent w:ins or w:del elements.
+
+Merges adjacent <w:ins> elements from the same author into a single element.
+Same for <w:del> elements. This makes heavily-redlined documents easier to
+work with by reducing the number of tracked change wrappers.
+
+Rules:
+- Only merges w:ins with w:ins, w:del with w:del (same element type)
+- Only merges if same author (ignores timestamp differences)
+- Only merges if truly adjacent (only whitespace between them)
+"""
+
+import xml.etree.ElementTree as ET
+import zipfile
+from pathlib import Path
+
+import defusedxml.minidom
+
+WORD_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
+
+
+def simplify_redlines(input_dir: str) -> tuple[int, str]:
+ doc_xml = Path(input_dir) / "word" / "document.xml"
+
+ if not doc_xml.exists():
+ return 0, f"Error: {doc_xml} not found"
+
+ try:
+ dom = defusedxml.minidom.parseString(doc_xml.read_text(encoding="utf-8"))
+ root = dom.documentElement
+
+ merge_count = 0
+
+ containers = _find_elements(root, "p") + _find_elements(root, "tc")
+
+ for container in containers:
+ merge_count += _merge_tracked_changes_in(container, "ins")
+ merge_count += _merge_tracked_changes_in(container, "del")
+
+ doc_xml.write_bytes(dom.toxml(encoding="UTF-8"))
+ return merge_count, f"Simplified {merge_count} tracked changes"
+
+ except Exception as e:
+ return 0, f"Error: {e}"
+
+
+def _merge_tracked_changes_in(container, tag: str) -> int:
+ merge_count = 0
+
+ tracked = [
+ child
+ for child in container.childNodes
+ if child.nodeType == child.ELEMENT_NODE and _is_element(child, tag)
+ ]
+
+ if len(tracked) < 2:
+ return 0
+
+ i = 0
+ while i < len(tracked) - 1:
+ curr = tracked[i]
+ next_elem = tracked[i + 1]
+
+ if _can_merge_tracked(curr, next_elem):
+ _merge_tracked_content(curr, next_elem)
+ container.removeChild(next_elem)
+ tracked.pop(i + 1)
+ merge_count += 1
+ else:
+ i += 1
+
+ return merge_count
+
+
+def _is_element(node, tag: str) -> bool:
+ name = node.localName or node.tagName
+ return name == tag or name.endswith(f":{tag}")
+
+
+def _get_author(elem) -> str:
+ author = elem.getAttribute("w:author")
+ if not author:
+ for attr in elem.attributes.values():
+ if attr.localName == "author" or attr.name.endswith(":author"):
+ return attr.value
+ return author
+
+
+def _can_merge_tracked(elem1, elem2) -> bool:
+ if _get_author(elem1) != _get_author(elem2):
+ return False
+
+ node = elem1.nextSibling
+ while node and node != elem2:
+ if node.nodeType == node.ELEMENT_NODE:
+ return False
+ if node.nodeType == node.TEXT_NODE and node.data.strip():
+ return False
+ node = node.nextSibling
+
+ return True
+
+
+def _merge_tracked_content(target, source):
+ while source.firstChild:
+ child = source.firstChild
+ source.removeChild(child)
+ target.appendChild(child)
+
+
+def _find_elements(root, tag: str) -> list:
+ results = []
+
+ def traverse(node):
+ if node.nodeType == node.ELEMENT_NODE:
+ name = node.localName or node.tagName
+ if name == tag or name.endswith(f":{tag}"):
+ results.append(node)
+ for child in node.childNodes:
+ traverse(child)
+
+ traverse(root)
+ return results
+
+
+def get_tracked_change_authors(doc_xml_path: Path) -> dict[str, int]:
+ if not doc_xml_path.exists():
+ return {}
+
+ try:
+ tree = ET.parse(doc_xml_path)
+ root = tree.getroot()
+ except ET.ParseError:
+ return {}
+
+ namespaces = {"w": WORD_NS}
+ author_attr = f"{{{WORD_NS}}}author"
+
+ authors: dict[str, int] = {}
+ for tag in ["ins", "del"]:
+ for elem in root.findall(f".//w:{tag}", namespaces):
+ author = elem.get(author_attr)
+ if author:
+ authors[author] = authors.get(author, 0) + 1
+
+ return authors
+
+
+def _get_authors_from_docx(docx_path: Path) -> dict[str, int]:
+ try:
+ with zipfile.ZipFile(docx_path, "r") as zf:
+ if "word/document.xml" not in zf.namelist():
+ return {}
+ with zf.open("word/document.xml") as f:
+ tree = ET.parse(f)
+ root = tree.getroot()
+
+ namespaces = {"w": WORD_NS}
+ author_attr = f"{{{WORD_NS}}}author"
+
+ authors: dict[str, int] = {}
+ for tag in ["ins", "del"]:
+ for elem in root.findall(f".//w:{tag}", namespaces):
+ author = elem.get(author_attr)
+ if author:
+ authors[author] = authors.get(author, 0) + 1
+ return authors
+ except (zipfile.BadZipFile, ET.ParseError):
+ return {}
+
+
+def infer_author(modified_dir: Path, original_docx: Path, default: str = "Claude") -> str:
+ modified_xml = modified_dir / "word" / "document.xml"
+ modified_authors = get_tracked_change_authors(modified_xml)
+
+ if not modified_authors:
+ return default
+
+ original_authors = _get_authors_from_docx(original_docx)
+
+ new_changes: dict[str, int] = {}
+ for author, count in modified_authors.items():
+ original_count = original_authors.get(author, 0)
+ diff = count - original_count
+ if diff > 0:
+ new_changes[author] = diff
+
+ if not new_changes:
+ return default
+
+ if len(new_changes) == 1:
+ return next(iter(new_changes))
+
+ raise ValueError(
+ f"Multiple authors added new changes: {new_changes}. "
+ "Cannot infer which author to validate."
+ )
diff --git a/skills/docx/scripts/office/pack.py b/skills/docx/scripts/office/pack.py
new file mode 100755
index 0000000..db29ed8
--- /dev/null
+++ b/skills/docx/scripts/office/pack.py
@@ -0,0 +1,159 @@
+"""Pack a directory into a DOCX, PPTX, or XLSX file.
+
+Validates with auto-repair, condenses XML formatting, and creates the Office file.
+
+Usage:
+    python pack.py <input_directory> <output_file> [--original <original_file>] [--validate true|false]
+
+Examples:
+    python pack.py unpacked/ output.docx --original input.docx
+    python pack.py unpacked/ output.pptx --validate false
+"""
+
+import argparse
+import sys
+import shutil
+import tempfile
+import zipfile
+from pathlib import Path
+
+import defusedxml.minidom
+
+from validators import DOCXSchemaValidator, PPTXSchemaValidator, RedliningValidator
+
+def pack(
+    input_directory: str,
+    output_file: str,
+    original_file: str | None = None,
+    validate: bool = True,
+    infer_author_func=None,
+) -> tuple[None, str]:
+    input_dir = Path(input_directory)
+    output_path = Path(output_file)
+    suffix = output_path.suffix.lower()
+
+    if not input_dir.is_dir():
+        return None, f"Error: {input_dir} is not a directory"
+
+    if suffix not in {".docx", ".pptx", ".xlsx"}:
+        return None, f"Error: {output_file} must be a .docx, .pptx, or .xlsx file"
+
+    if validate and original_file:
+        original_path = Path(original_file)
+        if original_path.exists():
+            success, output = _run_validation(
+                input_dir, original_path, suffix, infer_author_func
+            )
+            if output:
+                print(output)
+            if not success:
+                return None, f"Error: Validation failed for {input_dir}"
+
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_content_dir = Path(temp_dir) / "content"
+        shutil.copytree(input_dir, temp_content_dir)
+
+        for pattern in ["*.xml", "*.rels"]:
+            for xml_file in temp_content_dir.rglob(pattern):
+                _condense_xml(xml_file)
+
+        output_path.parent.mkdir(parents=True, exist_ok=True)
+        with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
+            for f in temp_content_dir.rglob("*"):
+                if f.is_file():
+                    zf.write(f, f.relative_to(temp_content_dir))
+
+    return None, f"Successfully packed {input_dir} to {output_file}"
+
+
+def _run_validation(
+    unpacked_dir: Path,
+    original_file: Path,
+    suffix: str,
+    infer_author_func=None,
+) -> tuple[bool, str | None]:
+    output_lines = []
+    validators = []
+
+    if suffix == ".docx":
+        author = "Claude"
+        if infer_author_func:
+            try:
+                author = infer_author_func(unpacked_dir, original_file)
+            except ValueError as e:
+                print(f"Warning: {e} Using default author 'Claude'.", file=sys.stderr)
+
+        validators = [
+            DOCXSchemaValidator(unpacked_dir, original_file),
+            RedliningValidator(unpacked_dir, original_file, author=author),
+        ]
+    elif suffix == ".pptx":
+        validators = [PPTXSchemaValidator(unpacked_dir, original_file)]
+
+    if not validators:
+        return True, None
+
+    total_repairs = sum(v.repair() for v in validators)
+    if total_repairs:
+        output_lines.append(f"Auto-repaired {total_repairs} issue(s)")
+
+    success = all(v.validate() for v in validators)
+
+    if success:
+        output_lines.append("All validations PASSED!")
+
+    return success, "\n".join(output_lines) if output_lines else None
+
+
+def _condense_xml(xml_file: Path) -> None:
+    try:
+        with open(xml_file, encoding="utf-8") as f:
+            dom = defusedxml.minidom.parse(f)
+
+        for element in dom.getElementsByTagName("*"):
+            if element.tagName.endswith(":t"):
+                continue
+
+            for child in list(element.childNodes):
+                if (
+                    child.nodeType == child.TEXT_NODE
+                    and child.nodeValue
+                    and child.nodeValue.strip() == ""
+                ) or child.nodeType == child.COMMENT_NODE:
+                    element.removeChild(child)
+
+        xml_file.write_bytes(dom.toxml(encoding="UTF-8"))
+    except Exception as e:
+        print(f"ERROR: Failed to parse {xml_file.name}: {e}", file=sys.stderr)
+        raise
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser(
+        description="Pack a directory into a DOCX, PPTX, or XLSX file"
+    )
+    parser.add_argument("input_directory", help="Unpacked Office document directory")
+    parser.add_argument("output_file", help="Output Office file (.docx/.pptx/.xlsx)")
+    parser.add_argument(
+        "--original",
+        help="Original file for validation comparison",
+    )
+    parser.add_argument(
+        "--validate",
+        type=lambda x: x.lower() == "true",
+        default=True,
+        metavar="true|false",
+        help="Run validation with auto-repair (default: true)",
+    )
+    args = parser.parse_args()
+
+    _, message = pack(
+        args.input_directory,
+        args.output_file,
+        original_file=args.original,
+        validate=args.validate,
+    )
+    print(message)
+
+    if "Error" in message:
+        sys.exit(1)
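The `_condense_xml` step above strips pure-whitespace text nodes and comments between elements while leaving text runs (`...:t` elements) untouched, so pretty-printed indentation is removed without corrupting document text. The core idea can be sketched standalone with the stdlib `xml.dom.minidom` (an assumption here: `defusedxml.minidom` mirrors the stdlib API, which it is designed to do):

```python
from xml.dom import minidom

def condense(xml_text: str) -> str:
    dom = minidom.parseString(xml_text)
    for element in dom.getElementsByTagName("*"):
        if element.tagName.endswith(":t"):
            continue  # preserve literal whitespace inside text runs
        for child in list(element.childNodes):
            is_ws_text = (
                child.nodeType == child.TEXT_NODE
                and child.nodeValue
                and child.nodeValue.strip() == ""
            )
            if is_ws_text or child.nodeType == child.COMMENT_NODE:
                element.removeChild(child)
    return dom.toxml()

src = "<w:p xmlns:w='urn:example'>\n  <w:r>\n    <w:t> hello </w:t>\n  </w:r>\n</w:p>"
print(condense(src))
```

The indentation between `w:p`, `w:r`, and `w:t` is dropped, but the padded `" hello "` inside the `w:t` run survives intact.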
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chart.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chart.xsd
new file mode 100644
index 0000000..6454ef9
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chart.xsd
@@ -0,0 +1,1499 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd
new file mode 100644
index 0000000..afa4f46
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-chartDrawing.xsd
@@ -0,0 +1,146 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd
new file mode 100644
index 0000000..64e66b8
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-diagram.xsd
@@ -0,0 +1,1085 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd
new file mode 100644
index 0000000..687eea8
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-lockedCanvas.xsd
@@ -0,0 +1,11 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-main.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-main.xsd
new file mode 100644
index 0000000..6ac81b0
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-main.xsd
@@ -0,0 +1,3081 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-picture.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-picture.xsd
new file mode 100644
index 0000000..1dbf051
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-picture.xsd
@@ -0,0 +1,23 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd
new file mode 100644
index 0000000..f1af17d
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-spreadsheetDrawing.xsd
@@ -0,0 +1,185 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd
new file mode 100644
index 0000000..0a185ab
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/dml-wordprocessingDrawing.xsd
@@ -0,0 +1,287 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/pml.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/pml.xsd
new file mode 100644
index 0000000..14ef488
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/pml.xsd
@@ -0,0 +1,1676 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd
new file mode 100644
index 0000000..c20f3bf
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-additionalCharacteristics.xsd
@@ -0,0 +1,28 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd
new file mode 100644
index 0000000..ac60252
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-bibliography.xsd
@@ -0,0 +1,144 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd
new file mode 100644
index 0000000..424b8ba
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-commonSimpleTypes.xsd
@@ -0,0 +1,174 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd
new file mode 100644
index 0000000..2bddce2
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlDataProperties.xsd
@@ -0,0 +1,25 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd
new file mode 100644
index 0000000..8a8c18b
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-customXmlSchemaProperties.xsd
@@ -0,0 +1,18 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd
new file mode 100644
index 0000000..5c42706
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd
@@ -0,0 +1,59 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd
new file mode 100644
index 0000000..853c341
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd
@@ -0,0 +1,56 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd
new file mode 100644
index 0000000..da835ee
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-documentPropertiesVariantTypes.xsd
@@ -0,0 +1,195 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-math.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-math.xsd
new file mode 100644
index 0000000..87ad265
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-math.xsd
@@ -0,0 +1,582 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd
new file mode 100644
index 0000000..9e86f1b
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/shared-relationshipReference.xsd
@@ -0,0 +1,25 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/sml.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/sml.xsd
new file mode 100644
index 0000000..d0be42e
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/sml.xsd
@@ -0,0 +1,4439 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-main.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-main.xsd
new file mode 100644
index 0000000..8821dd1
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-main.xsd
@@ -0,0 +1,570 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd
new file mode 100644
index 0000000..ca2575c
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-officeDrawing.xsd
@@ -0,0 +1,509 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd
new file mode 100644
index 0000000..dd079e6
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-presentationDrawing.xsd
@@ -0,0 +1,12 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd
new file mode 100644
index 0000000..3dd6cf6
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-spreadsheetDrawing.xsd
@@ -0,0 +1,108 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd
new file mode 100644
index 0000000..f1041e3
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/vml-wordprocessingDrawing.xsd
@@ -0,0 +1,96 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/wml.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/wml.xsd
new file mode 100644
index 0000000..9c5b7a6
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/wml.xsd
@@ -0,0 +1,3646 @@
diff --git a/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/xml.xsd b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/xml.xsd
new file mode 100644
index 0000000..0f13678
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ISO-IEC29500-4_2016/xml.xsd
@@ -0,0 +1,116 @@
+
+
+
+
+
+ See http://www.w3.org/XML/1998/namespace.html and
+ http://www.w3.org/TR/REC-xml for information about this namespace.
+
+ This schema document describes the XML namespace, in a form
+ suitable for import by other schema documents.
+
+ Note that local names in this namespace are intended to be defined
+ only by the World Wide Web Consortium or its subgroups. The
+ following names are currently defined in this namespace and should
+ not be used with conflicting semantics by any Working Group,
+ specification, or document instance:
+
+ base (as an attribute name): denotes an attribute whose value
+ provides a URI to be used as the base for interpreting any
+ relative URIs in the scope of the element on which it
+ appears; its value is inherited. This name is reserved
+ by virtue of its definition in the XML Base specification.
+
+ lang (as an attribute name): denotes an attribute whose value
+ is a language code for the natural language of the content of
+ any element; its value is inherited. This name is reserved
+ by virtue of its definition in the XML specification.
+
+ space (as an attribute name): denotes an attribute whose
+ value is a keyword indicating what whitespace processing
+ discipline is intended for the content of the element; its
+ value is inherited. This name is reserved by virtue of its
+ definition in the XML specification.
+
+ Father (in any context at all): denotes Jon Bosak, the chair of
+ the original XML Working Group. This name is reserved by
+ the following decision of the W3C XML Plenary and
+ XML Coordination groups:
+
+ In appreciation for his vision, leadership and dedication
+ the W3C XML Plenary on this 10th day of February, 2000
+ reserves for Jon Bosak in perpetuity the XML name
+ xml:Father
+
+
+
+
+ This schema defines attributes and an attribute group
+ suitable for use by
+ schemas wishing to allow xml:base, xml:lang or xml:space attributes
+ on elements they define.
+
+ To enable this, such a schema must import this schema
+ for the XML namespace, e.g. as follows:
+ <schema . . .>
+ . . .
+ <import namespace="http://www.w3.org/XML/1998/namespace"
+ schemaLocation="http://www.w3.org/2001/03/xml.xsd"/>
+
+ Subsequently, qualified reference to any of the attributes
+ or the group defined below will have the desired effect, e.g.
+
+ <type . . .>
+ . . .
+ <attributeGroup ref="xml:specialAttrs"/>
+
+ will define a type which will schema-validate an instance
+ element with any of those attributes
+
+
+
+ In keeping with the XML Schema WG's standard versioning
+ policy, this schema document will persist at
+ http://www.w3.org/2001/03/xml.xsd.
+ At the date of issue it can also be found at
+ http://www.w3.org/2001/xml.xsd.
+ The schema document at that URI may however change in the future,
+ in order to remain compatible with the latest version of XML Schema
+ itself. In other words, if the XML Schema namespace changes, the version
+ of this document at
+ http://www.w3.org/2001/xml.xsd will change
+ accordingly; the version at
+ http://www.w3.org/2001/03/xml.xsd will not change.
+
+
+
+
+
+ In due course, we should install the relevant ISO 2- and 3-letter
+ codes as the enumerated possible values . . .
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ See http://www.w3.org/TR/xmlbase/ for
+ information about this attribute.
+
+
+
+
+
+
+
+
+
+
diff --git a/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-contentTypes.xsd b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-contentTypes.xsd
new file mode 100644
index 0000000..a6de9d2
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-contentTypes.xsd
@@ -0,0 +1,42 @@
diff --git a/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-coreProperties.xsd b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-coreProperties.xsd
new file mode 100644
index 0000000..10e978b
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-coreProperties.xsd
@@ -0,0 +1,50 @@
diff --git a/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-digSig.xsd b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-digSig.xsd
new file mode 100644
index 0000000..4248bf7
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-digSig.xsd
@@ -0,0 +1,49 @@
diff --git a/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-relationships.xsd b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-relationships.xsd
new file mode 100644
index 0000000..5649746
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/ecma/fouth-edition/opc-relationships.xsd
@@ -0,0 +1,33 @@
diff --git a/skills/docx/scripts/office/schemas/mce/mc.xsd b/skills/docx/scripts/office/schemas/mce/mc.xsd
new file mode 100644
index 0000000..ef72545
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/mce/mc.xsd
@@ -0,0 +1,75 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-2010.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-2010.xsd
new file mode 100644
index 0000000..f65f777
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-2010.xsd
@@ -0,0 +1,560 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-2012.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-2012.xsd
new file mode 100644
index 0000000..6b00755
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-2012.xsd
@@ -0,0 +1,67 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-2018.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-2018.xsd
new file mode 100644
index 0000000..f321d33
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-2018.xsd
@@ -0,0 +1,14 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-cex-2018.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-cex-2018.xsd
new file mode 100644
index 0000000..364c6a9
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-cex-2018.xsd
@@ -0,0 +1,20 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-cid-2016.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-cid-2016.xsd
new file mode 100644
index 0000000..fed9d15
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-cid-2016.xsd
@@ -0,0 +1,13 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-sdtdatahash-2020.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-sdtdatahash-2020.xsd
new file mode 100644
index 0000000..680cf15
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-sdtdatahash-2020.xsd
@@ -0,0 +1,4 @@
diff --git a/skills/docx/scripts/office/schemas/microsoft/wml-symex-2015.xsd b/skills/docx/scripts/office/schemas/microsoft/wml-symex-2015.xsd
new file mode 100644
index 0000000..89ada90
--- /dev/null
+++ b/skills/docx/scripts/office/schemas/microsoft/wml-symex-2015.xsd
@@ -0,0 +1,8 @@
diff --git a/skills/docx/scripts/office/soffice.py b/skills/docx/scripts/office/soffice.py
new file mode 100644
index 0000000..c7f7e32
--- /dev/null
+++ b/skills/docx/scripts/office/soffice.py
@@ -0,0 +1,183 @@
+"""
+Helper for running LibreOffice (soffice) in environments where AF_UNIX
+sockets may be blocked (e.g., sandboxed VMs). Detects the restriction
+at runtime and applies an LD_PRELOAD shim if needed.
+
+Usage:
+ from office.soffice import run_soffice, get_soffice_env
+
+ # Option 1 – run soffice directly
+ result = run_soffice(["--headless", "--convert-to", "pdf", "input.docx"])
+
+ # Option 2 – get env dict for your own subprocess calls
+ env = get_soffice_env()
+ subprocess.run(["soffice", ...], env=env)
+"""
+
+import os
+import socket
+import subprocess
+import tempfile
+from pathlib import Path
+
+
+def get_soffice_env() -> dict:
+ env = os.environ.copy()
+ env["SAL_USE_VCLPLUGIN"] = "svp"
+
+ if _needs_shim():
+ shim = _ensure_shim()
+ env["LD_PRELOAD"] = str(shim)
+
+ return env
+
+
+def run_soffice(args: list[str], **kwargs) -> subprocess.CompletedProcess:
+ env = get_soffice_env()
+ return subprocess.run(["soffice"] + args, env=env, **kwargs)
+
+
+
+_SHIM_SO = Path(tempfile.gettempdir()) / "lo_socket_shim.so"
+
+
+def _needs_shim() -> bool:
+ try:
+ s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+ s.close()
+ return False
+ except OSError:
+ return True
+
+
+def _ensure_shim() -> Path:
+ if _SHIM_SO.exists():
+ return _SHIM_SO
+
+ src = Path(tempfile.gettempdir()) / "lo_socket_shim.c"
+ src.write_text(_SHIM_SOURCE)
+ subprocess.run(
+ ["gcc", "-shared", "-fPIC", "-o", str(_SHIM_SO), str(src), "-ldl"],
+ check=True,
+ capture_output=True,
+ )
+ src.unlink()
+ return _SHIM_SO
+
+
+
+_SHIM_SOURCE = r"""
+#define _GNU_SOURCE
+#include <dlfcn.h>
+#include <errno.h>
+#include <stddef.h>
+#include <stdlib.h>
+#include <sys/socket.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+static int (*real_socket)(int, int, int);
+static int (*real_socketpair)(int, int, int, int[2]);
+static int (*real_listen)(int, int);
+static int (*real_accept)(int, struct sockaddr *, socklen_t *);
+static int (*real_close)(int);
+static ssize_t (*real_read)(int, void *, size_t);
+
+/* Per-FD bookkeeping (FDs >= 1024 are passed through unshimmed). */
+static int is_shimmed[1024];
+static int peer_of[1024];
+static int wake_r[1024]; /* accept() blocks reading this */
+static int wake_w[1024]; /* close() writes to this */
+static int listener_fd = -1; /* FD that received listen() */
+
+__attribute__((constructor))
+static void init(void) {
+ real_socket = dlsym(RTLD_NEXT, "socket");
+ real_socketpair = dlsym(RTLD_NEXT, "socketpair");
+ real_listen = dlsym(RTLD_NEXT, "listen");
+ real_accept = dlsym(RTLD_NEXT, "accept");
+ real_close = dlsym(RTLD_NEXT, "close");
+ real_read = dlsym(RTLD_NEXT, "read");
+ for (int i = 0; i < 1024; i++) {
+ peer_of[i] = -1;
+ wake_r[i] = -1;
+ wake_w[i] = -1;
+ }
+}
+
+/* ---- socket ---------------------------------------------------------- */
+int socket(int domain, int type, int protocol) {
+ if (domain == AF_UNIX) {
+ int fd = real_socket(domain, type, protocol);
+ if (fd >= 0) return fd;
+ /* socket(AF_UNIX) blocked – fall back to socketpair(). */
+ int sv[2];
+ if (real_socketpair(domain, type, protocol, sv) == 0) {
+ if (sv[0] >= 0 && sv[0] < 1024) {
+ is_shimmed[sv[0]] = 1;
+ peer_of[sv[0]] = sv[1];
+ int wp[2];
+ if (pipe(wp) == 0) {
+ wake_r[sv[0]] = wp[0];
+ wake_w[sv[0]] = wp[1];
+ }
+ }
+ return sv[0];
+ }
+ errno = EPERM;
+ return -1;
+ }
+ return real_socket(domain, type, protocol);
+}
+
+/* ---- listen ---------------------------------------------------------- */
+int listen(int sockfd, int backlog) {
+ if (sockfd >= 0 && sockfd < 1024 && is_shimmed[sockfd]) {
+ listener_fd = sockfd;
+ return 0;
+ }
+ return real_listen(sockfd, backlog);
+}
+
+/* ---- accept ---------------------------------------------------------- */
+int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen) {
+ if (sockfd >= 0 && sockfd < 1024 && is_shimmed[sockfd]) {
+ /* Block until close() writes to the wake pipe. */
+ if (wake_r[sockfd] >= 0) {
+ char buf;
+ real_read(wake_r[sockfd], &buf, 1);
+ }
+ errno = ECONNABORTED;
+ return -1;
+ }
+ return real_accept(sockfd, addr, addrlen);
+}
+
+/* ---- close ----------------------------------------------------------- */
+int close(int fd) {
+ if (fd >= 0 && fd < 1024 && is_shimmed[fd]) {
+ int was_listener = (fd == listener_fd);
+ is_shimmed[fd] = 0;
+
+ if (wake_w[fd] >= 0) { /* unblock accept() */
+ char c = 0;
+ write(wake_w[fd], &c, 1);
+ real_close(wake_w[fd]);
+ wake_w[fd] = -1;
+ }
+ if (wake_r[fd] >= 0) { real_close(wake_r[fd]); wake_r[fd] = -1; }
+ if (peer_of[fd] >= 0) { real_close(peer_of[fd]); peer_of[fd] = -1; }
+
+ if (was_listener)
+ _exit(0); /* conversion done – exit */
+ }
+ return real_close(fd);
+}
+"""
+
+
+
+if __name__ == "__main__":
+ import sys
+ result = run_soffice(sys.argv[1:])
+ sys.exit(result.returncode)
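The runtime detection that `get_soffice_env()` relies on can be exercised on its own. Below is a minimal standalone sketch (not part of the diff; the function name is illustrative) of the same AF_UNIX probe `_needs_shim()` performs: try to create a Unix-domain socket, and treat an `OSError` as the signal that the sandbox blocks them and the `LD_PRELOAD` shim is required.

```python
import socket

def needs_socket_shim() -> bool:
    """Return True when AF_UNIX socket creation is blocked (e.g. a sandboxed VM)."""
    try:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.close()
        return False  # sockets work; plain soffice is fine
    except OSError:
        return True   # creation refused; apply the LD_PRELOAD shim

if __name__ == "__main__":
    print(needs_socket_shim())
```

On an unrestricted Linux host this prints `False`; inside a sandbox that denies AF_UNIX it prints `True`, which is exactly the branch where `get_soffice_env()` compiles and preloads the shim.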
diff --git a/skills/docx/scripts/office/unpack.py b/skills/docx/scripts/office/unpack.py
new file mode 100755
index 0000000..0015253
--- /dev/null
+++ b/skills/docx/scripts/office/unpack.py
@@ -0,0 +1,132 @@
+"""Unpack Office files (DOCX, PPTX, XLSX) for editing.
+
+Extracts the ZIP archive, pretty-prints XML files, and optionally:
+- Merges adjacent runs with identical formatting (DOCX only)
+- Simplifies adjacent tracked changes from same author (DOCX only)
+
+Usage:
+ python unpack.py <input_file> <output_directory> [options]
+
+Examples:
+ python unpack.py document.docx unpacked/
+ python unpack.py presentation.pptx unpacked/
+ python unpack.py document.docx unpacked/ --merge-runs false
+"""
+
+import argparse
+import sys
+import zipfile
+from pathlib import Path
+
+import defusedxml.minidom
+
+from helpers.merge_runs import merge_runs as do_merge_runs
+from helpers.simplify_redlines import simplify_redlines as do_simplify_redlines
+
+SMART_QUOTE_REPLACEMENTS = {
+ "\u201c": "&#8220;", # left double quotation mark
+ "\u201d": "&#8221;", # right double quotation mark
+ "\u2018": "&#8216;", # left single quotation mark
+ "\u2019": "&#8217;", # right single quotation mark
+}
+
+
+def unpack(
+ input_file: str,
+ output_directory: str,
+ merge_runs: bool = True,
+ simplify_redlines: bool = True,
+) -> tuple[None, str]:
+ input_path = Path(input_file)
+ output_path = Path(output_directory)
+ suffix = input_path.suffix.lower()
+
+ if not input_path.exists():
+ return None, f"Error: {input_file} does not exist"
+
+ if suffix not in {".docx", ".pptx", ".xlsx"}:
+ return None, f"Error: {input_file} must be a .docx, .pptx, or .xlsx file"
+
+ try:
+ output_path.mkdir(parents=True, exist_ok=True)
+
+ with zipfile.ZipFile(input_path, "r") as zf:
+ zf.extractall(output_path)
+
+ xml_files = list(output_path.rglob("*.xml")) + list(output_path.rglob("*.rels"))
+ for xml_file in xml_files:
+ _pretty_print_xml(xml_file)
+
+ message = f"Unpacked {input_file} ({len(xml_files)} XML files)"
+
+ if suffix == ".docx":
+ if simplify_redlines:
+ simplify_count, _ = do_simplify_redlines(str(output_path))
+ message += f", simplified {simplify_count} tracked changes"
+
+ if merge_runs:
+ merge_count, _ = do_merge_runs(str(output_path))
+ message += f", merged {merge_count} runs"
+
+ for xml_file in xml_files:
+ _escape_smart_quotes(xml_file)
+
+ return None, message
+
+ except zipfile.BadZipFile:
+ return None, f"Error: {input_file} is not a valid Office file"
+ except Exception as e:
+ return None, f"Error unpacking: {e}"
+
+
+def _pretty_print_xml(xml_file: Path) -> None:
+ try:
+ content = xml_file.read_text(encoding="utf-8")
+ dom = defusedxml.minidom.parseString(content)
+ xml_file.write_bytes(dom.toprettyxml(indent=" ", encoding="utf-8"))
+ except Exception:
+ pass
+
+
+def _escape_smart_quotes(xml_file: Path) -> None:
+ try:
+ content = xml_file.read_text(encoding="utf-8")
+ for char, entity in SMART_QUOTE_REPLACEMENTS.items():
+ content = content.replace(char, entity)
+ xml_file.write_text(content, encoding="utf-8")
+ except Exception:
+ pass
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(
+ description="Unpack an Office file (DOCX, PPTX, XLSX) for editing"
+ )
+ parser.add_argument("input_file", help="Office file to unpack")
+ parser.add_argument("output_directory", help="Output directory")
+ parser.add_argument(
+ "--merge-runs",
+ type=lambda x: x.lower() == "true",
+ default=True,
+ metavar="true|false",
+ help="Merge adjacent runs with identical formatting (DOCX only, default: true)",
+ )
+ parser.add_argument(
+ "--simplify-redlines",
+ type=lambda x: x.lower() == "true",
+ default=True,
+ metavar="true|false",
+ help="Merge adjacent tracked changes from same author (DOCX only, default: true)",
+ )
+ args = parser.parse_args()
+
+ _, message = unpack(
+ args.input_file,
+ args.output_directory,
+ merge_runs=args.merge_runs,
+ simplify_redlines=args.simplify_redlines,
+ )
+ print(message)
+
+ if "Error" in message:
+ sys.exit(1)
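The `--merge-runs`/`--simplify-redlines` flags above parse `true`/`false` strings through a lambda instead of using `store_true`, so an option that defaults to on can still be switched off on the command line. A standalone check of that idiom:

```python
import argparse

parser = argparse.ArgumentParser()
# Same pattern as unpack.py: any string other than "true" (case-insensitive)
# parses as False, while omitting the flag keeps the default of True.
parser.add_argument(
    "--merge-runs",
    type=lambda x: x.lower() == "true",
    default=True,
    metavar="true|false",
)

assert parser.parse_args([]).merge_runs is True
assert parser.parse_args(["--merge-runs", "false"]).merge_runs is False
assert parser.parse_args(["--merge-runs", "TRUE"]).merge_runs is True
```

A plain `store_true` action cannot express "on by default, explicitly disableable" with a single flag, which is why the string-valued form is used here.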
diff --git a/skills/docx/scripts/office/validate.py b/skills/docx/scripts/office/validate.py
new file mode 100755
index 0000000..03b01f6
--- /dev/null
+++ b/skills/docx/scripts/office/validate.py
@@ -0,0 +1,111 @@
+"""
+Command-line tool to validate Office document XML files against XSD schemas and check tracked changes.
+
+Usage:
+ python validate.py <path> [--original FILE] [--auto-repair] [--author NAME]
+
+The first argument can be either:
+- An unpacked directory containing the Office document XML files
+- A packed Office file (.docx/.pptx/.xlsx) which will be unpacked to a temp directory
+
+Auto-repair fixes:
+- paraId/durableId values that exceed OOXML limits
+- Missing xml:space="preserve" on w:t elements with whitespace
+"""
+
+import argparse
+import sys
+import tempfile
+import zipfile
+from pathlib import Path
+
+from validators import DOCXSchemaValidator, PPTXSchemaValidator, RedliningValidator
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Validate Office document XML files")
+ parser.add_argument(
+ "path",
+ help="Path to unpacked directory or packed Office file (.docx/.pptx/.xlsx)",
+ )
+ parser.add_argument(
+ "--original",
+ required=False,
+ default=None,
+ help="Path to original file (.docx/.pptx/.xlsx). If omitted, all XSD errors are reported and redlining validation is skipped.",
+ )
+ parser.add_argument(
+ "-v",
+ "--verbose",
+ action="store_true",
+ help="Enable verbose output",
+ )
+ parser.add_argument(
+ "--auto-repair",
+ action="store_true",
+ help="Automatically repair common issues (hex IDs, whitespace preservation)",
+ )
+ parser.add_argument(
+ "--author",
+ default="Claude",
+ help="Author name for redlining validation (default: Claude)",
+ )
+ args = parser.parse_args()
+
+ path = Path(args.path)
+ assert path.exists(), f"Error: {path} does not exist"
+
+ original_file = None
+ if args.original:
+ original_file = Path(args.original)
+ assert original_file.is_file(), f"Error: {original_file} is not a file"
+ assert original_file.suffix.lower() in [".docx", ".pptx", ".xlsx"], (
+ f"Error: {original_file} must be a .docx, .pptx, or .xlsx file"
+ )
+
+ file_extension = (original_file or path).suffix.lower()
+ assert file_extension in [".docx", ".pptx", ".xlsx"], (
+ f"Error: Cannot determine file type from {path}. Use --original or provide a .docx/.pptx/.xlsx file."
+ )
+
+ if path.is_file() and path.suffix.lower() in [".docx", ".pptx", ".xlsx"]:
+ temp_dir = tempfile.mkdtemp()
+ with zipfile.ZipFile(path, "r") as zf:
+ zf.extractall(temp_dir)
+ unpacked_dir = Path(temp_dir)
+ else:
+ assert path.is_dir(), f"Error: {path} is not a directory or Office file"
+ unpacked_dir = path
+
+ match file_extension:
+ case ".docx":
+ validators = [
+ DOCXSchemaValidator(unpacked_dir, original_file, verbose=args.verbose),
+ ]
+ if original_file:
+ validators.append(
+ RedliningValidator(unpacked_dir, original_file, verbose=args.verbose, author=args.author)
+ )
+ case ".pptx":
+ validators = [
+ PPTXSchemaValidator(unpacked_dir, original_file, verbose=args.verbose),
+ ]
+ case _:
+ print(f"Error: Validation not supported for file type {file_extension}")
+ sys.exit(1)
+
+ if args.auto_repair:
+ total_repairs = sum(v.repair() for v in validators)
+ if total_repairs:
+ print(f"Auto-repaired {total_repairs} issue(s)")
+
+ success = all(v.validate() for v in validators)
+
+ if success:
+ print("All validations PASSED!")
+
+ sys.exit(0 if success else 1)
+
+
+if __name__ == "__main__":
+ main()
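The packed-file branch in `main()` works because an Office file is just a ZIP package: validating a `.docx` directly only requires extracting it to a temporary directory before the directory-based validators take over. A self-contained sketch of that step (file names here are illustrative):

```python
import tempfile
import zipfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Build a minimal stand-in for a packed Office file.
    packed = Path(tmp) / "sample.docx"
    with zipfile.ZipFile(packed, "w") as zf:
        zf.writestr("word/document.xml", "<w:document/>")
        zf.writestr("[Content_Types].xml", "<Types/>")

    # The same extraction validate.py performs for packed inputs.
    unpacked = Path(tmp) / "unpacked"
    with zipfile.ZipFile(packed, "r") as zf:
        zf.extractall(unpacked)

    extracted = {
        p.relative_to(unpacked).as_posix()
        for p in unpacked.rglob("*")
        if p.is_file()
    }
```

After extraction, the validators see exactly the same layout whether the user passed a packed file or an already-unpacked directory.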
diff --git a/skills/docx/scripts/office/validators/__init__.py b/skills/docx/scripts/office/validators/__init__.py
new file mode 100644
index 0000000..db092ec
--- /dev/null
+++ b/skills/docx/scripts/office/validators/__init__.py
@@ -0,0 +1,15 @@
+"""
+Validation modules for Office document (DOCX, PPTX) processing.
+"""
+
+from .base import BaseSchemaValidator
+from .docx import DOCXSchemaValidator
+from .pptx import PPTXSchemaValidator
+from .redlining import RedliningValidator
+
+__all__ = [
+ "BaseSchemaValidator",
+ "DOCXSchemaValidator",
+ "PPTXSchemaValidator",
+ "RedliningValidator",
+]
diff --git a/skills/docx/scripts/office/validators/base.py b/skills/docx/scripts/office/validators/base.py
new file mode 100644
index 0000000..db4a06a
--- /dev/null
+++ b/skills/docx/scripts/office/validators/base.py
@@ -0,0 +1,847 @@
+"""
+Base validator with common validation logic for document files.
+"""
+
+import re
+from pathlib import Path
+
+import defusedxml.minidom
+import lxml.etree
+
+
+class BaseSchemaValidator:
+
+ IGNORED_VALIDATION_ERRORS = [
+ "hyphenationZone",
+ "purl.org/dc/terms",
+ ]
+
+ UNIQUE_ID_REQUIREMENTS = {
+ "comment": ("id", "file"),
+ "commentrangestart": ("id", "file"),
+ "commentrangeend": ("id", "file"),
+ "bookmarkstart": ("id", "file"),
+ "bookmarkend": ("id", "file"),
+ "sldid": ("id", "file"),
+ "sldmasterid": ("id", "global"),
+ "sldlayoutid": ("id", "global"),
+ "cm": ("authorid", "file"),
+ "sheet": ("sheetid", "file"),
+ "definedname": ("id", "file"),
+ "cxnsp": ("id", "file"),
+ "sp": ("id", "file"),
+ "pic": ("id", "file"),
+ "grpsp": ("id", "file"),
+ }
+
+ EXCLUDED_ID_CONTAINERS = {
+ "sectionlst",
+ }
+
+ ELEMENT_RELATIONSHIP_TYPES = {}
+
+ SCHEMA_MAPPINGS = {
+ "word": "ISO-IEC29500-4_2016/wml.xsd",
+ "ppt": "ISO-IEC29500-4_2016/pml.xsd",
+ "xl": "ISO-IEC29500-4_2016/sml.xsd",
+ "[Content_Types].xml": "ecma/fouth-edition/opc-contentTypes.xsd",
+ "app.xml": "ISO-IEC29500-4_2016/shared-documentPropertiesExtended.xsd",
+ "core.xml": "ecma/fouth-edition/opc-coreProperties.xsd",
+ "custom.xml": "ISO-IEC29500-4_2016/shared-documentPropertiesCustom.xsd",
+ ".rels": "ecma/fouth-edition/opc-relationships.xsd",
+ "people.xml": "microsoft/wml-2012.xsd",
+ "commentsIds.xml": "microsoft/wml-cid-2016.xsd",
+ "commentsExtensible.xml": "microsoft/wml-cex-2018.xsd",
+ "commentsExtended.xml": "microsoft/wml-2012.xsd",
+ "chart": "ISO-IEC29500-4_2016/dml-chart.xsd",
+ "theme": "ISO-IEC29500-4_2016/dml-main.xsd",
+ "drawing": "ISO-IEC29500-4_2016/dml-main.xsd",
+ }
+
+ MC_NAMESPACE = "http://schemas.openxmlformats.org/markup-compatibility/2006"
+ XML_NAMESPACE = "http://www.w3.org/XML/1998/namespace"
+
+ PACKAGE_RELATIONSHIPS_NAMESPACE = (
+ "http://schemas.openxmlformats.org/package/2006/relationships"
+ )
+ OFFICE_RELATIONSHIPS_NAMESPACE = (
+ "http://schemas.openxmlformats.org/officeDocument/2006/relationships"
+ )
+ CONTENT_TYPES_NAMESPACE = (
+ "http://schemas.openxmlformats.org/package/2006/content-types"
+ )
+
+ MAIN_CONTENT_FOLDERS = {"word", "ppt", "xl"}
+
+ OOXML_NAMESPACES = {
+ "http://schemas.openxmlformats.org/officeDocument/2006/math",
+ "http://schemas.openxmlformats.org/officeDocument/2006/relationships",
+ "http://schemas.openxmlformats.org/schemaLibrary/2006/main",
+ "http://schemas.openxmlformats.org/drawingml/2006/main",
+ "http://schemas.openxmlformats.org/drawingml/2006/chart",
+ "http://schemas.openxmlformats.org/drawingml/2006/chartDrawing",
+ "http://schemas.openxmlformats.org/drawingml/2006/diagram",
+ "http://schemas.openxmlformats.org/drawingml/2006/picture",
+ "http://schemas.openxmlformats.org/drawingml/2006/spreadsheetDrawing",
+ "http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing",
+ "http://schemas.openxmlformats.org/wordprocessingml/2006/main",
+ "http://schemas.openxmlformats.org/presentationml/2006/main",
+ "http://schemas.openxmlformats.org/spreadsheetml/2006/main",
+ "http://schemas.openxmlformats.org/officeDocument/2006/sharedTypes",
+ "http://www.w3.org/XML/1998/namespace",
+ }
+
+ def __init__(self, unpacked_dir, original_file=None, verbose=False):
+ self.unpacked_dir = Path(unpacked_dir).resolve()
+ self.original_file = Path(original_file) if original_file else None
+ self.verbose = verbose
+
+ self.schemas_dir = Path(__file__).parent.parent / "schemas"
+
+ patterns = ["*.xml", "*.rels"]
+ self.xml_files = [
+ f for pattern in patterns for f in self.unpacked_dir.rglob(pattern)
+ ]
+
+ if not self.xml_files:
+ print(f"Warning: No XML files found in {self.unpacked_dir}")
+
+ def validate(self):
+ raise NotImplementedError("Subclasses must implement the validate method")
+
+ def repair(self) -> int:
+ return self.repair_whitespace_preservation()
+
+ def repair_whitespace_preservation(self) -> int:
+ repairs = 0
+
+ for xml_file in self.xml_files:
+ try:
+ content = xml_file.read_text(encoding="utf-8")
+ dom = defusedxml.minidom.parseString(content)
+ modified = False
+
+ for elem in dom.getElementsByTagName("*"):
+ if elem.tagName.endswith(":t") and elem.firstChild:
+ text = elem.firstChild.nodeValue
+ if text and (text.startswith((' ', '\t')) or text.endswith((' ', '\t'))):
+ if elem.getAttribute("xml:space") != "preserve":
+ elem.setAttribute("xml:space", "preserve")
+ text_preview = repr(text[:30]) + "..." if len(text) > 30 else repr(text)
+ print(f" Repaired: {xml_file.name}: Added xml:space='preserve' to {elem.tagName}: {text_preview}")
+ repairs += 1
+ modified = True
+
+ if modified:
+ xml_file.write_bytes(dom.toxml(encoding="UTF-8"))
+
+ except Exception:
+ pass
+
+ return repairs
+
+ def validate_xml(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ try:
+ lxml.etree.parse(str(xml_file))
+ except lxml.etree.XMLSyntaxError as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {e.lineno}: {e.msg}"
+ )
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Unexpected error: {str(e)}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} XML violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All XML files are well-formed")
+ return True
+
+ def validate_namespaces(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+ declared = set(root.nsmap.keys()) - {None}
+
+ for attr_val in [
+ v for k, v in root.attrib.items() if k.endswith("Ignorable")
+ ]:
+ undeclared = set(attr_val.split()) - declared
+ errors.extend(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Namespace '{ns}' in Ignorable but not declared"
+ for ns in undeclared
+ )
+ except lxml.etree.XMLSyntaxError:
+ continue
+
+ if errors:
+ print(f"FAILED - {len(errors)} namespace issues:")
+ for error in errors:
+ print(error)
+ return False
+ if self.verbose:
+ print("PASSED - All namespace prefixes properly declared")
+ return True
+
+ def validate_unique_ids(self):
+ errors = []
+ global_ids = {}
+
+ for xml_file in self.xml_files:
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+ file_ids = {}
+
+ mc_elements = root.xpath(
+ ".//mc:AlternateContent", namespaces={"mc": self.MC_NAMESPACE}
+ )
+ for elem in mc_elements:
+ elem.getparent().remove(elem)
+
+ for elem in root.iter():
+ tag = (
+ elem.tag.split("}")[-1].lower()
+ if "}" in elem.tag
+ else elem.tag.lower()
+ )
+
+ if tag in self.UNIQUE_ID_REQUIREMENTS:
+ in_excluded_container = any(
+ ancestor.tag.split("}")[-1].lower() in self.EXCLUDED_ID_CONTAINERS
+ for ancestor in elem.iterancestors()
+ )
+ if in_excluded_container:
+ continue
+
+ attr_name, scope = self.UNIQUE_ID_REQUIREMENTS[tag]
+
+ id_value = None
+ for attr, value in elem.attrib.items():
+ attr_local = (
+ attr.split("}")[-1].lower()
+ if "}" in attr
+ else attr.lower()
+ )
+ if attr_local == attr_name:
+ id_value = value
+ break
+
+ if id_value is not None:
+ if scope == "global":
+ if id_value in global_ids:
+ prev_file, prev_line, prev_tag = global_ids[
+ id_value
+ ]
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {elem.sourceline}: Global ID '{id_value}' in <{tag}> "
+ f"already used in {prev_file} at line {prev_line} in <{prev_tag}>"
+ )
+ else:
+ global_ids[id_value] = (
+ xml_file.relative_to(self.unpacked_dir),
+ elem.sourceline,
+ tag,
+ )
+ elif scope == "file":
+ key = (tag, attr_name)
+ if key not in file_ids:
+ file_ids[key] = {}
+
+ if id_value in file_ids[key]:
+ prev_line = file_ids[key][id_value]
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {elem.sourceline}: Duplicate {attr_name}='{id_value}' in <{tag}> "
+ f"(first occurrence at line {prev_line})"
+ )
+ else:
+ file_ids[key][id_value] = elem.sourceline
+
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} ID uniqueness violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All required IDs are unique")
+ return True
+
+ def validate_file_references(self):
+ errors = []
+
+ rels_files = list(self.unpacked_dir.rglob("*.rels"))
+
+ if not rels_files:
+ if self.verbose:
+ print("PASSED - No .rels files found")
+ return True
+
+ all_files = []
+ for file_path in self.unpacked_dir.rglob("*"):
+ if (
+ file_path.is_file()
+ and file_path.name != "[Content_Types].xml"
+ and not file_path.name.endswith(".rels")
+ ):
+ all_files.append(file_path.resolve())
+
+ all_referenced_files = set()
+
+ if self.verbose:
+ print(
+ f"Found {len(rels_files)} .rels files and {len(all_files)} target files"
+ )
+
+ for rels_file in rels_files:
+ try:
+ rels_root = lxml.etree.parse(str(rels_file)).getroot()
+
+ rels_dir = rels_file.parent
+
+ referenced_files = set()
+ broken_refs = []
+
+ for rel in rels_root.findall(
+ ".//ns:Relationship",
+ namespaces={"ns": self.PACKAGE_RELATIONSHIPS_NAMESPACE},
+ ):
+ target = rel.get("Target")
+ if target and not target.startswith(
+ ("http", "mailto:")
+ ):
+ if target.startswith("/"):
+ target_path = self.unpacked_dir / target.lstrip("/")
+ elif rels_file.name == ".rels":
+ target_path = self.unpacked_dir / target
+ else:
+ base_dir = rels_dir.parent
+ target_path = base_dir / target
+
+ try:
+ target_path = target_path.resolve()
+ if target_path.exists() and target_path.is_file():
+ referenced_files.add(target_path)
+ all_referenced_files.add(target_path)
+ else:
+ broken_refs.append((target, rel.sourceline))
+ except (OSError, ValueError):
+ broken_refs.append((target, rel.sourceline))
+
+ if broken_refs:
+ rel_path = rels_file.relative_to(self.unpacked_dir)
+ for broken_ref, line_num in broken_refs:
+ errors.append(
+ f" {rel_path}: Line {line_num}: Broken reference to {broken_ref}"
+ )
+
+ except Exception as e:
+ rel_path = rels_file.relative_to(self.unpacked_dir)
+ errors.append(f" Error parsing {rel_path}: {e}")
+
+ unreferenced_files = set(all_files) - all_referenced_files
+
+ if unreferenced_files:
+ for unref_file in sorted(unreferenced_files):
+ unref_rel_path = unref_file.relative_to(self.unpacked_dir)
+ errors.append(f" Unreferenced file: {unref_rel_path}")
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} relationship validation errors:")
+ for error in errors:
+ print(error)
+ print(
+ "CRITICAL: These errors will cause the document to appear corrupt. "
+ + "Broken references MUST be fixed, "
+ + "and unreferenced files MUST be referenced or removed."
+ )
+ return False
+ else:
+ if self.verbose:
+ print(
+ "PASSED - All references are valid and all files are properly referenced"
+ )
+ return True
+
+ def validate_all_relationship_ids(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ if xml_file.suffix == ".rels":
+ continue
+
+ rels_dir = xml_file.parent / "_rels"
+ rels_file = rels_dir / f"{xml_file.name}.rels"
+
+ if not rels_file.exists():
+ continue
+
+ try:
+ rels_root = lxml.etree.parse(str(rels_file)).getroot()
+ rid_to_type = {}
+
+ for rel in rels_root.findall(
+ f".//{{{self.PACKAGE_RELATIONSHIPS_NAMESPACE}}}Relationship"
+ ):
+ rid = rel.get("Id")
+ rel_type = rel.get("Type", "")
+ if rid:
+ if rid in rid_to_type:
+ rels_rel_path = rels_file.relative_to(self.unpacked_dir)
+ errors.append(
+ f" {rels_rel_path}: Line {rel.sourceline}: "
+ f"Duplicate relationship ID '{rid}' (IDs must be unique)"
+ )
+ type_name = (
+ rel_type.split("/")[-1] if "/" in rel_type else rel_type
+ )
+ rid_to_type[rid] = type_name
+
+ xml_root = lxml.etree.parse(str(xml_file)).getroot()
+
+ r_ns = self.OFFICE_RELATIONSHIPS_NAMESPACE
+ rid_attrs_to_check = ["id", "embed", "link"]
+ for elem in xml_root.iter():
+ for attr_name in rid_attrs_to_check:
+ rid_attr = elem.get(f"{{{r_ns}}}{attr_name}")
+ if not rid_attr:
+ continue
+ xml_rel_path = xml_file.relative_to(self.unpacked_dir)
+ elem_name = (
+ elem.tag.split("}")[-1] if "}" in elem.tag else elem.tag
+ )
+
+ if rid_attr not in rid_to_type:
+ errors.append(
+ f" {xml_rel_path}: Line {elem.sourceline}: "
+ f"<{elem_name}> r:{attr_name} references non-existent relationship '{rid_attr}' "
+ f"(valid IDs: {', '.join(sorted(rid_to_type.keys())[:5])}{'...' if len(rid_to_type) > 5 else ''})"
+ )
+ elif attr_name == "id" and self.ELEMENT_RELATIONSHIP_TYPES:
+ expected_type = self._get_expected_relationship_type(
+ elem_name
+ )
+ if expected_type:
+ actual_type = rid_to_type[rid_attr]
+ if expected_type not in actual_type.lower():
+ errors.append(
+ f" {xml_rel_path}: Line {elem.sourceline}: "
+ f"<{elem_name}> references '{rid_attr}' which points to '{actual_type}' "
+ f"but should point to a '{expected_type}' relationship"
+ )
+
+ except Exception as e:
+ xml_rel_path = xml_file.relative_to(self.unpacked_dir)
+ errors.append(f" Error processing {xml_rel_path}: {e}")
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} relationship ID reference errors:")
+ for error in errors:
+ print(error)
+ print("\nThese ID mismatches will cause the document to appear corrupt!")
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All relationship ID references are valid")
+ return True
+
+ def _get_expected_relationship_type(self, element_name):
+ elem_lower = element_name.lower()
+
+ if elem_lower in self.ELEMENT_RELATIONSHIP_TYPES:
+ return self.ELEMENT_RELATIONSHIP_TYPES[elem_lower]
+
+ if elem_lower.endswith("id") and len(elem_lower) > 2:
+ prefix = elem_lower[:-2]
+ return "slide" if prefix == "sld" else prefix
+
+ if elem_lower.endswith("reference") and len(elem_lower) > 9:
+ return elem_lower[:-9]
+
+ return None
+
+ def validate_content_types(self):
+ errors = []
+
+ content_types_file = self.unpacked_dir / "[Content_Types].xml"
+ if not content_types_file.exists():
+ print("FAILED - [Content_Types].xml file not found")
+ return False
+
+ try:
+ root = lxml.etree.parse(str(content_types_file)).getroot()
+ declared_parts = set()
+ declared_extensions = set()
+
+ for override in root.findall(
+ f".//{{{self.CONTENT_TYPES_NAMESPACE}}}Override"
+ ):
+ part_name = override.get("PartName")
+ if part_name is not None:
+ declared_parts.add(part_name.lstrip("/"))
+
+ for default in root.findall(
+ f".//{{{self.CONTENT_TYPES_NAMESPACE}}}Default"
+ ):
+ extension = default.get("Extension")
+ if extension is not None:
+ declared_extensions.add(extension.lower())
+
+ declarable_roots = {
+ "sld",
+ "sldLayout",
+ "sldMaster",
+ "presentation",
+ "document",
+ "workbook",
+ "worksheet",
+ "theme",
+ }
+
+ media_extensions = {
+ "png": "image/png",
+ "jpg": "image/jpeg",
+ "jpeg": "image/jpeg",
+ "gif": "image/gif",
+ "bmp": "image/bmp",
+ "tiff": "image/tiff",
+ "wmf": "image/x-wmf",
+ "emf": "image/x-emf",
+ }
+
+ all_files = list(self.unpacked_dir.rglob("*"))
+ all_files = [f for f in all_files if f.is_file()]
+
+ for xml_file in self.xml_files:
+ path_str = str(xml_file.relative_to(self.unpacked_dir)).replace(
+ "\\", "/"
+ )
+
+ if any(
+ skip in path_str
+ for skip in [".rels", "[Content_Types]", "docProps/", "_rels/"]
+ ):
+ continue
+
+ try:
+ root_tag = lxml.etree.parse(str(xml_file)).getroot().tag
+ root_name = root_tag.split("}")[-1] if "}" in root_tag else root_tag
+
+ if root_name in declarable_roots and path_str not in declared_parts:
+ errors.append(
+ f" {path_str}: File with <{root_name}> root not declared in [Content_Types].xml"
+ )
+
+ except Exception:
+ continue
+
+ for file_path in all_files:
+ if file_path.suffix.lower() in {".xml", ".rels"}:
+ continue
+ if file_path.name == "[Content_Types].xml":
+ continue
+ if "_rels" in file_path.parts or "docProps" in file_path.parts:
+ continue
+
+ extension = file_path.suffix.lstrip(".").lower()
+ if extension and extension not in declared_extensions:
+ if extension in media_extensions:
+ relative_path = file_path.relative_to(self.unpacked_dir)
+ errors.append(
+ f" {relative_path}: File with extension '{extension}' not declared in [Content_Types].xml"
+ f' - should add: <Default Extension="{extension}" ContentType="{media_extensions[extension]}"/>'
+ )
+
+ except Exception as e:
+ errors.append(f" Error parsing [Content_Types].xml: {e}")
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} content type declaration errors:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print(
+ "PASSED - All content files are properly declared in [Content_Types].xml"
+ )
+ return True
+
+ def validate_file_against_xsd(self, xml_file, verbose=False):
+ xml_file = Path(xml_file).resolve()
+ unpacked_dir = self.unpacked_dir.resolve()
+
+ is_valid, current_errors = self._validate_single_file_xsd(
+ xml_file, unpacked_dir
+ )
+
+ if is_valid is None:
+ return None, set()
+ elif is_valid:
+ return True, set()
+
+ original_errors = self._get_original_file_errors(xml_file)
+
+ assert current_errors is not None
+ new_errors = current_errors - original_errors
+
+ new_errors = {
+ e for e in new_errors
+ if not any(pattern in e for pattern in self.IGNORED_VALIDATION_ERRORS)
+ }
+
+ if new_errors:
+ if verbose:
+ relative_path = xml_file.relative_to(unpacked_dir)
+ print(f"FAILED - {relative_path}: {len(new_errors)} new error(s)")
+ for error in list(new_errors)[:3]:
+ truncated = error[:250] + "..." if len(error) > 250 else error
+ print(f" - {truncated}")
+ return False, new_errors
+ else:
+ if verbose:
+ print(
+ f"PASSED - No new errors (original had {len(current_errors)} errors)"
+ )
+ return True, set()
+
+ def validate_against_xsd(self):
+ new_errors = []
+ original_error_count = 0
+ valid_count = 0
+ skipped_count = 0
+
+ for xml_file in self.xml_files:
+ relative_path = str(xml_file.relative_to(self.unpacked_dir))
+ is_valid, new_file_errors = self.validate_file_against_xsd(
+ xml_file, verbose=False
+ )
+
+ if is_valid is None:
+ skipped_count += 1
+ continue
+ elif is_valid and not new_file_errors:
+ valid_count += 1
+ continue
+ elif is_valid:
+ original_error_count += 1
+ valid_count += 1
+ continue
+
+ new_errors.append(f" {relative_path}: {len(new_file_errors)} new error(s)")
+ for error in list(new_file_errors)[:3]:
+ new_errors.append(
+ f" - {error[:250]}..." if len(error) > 250 else f" - {error}"
+ )
+
+ if self.verbose:
+ print(f"Validated {len(self.xml_files)} files:")
+ print(f" - Valid: {valid_count}")
+ print(f" - Skipped (no schema): {skipped_count}")
+ if original_error_count:
+ print(f" - With original errors (ignored): {original_error_count}")
+ file_error_count = sum(1 for e in new_errors if not e.lstrip().startswith("-"))
+ print(f" - With NEW errors: {file_error_count}")
+
+ if new_errors:
+ print("\nFAILED - Found NEW validation errors:")
+ for error in new_errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("\nPASSED - No new XSD validation errors introduced")
+ return True
+
+ def _get_schema_path(self, xml_file):
+ if xml_file.name in self.SCHEMA_MAPPINGS:
+ return self.schemas_dir / self.SCHEMA_MAPPINGS[xml_file.name]
+
+ if xml_file.suffix == ".rels":
+ return self.schemas_dir / self.SCHEMA_MAPPINGS[".rels"]
+
+ if "charts/" in str(xml_file) and xml_file.name.startswith("chart"):
+ return self.schemas_dir / self.SCHEMA_MAPPINGS["chart"]
+
+ if "theme/" in str(xml_file) and xml_file.name.startswith("theme"):
+ return self.schemas_dir / self.SCHEMA_MAPPINGS["theme"]
+
+ if xml_file.parent.name in self.MAIN_CONTENT_FOLDERS:
+ return self.schemas_dir / self.SCHEMA_MAPPINGS[xml_file.parent.name]
+
+ return None
+
+ def _clean_ignorable_namespaces(self, xml_doc):
+ xml_string = lxml.etree.tostring(xml_doc, encoding="unicode")
+ xml_copy = lxml.etree.fromstring(xml_string)
+
+ for elem in xml_copy.iter():
+ attrs_to_remove = []
+
+ for attr in elem.attrib:
+ if "{" in attr:
+ ns = attr.split("}")[0][1:]
+ if ns not in self.OOXML_NAMESPACES:
+ attrs_to_remove.append(attr)
+
+ for attr in attrs_to_remove:
+ del elem.attrib[attr]
+
+ self._remove_ignorable_elements(xml_copy)
+
+ return lxml.etree.ElementTree(xml_copy)
+
+ def _remove_ignorable_elements(self, root):
+ elements_to_remove = []
+
+ for elem in list(root):
+ if not hasattr(elem, "tag") or callable(elem.tag):
+ continue
+
+ tag_str = str(elem.tag)
+ if tag_str.startswith("{"):
+ ns = tag_str.split("}")[0][1:]
+ if ns not in self.OOXML_NAMESPACES:
+ elements_to_remove.append(elem)
+ continue
+
+ self._remove_ignorable_elements(elem)
+
+ for elem in elements_to_remove:
+ root.remove(elem)
+
+ def _preprocess_for_mc_ignorable(self, xml_doc):
+ root = xml_doc.getroot()
+
+ if f"{{{self.MC_NAMESPACE}}}Ignorable" in root.attrib:
+ del root.attrib[f"{{{self.MC_NAMESPACE}}}Ignorable"]
+
+ return xml_doc
+
+ def _validate_single_file_xsd(self, xml_file, base_path):
+ schema_path = self._get_schema_path(xml_file)
+ if not schema_path:
+ return None, None
+
+ try:
+ with open(schema_path, "rb") as xsd_file:
+ parser = lxml.etree.XMLParser()
+ xsd_doc = lxml.etree.parse(
+ xsd_file, parser=parser, base_url=str(schema_path)
+ )
+ schema = lxml.etree.XMLSchema(xsd_doc)
+
+ with open(xml_file, "r") as f:
+ xml_doc = lxml.etree.parse(f)
+
+ xml_doc, _ = self._remove_template_tags_from_text_nodes(xml_doc)
+ xml_doc = self._preprocess_for_mc_ignorable(xml_doc)
+
+ relative_path = xml_file.relative_to(base_path)
+ if (
+ relative_path.parts
+ and relative_path.parts[0] in self.MAIN_CONTENT_FOLDERS
+ ):
+ xml_doc = self._clean_ignorable_namespaces(xml_doc)
+
+ if schema.validate(xml_doc):
+ return True, set()
+ else:
+ errors = set()
+ for error in schema.error_log:
+ errors.add(error.message)
+ return False, errors
+
+ except Exception as e:
+ return False, {str(e)}
+
+ def _get_original_file_errors(self, xml_file):
+ if self.original_file is None:
+ return set()
+
+ import tempfile
+ import zipfile
+
+ xml_file = Path(xml_file).resolve()
+ unpacked_dir = self.unpacked_dir.resolve()
+ relative_path = xml_file.relative_to(unpacked_dir)
+
+ with tempfile.TemporaryDirectory() as temp_dir:
+ temp_path = Path(temp_dir)
+
+ with zipfile.ZipFile(self.original_file, "r") as zip_ref:
+ zip_ref.extractall(temp_path)
+
+ original_xml_file = temp_path / relative_path
+
+ if not original_xml_file.exists():
+ return set()
+
+ is_valid, errors = self._validate_single_file_xsd(
+ original_xml_file, temp_path
+ )
+ return errors if errors else set()
+
+ def _remove_template_tags_from_text_nodes(self, xml_doc):
+ warnings = []
+ template_pattern = re.compile(r"\{\{[^}]*\}\}")
+
+ xml_string = lxml.etree.tostring(xml_doc, encoding="unicode")
+ xml_copy = lxml.etree.fromstring(xml_string)
+
+ def process_text_content(text, content_type):
+ if not text:
+ return text
+ matches = list(template_pattern.finditer(text))
+ if matches:
+ for match in matches:
+ warnings.append(
+ f"Found template tag in {content_type}: {match.group()}"
+ )
+ return template_pattern.sub("", text)
+ return text
+
+ for elem in xml_copy.iter():
+ if not hasattr(elem, "tag") or callable(elem.tag):
+ continue
+ tag_str = str(elem.tag)
+ if tag_str.endswith("}t") or tag_str == "t":
+ continue
+
+ elem.text = process_text_content(elem.text, "text content")
+ elem.tail = process_text_content(elem.tail, "tail content")
+
+ return lxml.etree.ElementTree(xml_copy), warnings
+
+
+if __name__ == "__main__":
+ raise RuntimeError("This module should not be run directly.")
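The whitespace rule that `repair_whitespace_preservation` enforces exists because Word silently trims leading and trailing spaces in a `w:t` element unless it carries `xml:space="preserve"`. A standalone sketch of the same check and repair (stdlib `minidom` here for a self-contained demo; the validator itself uses `defusedxml.minidom`):

```python
import xml.dom.minidom

doc = xml.dom.minidom.parseString(
    '<w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    "<w:r><w:t>trailing space </w:t></w:r></w:p>"
)

repaired = 0
for elem in doc.getElementsByTagName("*"):
    # Same condition as the repair pass: a w:t whose text has leading or
    # trailing whitespace but no xml:space="preserve" attribute.
    if elem.tagName.endswith(":t") and elem.firstChild:
        text = elem.firstChild.nodeValue
        if text and text != text.strip() and elem.getAttribute("xml:space") != "preserve":
            elem.setAttribute("xml:space", "preserve")
            repaired += 1
```

Without the attribute, a round trip through Word would drop the trailing space and subtly corrupt the document's text.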
diff --git a/skills/docx/scripts/office/validators/docx.py b/skills/docx/scripts/office/validators/docx.py
new file mode 100644
index 0000000..fec405e
--- /dev/null
+++ b/skills/docx/scripts/office/validators/docx.py
@@ -0,0 +1,446 @@
+"""
+Validator for Word document XML files against XSD schemas.
+"""
+
+import random
+import re
+import tempfile
+import zipfile
+
+import defusedxml.minidom
+import lxml.etree
+
+from .base import BaseSchemaValidator
+
+
+class DOCXSchemaValidator(BaseSchemaValidator):
+
+ WORD_2006_NAMESPACE = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
+ W14_NAMESPACE = "http://schemas.microsoft.com/office/word/2010/wordml"
+ W16CID_NAMESPACE = "http://schemas.microsoft.com/office/word/2016/wordml/cid"
+
+ ELEMENT_RELATIONSHIP_TYPES = {}
+
+ def validate(self):
+ if not self.validate_xml():
+ return False
+
+ all_valid = True
+ if not self.validate_namespaces():
+ all_valid = False
+
+ if not self.validate_unique_ids():
+ all_valid = False
+
+ if not self.validate_file_references():
+ all_valid = False
+
+ if not self.validate_content_types():
+ all_valid = False
+
+ if not self.validate_against_xsd():
+ all_valid = False
+
+ if not self.validate_whitespace_preservation():
+ all_valid = False
+
+ if not self.validate_deletions():
+ all_valid = False
+
+ if not self.validate_insertions():
+ all_valid = False
+
+ if not self.validate_all_relationship_ids():
+ all_valid = False
+
+ if not self.validate_id_constraints():
+ all_valid = False
+
+ if not self.validate_comment_markers():
+ all_valid = False
+
+ self.compare_paragraph_counts()
+
+ return all_valid
+
+ def validate_whitespace_preservation(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ if xml_file.name != "document.xml":
+ continue
+
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+
+ for elem in root.iter(f"{{{self.WORD_2006_NAMESPACE}}}t"):
+ if elem.text:
+ text = elem.text
+ if re.search(r"^[ \t\n\r]", text) or re.search(
+ r"[ \t\n\r]$", text
+ ):
+ xml_space_attr = f"{{{self.XML_NAMESPACE}}}space"
+ if (
+ xml_space_attr not in elem.attrib
+ or elem.attrib[xml_space_attr] != "preserve"
+ ):
+ text_preview = (
+ repr(text)[:50] + "..."
+ if len(repr(text)) > 50
+ else repr(text)
+ )
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {elem.sourceline}: w:t element with whitespace missing xml:space='preserve': {text_preview}"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} whitespace preservation violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All whitespace is properly preserved")
+ return True
+
+ def validate_deletions(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ if xml_file.name != "document.xml":
+ continue
+
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+ namespaces = {"w": self.WORD_2006_NAMESPACE}
+
+ for t_elem in root.xpath(".//w:del//w:t", namespaces=namespaces):
+ if t_elem.text:
+ text_preview = (
+ repr(t_elem.text)[:50] + "..."
+ if len(repr(t_elem.text)) > 50
+ else repr(t_elem.text)
+ )
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {t_elem.sourceline}: found within : {text_preview}"
+ )
+
+ for instr_elem in root.xpath(
+ ".//w:del//w:instrText", namespaces=namespaces
+ ):
+ text_preview = (
+ repr(instr_elem.text or "")[:50] + "..."
+ if len(repr(instr_elem.text or "")) > 50
+ else repr(instr_elem.text or "")
+ )
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {instr_elem.sourceline}: found within (use ): {text_preview}"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} deletion validation violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - No w:t elements found within w:del elements")
+ return True
+
+ def count_paragraphs_in_unpacked(self):
+ count = 0
+
+ for xml_file in self.xml_files:
+ if xml_file.name != "document.xml":
+ continue
+
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+ paragraphs = root.findall(f".//{{{self.WORD_2006_NAMESPACE}}}p")
+ count = len(paragraphs)
+ except Exception as e:
+ print(f"Error counting paragraphs in unpacked document: {e}")
+
+ return count
+
+ def count_paragraphs_in_original(self):
+ original = self.original_file
+ if original is None:
+ return 0
+
+ count = 0
+
+ try:
+ with tempfile.TemporaryDirectory() as temp_dir:
+ with zipfile.ZipFile(original, "r") as zip_ref:
+ zip_ref.extractall(temp_dir)
+
+ doc_xml_path = temp_dir + "/word/document.xml"
+ root = lxml.etree.parse(doc_xml_path).getroot()
+
+ paragraphs = root.findall(f".//{{{self.WORD_2006_NAMESPACE}}}p")
+ count = len(paragraphs)
+
+ except Exception as e:
+ print(f"Error counting paragraphs in original document: {e}")
+
+ return count
+
+ def validate_insertions(self):
+ errors = []
+
+ for xml_file in self.xml_files:
+ if xml_file.name != "document.xml":
+ continue
+
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+ namespaces = {"w": self.WORD_2006_NAMESPACE}
+
+ invalid_elements = root.xpath(
+ ".//w:ins//w:delText[not(ancestor::w:del)]", namespaces=namespaces
+ )
+
+ for elem in invalid_elements:
+ text_preview = (
+ repr(elem.text or "")[:50] + "..."
+ if len(repr(elem.text or "")) > 50
+ else repr(elem.text or "")
+ )
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {elem.sourceline}: within : {text_preview}"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} insertion validation violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - No w:delText elements within w:ins elements")
+ return True
+
+ def compare_paragraph_counts(self):
+ original_count = self.count_paragraphs_in_original()
+ new_count = self.count_paragraphs_in_unpacked()
+
+ diff = new_count - original_count
+ diff_str = f"+{diff}" if diff > 0 else str(diff)
+ print(f"\nParagraphs: {original_count} → {new_count} ({diff_str})")
+
+ def _parse_id_value(self, val: str, base: int = 16) -> int:
+ return int(val, base)
+
+ def validate_id_constraints(self):
+ errors = []
+ para_id_attr = f"{{{self.W14_NAMESPACE}}}paraId"
+ durable_id_attr = f"{{{self.W16CID_NAMESPACE}}}durableId"
+
+ for xml_file in self.xml_files:
+ try:
+ for elem in lxml.etree.parse(str(xml_file)).iter():
+ if val := elem.get(para_id_attr):
+ if self._parse_id_value(val, base=16) >= 0x80000000:
+ errors.append(
+ f" {xml_file.name}:{elem.sourceline}: paraId={val} >= 0x80000000"
+ )
+
+ if val := elem.get(durable_id_attr):
+ if xml_file.name == "numbering.xml":
+ try:
+ if self._parse_id_value(val, base=10) >= 0x7FFFFFFF:
+ errors.append(
+ f" {xml_file.name}:{elem.sourceline}: "
+ f"durableId={val} >= 0x7FFFFFFF"
+ )
+ except ValueError:
+ errors.append(
+ f" {xml_file.name}:{elem.sourceline}: "
+ f"durableId={val} must be decimal in numbering.xml"
+ )
+ else:
+ if self._parse_id_value(val, base=16) >= 0x7FFFFFFF:
+ errors.append(
+ f" {xml_file.name}:{elem.sourceline}: "
+ f"durableId={val} >= 0x7FFFFFFF"
+ )
+ except Exception:
+ # Malformed files are reported by validate_xml(); skip them here
+ pass
+
+ if errors:
+ print(f"FAILED - {len(errors)} ID constraint violations:")
+ for e in errors:
+ print(e)
+ elif self.verbose:
+ print("PASSED - All paraId/durableId values within constraints")
+ return not errors
+
+ def validate_comment_markers(self):
+ errors = []
+
+ document_xml = None
+ comments_xml = None
+ for xml_file in self.xml_files:
+ if xml_file.name == "document.xml" and "word" in str(xml_file):
+ document_xml = xml_file
+ elif xml_file.name == "comments.xml":
+ comments_xml = xml_file
+
+ if not document_xml:
+ if self.verbose:
+ print("PASSED - No document.xml found (skipping comment validation)")
+ return True
+
+ try:
+ doc_root = lxml.etree.parse(str(document_xml)).getroot()
+ namespaces = {"w": self.WORD_2006_NAMESPACE}
+
+ range_starts = {
+ elem.get(f"{{{self.WORD_2006_NAMESPACE}}}id")
+ for elem in doc_root.xpath(
+ ".//w:commentRangeStart", namespaces=namespaces
+ )
+ }
+ range_ends = {
+ elem.get(f"{{{self.WORD_2006_NAMESPACE}}}id")
+ for elem in doc_root.xpath(
+ ".//w:commentRangeEnd", namespaces=namespaces
+ )
+ }
+ references = {
+ elem.get(f"{{{self.WORD_2006_NAMESPACE}}}id")
+ for elem in doc_root.xpath(
+ ".//w:commentReference", namespaces=namespaces
+ )
+ }
+
+ orphaned_ends = range_ends - range_starts
+ for comment_id in sorted(
+ orphaned_ends, key=lambda x: int(x) if x and x.isdigit() else 0
+ ):
+ errors.append(
+ f' document.xml: commentRangeEnd id="{comment_id}" has no matching commentRangeStart'
+ )
+
+ orphaned_starts = range_starts - range_ends
+ for comment_id in sorted(
+ orphaned_starts, key=lambda x: int(x) if x and x.isdigit() else 0
+ ):
+ errors.append(
+ f' document.xml: commentRangeStart id="{comment_id}" has no matching commentRangeEnd'
+ )
+
+ comment_ids = set()
+ if comments_xml and comments_xml.exists():
+ comments_root = lxml.etree.parse(str(comments_xml)).getroot()
+ comment_ids = {
+ elem.get(f"{{{self.WORD_2006_NAMESPACE}}}id")
+ for elem in comments_root.xpath(
+ ".//w:comment", namespaces=namespaces
+ )
+ }
+
+ marker_ids = range_starts | range_ends | references
+ invalid_refs = marker_ids - comment_ids
+ for comment_id in sorted(
+ invalid_refs, key=lambda x: int(x) if x and x.isdigit() else 0
+ ):
+ if comment_id:
+ errors.append(
+ f' document.xml: marker id="{comment_id}" references non-existent comment'
+ )
+
+ except Exception as e:
+ errors.append(f" Error parsing XML: {e}")
+
+ if errors:
+ print(f"FAILED - {len(errors)} comment marker violations:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All comment markers properly paired")
+ return True
+
+ def repair(self) -> int:
+ repairs = super().repair()
+ repairs += self.repair_durableId()
+ return repairs
+
+ def repair_durableId(self) -> int:
+ repairs = 0
+
+ for xml_file in self.xml_files:
+ try:
+ content = xml_file.read_text(encoding="utf-8")
+ dom = defusedxml.minidom.parseString(content)
+ modified = False
+
+ for elem in dom.getElementsByTagName("*"):
+ if not elem.hasAttribute("w16cid:durableId"):
+ continue
+
+ durable_id = elem.getAttribute("w16cid:durableId")
+ needs_repair = False
+
+ if xml_file.name == "numbering.xml":
+ try:
+ needs_repair = (
+ self._parse_id_value(durable_id, base=10) >= 0x7FFFFFFF
+ )
+ except ValueError:
+ needs_repair = True
+ else:
+ try:
+ needs_repair = (
+ self._parse_id_value(durable_id, base=16) >= 0x7FFFFFFF
+ )
+ except ValueError:
+ needs_repair = True
+
+ if needs_repair:
+ value = random.randint(1, 0x7FFFFFFE)
+ if xml_file.name == "numbering.xml":
+ new_id = str(value)
+ else:
+ new_id = f"{value:08X}"
+
+ elem.setAttribute("w16cid:durableId", new_id)
+ print(
+ f" Repaired: {xml_file.name}: durableId {durable_id} → {new_id}"
+ )
+ repairs += 1
+ modified = True
+
+ if modified:
+ xml_file.write_bytes(dom.toxml(encoding="UTF-8"))
+
+ except Exception:
+ # Skip files that cannot be parsed or rewritten
+ pass
+
+ return repairs
+
+
+if __name__ == "__main__":
+ raise RuntimeError("This module should not be run directly.")
diff --git a/skills/docx/scripts/office/validators/pptx.py b/skills/docx/scripts/office/validators/pptx.py
new file mode 100644
index 0000000..09842aa
--- /dev/null
+++ b/skills/docx/scripts/office/validators/pptx.py
@@ -0,0 +1,275 @@
+"""
+Validator for PowerPoint presentation XML files against XSD schemas.
+"""
+
+import re
+
+from .base import BaseSchemaValidator
+
+
+class PPTXSchemaValidator(BaseSchemaValidator):
+
+ PRESENTATIONML_NAMESPACE = (
+ "http://schemas.openxmlformats.org/presentationml/2006/main"
+ )
+
+ ELEMENT_RELATIONSHIP_TYPES = {
+ "sldid": "slide",
+ "sldmasterid": "slidemaster",
+ "notesmasterid": "notesmaster",
+ "sldlayoutid": "slidelayout",
+ "themeid": "theme",
+ "tablestyleid": "tablestyles",
+ }
+
+ def validate(self):
+ if not self.validate_xml():
+ return False
+
+ all_valid = True
+ if not self.validate_namespaces():
+ all_valid = False
+
+ if not self.validate_unique_ids():
+ all_valid = False
+
+ if not self.validate_uuid_ids():
+ all_valid = False
+
+ if not self.validate_file_references():
+ all_valid = False
+
+ if not self.validate_slide_layout_ids():
+ all_valid = False
+
+ if not self.validate_content_types():
+ all_valid = False
+
+ if not self.validate_against_xsd():
+ all_valid = False
+
+ if not self.validate_notes_slide_references():
+ all_valid = False
+
+ if not self.validate_all_relationship_ids():
+ all_valid = False
+
+ if not self.validate_no_duplicate_slide_layouts():
+ all_valid = False
+
+ return all_valid
+
+ def validate_uuid_ids(self):
+ import lxml.etree
+
+ errors = []
+ uuid_pattern = re.compile(
+ r"^[\{\(]?[0-9A-Fa-f]{8}-?[0-9A-Fa-f]{4}-?[0-9A-Fa-f]{4}-?[0-9A-Fa-f]{4}-?[0-9A-Fa-f]{12}[\}\)]?$"
+ )
+
+ for xml_file in self.xml_files:
+ try:
+ root = lxml.etree.parse(str(xml_file)).getroot()
+
+ for elem in root.iter():
+ for attr, value in elem.attrib.items():
+ attr_name = attr.split("}")[-1].lower()
+ if attr_name == "id" or attr_name.endswith("id"):
+ if self._looks_like_uuid(value):
+ if not uuid_pattern.match(value):
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: "
+ f"Line {elem.sourceline}: ID '{value}' appears to be a UUID but contains invalid hex characters"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {xml_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} UUID ID validation errors:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All UUID-like IDs contain valid hex values")
+ return True
+
+ def _looks_like_uuid(self, value):
+ clean_value = value.strip("{}()").replace("-", "")
+ return len(clean_value) == 32 and all(c.isalnum() for c in clean_value)
+
+ def validate_slide_layout_ids(self):
+ import lxml.etree
+
+ errors = []
+
+ slide_masters = list(self.unpacked_dir.glob("ppt/slideMasters/*.xml"))
+
+ if not slide_masters:
+ if self.verbose:
+ print("PASSED - No slide masters found")
+ return True
+
+ for slide_master in slide_masters:
+ try:
+ root = lxml.etree.parse(str(slide_master)).getroot()
+
+ rels_file = slide_master.parent / "_rels" / f"{slide_master.name}.rels"
+
+ if not rels_file.exists():
+ errors.append(
+ f" {slide_master.relative_to(self.unpacked_dir)}: "
+ f"Missing relationships file: {rels_file.relative_to(self.unpacked_dir)}"
+ )
+ continue
+
+ rels_root = lxml.etree.parse(str(rels_file)).getroot()
+
+ valid_layout_rids = set()
+ for rel in rels_root.findall(
+ f".//{{{self.PACKAGE_RELATIONSHIPS_NAMESPACE}}}Relationship"
+ ):
+ rel_type = rel.get("Type", "")
+ if "slideLayout" in rel_type:
+ valid_layout_rids.add(rel.get("Id"))
+
+ for sld_layout_id in root.findall(
+ f".//{{{self.PRESENTATIONML_NAMESPACE}}}sldLayoutId"
+ ):
+ r_id = sld_layout_id.get(
+ f"{{{self.OFFICE_RELATIONSHIPS_NAMESPACE}}}id"
+ )
+ layout_id = sld_layout_id.get("id")
+
+ if r_id and r_id not in valid_layout_rids:
+ errors.append(
+ f" {slide_master.relative_to(self.unpacked_dir)}: "
+ f"Line {sld_layout_id.sourceline}: sldLayoutId with id='{layout_id}' "
+ f"references r:id='{r_id}' which is not found in slide layout relationships"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {slide_master.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print(f"FAILED - Found {len(errors)} slide layout ID validation errors:")
+ for error in errors:
+ print(error)
+ print(
+ "Remove invalid references or add missing slide layouts to the relationships file."
+ )
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All slide layout IDs reference valid slide layouts")
+ return True
+
+ def validate_no_duplicate_slide_layouts(self):
+ import lxml.etree
+
+ errors = []
+ slide_rels_files = list(self.unpacked_dir.glob("ppt/slides/_rels/*.xml.rels"))
+
+ for rels_file in slide_rels_files:
+ try:
+ root = lxml.etree.parse(str(rels_file)).getroot()
+
+ layout_rels = [
+ rel
+ for rel in root.findall(
+ f".//{{{self.PACKAGE_RELATIONSHIPS_NAMESPACE}}}Relationship"
+ )
+ if "slideLayout" in rel.get("Type", "")
+ ]
+
+ if len(layout_rels) > 1:
+ errors.append(
+ f" {rels_file.relative_to(self.unpacked_dir)}: has {len(layout_rels)} slideLayout references"
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {rels_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ if errors:
+ print("FAILED - Found slides with duplicate slideLayout references:")
+ for error in errors:
+ print(error)
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All slides have exactly one slideLayout reference")
+ return True
+
+ def validate_notes_slide_references(self):
+ import lxml.etree
+
+ errors = []
+ notes_slide_references = {}
+
+ slide_rels_files = list(self.unpacked_dir.glob("ppt/slides/_rels/*.xml.rels"))
+
+ if not slide_rels_files:
+ if self.verbose:
+ print("PASSED - No slide relationship files found")
+ return True
+
+ for rels_file in slide_rels_files:
+ try:
+ root = lxml.etree.parse(str(rels_file)).getroot()
+
+ for rel in root.findall(
+ f".//{{{self.PACKAGE_RELATIONSHIPS_NAMESPACE}}}Relationship"
+ ):
+ rel_type = rel.get("Type", "")
+ if "notesSlide" in rel_type:
+ target = rel.get("Target", "")
+ if target:
+ normalized_target = target.replace("../", "")
+
+ # stem of "slide1.xml.rels" is "slide1.xml"; strip to "slide1"
+ slide_name = rels_file.stem.replace(".xml", "")
+
+ if normalized_target not in notes_slide_references:
+ notes_slide_references[normalized_target] = []
+ notes_slide_references[normalized_target].append(
+ (slide_name, rels_file)
+ )
+
+ except Exception as e:
+ errors.append(
+ f" {rels_file.relative_to(self.unpacked_dir)}: Error: {e}"
+ )
+
+ for target, references in notes_slide_references.items():
+ if len(references) > 1:
+ slide_names = [ref[0] for ref in references]
+ errors.append(
+ f" Notes slide '{target}' is referenced by multiple slides: {', '.join(slide_names)}"
+ )
+ for slide_name, rels_file in references:
+ errors.append(f" - {rels_file.relative_to(self.unpacked_dir)}")
+
+ if errors:
+ print(
+ f"FAILED - Found {len([e for e in errors if not e.startswith(' ')])} notes slide reference validation errors:"
+ )
+ for error in errors:
+ print(error)
+ print("Each slide may optionally have its own slide file.")
+ return False
+ else:
+ if self.verbose:
+ print("PASSED - All notes slide references are unique")
+ return True
+
+
+if __name__ == "__main__":
+ raise RuntimeError("This module should not be run directly.")
diff --git a/skills/docx/scripts/office/validators/redlining.py b/skills/docx/scripts/office/validators/redlining.py
new file mode 100644
index 0000000..71c81b6
--- /dev/null
+++ b/skills/docx/scripts/office/validators/redlining.py
@@ -0,0 +1,247 @@
+"""
+Validator for tracked changes in Word documents.
+"""
+
+import subprocess
+import tempfile
+import zipfile
+from pathlib import Path
+
+
+class RedliningValidator:
+
+ def __init__(self, unpacked_dir, original_docx, verbose=False, author="Claude"):
+ self.unpacked_dir = Path(unpacked_dir)
+ self.original_docx = Path(original_docx)
+ self.verbose = verbose
+ self.author = author
+ self.namespaces = {
+ "w": "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
+ }
+
+ def repair(self) -> int:
+ return 0
+
+ def validate(self):
+ modified_file = self.unpacked_dir / "word" / "document.xml"
+ if not modified_file.exists():
+ print(f"FAILED - Modified document.xml not found at {modified_file}")
+ return False
+
+ try:
+ import xml.etree.ElementTree as ET
+
+ tree = ET.parse(modified_file)
+ root = tree.getroot()
+
+ del_elements = root.findall(".//w:del", self.namespaces)
+ ins_elements = root.findall(".//w:ins", self.namespaces)
+
+ author_del_elements = [
+ elem
+ for elem in del_elements
+ if elem.get(f"{{{self.namespaces['w']}}}author") == self.author
+ ]
+ author_ins_elements = [
+ elem
+ for elem in ins_elements
+ if elem.get(f"{{{self.namespaces['w']}}}author") == self.author
+ ]
+
+ if not author_del_elements and not author_ins_elements:
+ if self.verbose:
+ print(f"PASSED - No tracked changes by {self.author} found.")
+ return True
+
+ except Exception:
+ pass
+
+ with tempfile.TemporaryDirectory() as temp_dir:
+ temp_path = Path(temp_dir)
+
+ try:
+ with zipfile.ZipFile(self.original_docx, "r") as zip_ref:
+ zip_ref.extractall(temp_path)
+ except Exception as e:
+ print(f"FAILED - Error unpacking original docx: {e}")
+ return False
+
+ original_file = temp_path / "word" / "document.xml"
+ if not original_file.exists():
+ print(
+ f"FAILED - Original document.xml not found in {self.original_docx}"
+ )
+ return False
+
+ try:
+ import xml.etree.ElementTree as ET
+
+ modified_tree = ET.parse(modified_file)
+ modified_root = modified_tree.getroot()
+ original_tree = ET.parse(original_file)
+ original_root = original_tree.getroot()
+ except ET.ParseError as e:
+ print(f"FAILED - Error parsing XML files: {e}")
+ return False
+
+ self._remove_author_tracked_changes(original_root)
+ self._remove_author_tracked_changes(modified_root)
+
+ modified_text = self._extract_text_content(modified_root)
+ original_text = self._extract_text_content(original_root)
+
+ if modified_text != original_text:
+ error_message = self._generate_detailed_diff(
+ original_text, modified_text
+ )
+ print(error_message)
+ return False
+
+ if self.verbose:
+ print(f"PASSED - All changes by {self.author} are properly tracked")
+ return True
+
+ def _generate_detailed_diff(self, original_text, modified_text):
+ error_parts = [
+ f"FAILED - Document text doesn't match after removing {self.author}'s tracked changes",
+ "",
+ "Likely causes:",
+ " 1. Modified text inside another author's or tags",
+ " 2. Made edits without proper tracked changes",
+ " 3. Didn't nest inside when deleting another's insertion",
+ "",
+ "For pre-redlined documents, use correct patterns:",
+ " - To reject another's INSERTION: Nest inside their ",
+ " - To restore another's DELETION: Add new AFTER their ",
+ "",
+ ]
+
+ git_diff = self._get_git_word_diff(original_text, modified_text)
+ if git_diff:
+ error_parts.extend(["Differences:", "============", git_diff])
+ else:
+ error_parts.append("Unable to generate word diff (git not available)")
+
+ return "\n".join(error_parts)
+
+ def _get_git_word_diff(self, original_text, modified_text):
+ try:
+ with tempfile.TemporaryDirectory() as temp_dir:
+ temp_path = Path(temp_dir)
+
+ original_file = temp_path / "original.txt"
+ modified_file = temp_path / "modified.txt"
+
+ original_file.write_text(original_text, encoding="utf-8")
+ modified_file.write_text(modified_text, encoding="utf-8")
+
+ result = subprocess.run(
+ [
+ "git",
+ "diff",
+ "--word-diff=plain",
+ "--word-diff-regex=.",
+ "-U0",
+ "--no-index",
+ str(original_file),
+ str(modified_file),
+ ],
+ capture_output=True,
+ text=True,
+ )
+
+ if result.stdout.strip():
+ lines = result.stdout.split("\n")
+ content_lines = []
+ in_content = False
+ for line in lines:
+ if line.startswith("@@"):
+ in_content = True
+ continue
+ if in_content and line.strip():
+ content_lines.append(line)
+
+ if content_lines:
+ return "\n".join(content_lines)
+
+ result = subprocess.run(
+ [
+ "git",
+ "diff",
+ "--word-diff=plain",
+ "-U0",
+ "--no-index",
+ str(original_file),
+ str(modified_file),
+ ],
+ capture_output=True,
+ text=True,
+ )
+
+ if result.stdout.strip():
+ lines = result.stdout.split("\n")
+ content_lines = []
+ in_content = False
+ for line in lines:
+ if line.startswith("@@"):
+ in_content = True
+ continue
+ if in_content and line.strip():
+ content_lines.append(line)
+ return "\n".join(content_lines)
+
+ except Exception:
+ pass
+
+ return None
+
+ def _remove_author_tracked_changes(self, root):
+ ins_tag = f"{{{self.namespaces['w']}}}ins"
+ del_tag = f"{{{self.namespaces['w']}}}del"
+ author_attr = f"{{{self.namespaces['w']}}}author"
+
+ for parent in root.iter():
+ to_remove = []
+ for child in parent:
+ if child.tag == ins_tag and child.get(author_attr) == self.author:
+ to_remove.append(child)
+ for elem in to_remove:
+ parent.remove(elem)
+
+ deltext_tag = f"{{{self.namespaces['w']}}}delText"
+ t_tag = f"{{{self.namespaces['w']}}}t"
+
+ for parent in root.iter():
+ to_process = []
+ for child in parent:
+ if child.tag == del_tag and child.get(author_attr) == self.author:
+ to_process.append((child, list(parent).index(child)))
+
+ for del_elem, del_index in reversed(to_process):
+ for elem in del_elem.iter():
+ if elem.tag == deltext_tag:
+ elem.tag = t_tag
+
+ for child in reversed(list(del_elem)):
+ parent.insert(del_index, child)
+ parent.remove(del_elem)
+
+ def _extract_text_content(self, root):
+ p_tag = f"{{{self.namespaces['w']}}}p"
+ t_tag = f"{{{self.namespaces['w']}}}t"
+
+ paragraphs = []
+ for p_elem in root.findall(f".//{p_tag}"):
+ text_parts = []
+ for t_elem in p_elem.findall(f".//{t_tag}"):
+ if t_elem.text:
+ text_parts.append(t_elem.text)
+ paragraph_text = "".join(text_parts)
+ if paragraph_text:
+ paragraphs.append(paragraph_text)
+
+ return "\n".join(paragraphs)
+
+
+if __name__ == "__main__":
+ raise RuntimeError("This module should not be run directly.")
diff --git a/skills/docx/scripts/templates/comments.xml b/skills/docx/scripts/templates/comments.xml
new file mode 100644
index 0000000..cd01a7d
--- /dev/null
+++ b/skills/docx/scripts/templates/comments.xml
@@ -0,0 +1,3 @@
+
+
+
diff --git a/skills/docx/scripts/templates/commentsExtended.xml b/skills/docx/scripts/templates/commentsExtended.xml
new file mode 100644
index 0000000..411003c
--- /dev/null
+++ b/skills/docx/scripts/templates/commentsExtended.xml
@@ -0,0 +1,3 @@
+
+
+
diff --git a/skills/docx/scripts/templates/commentsExtensible.xml b/skills/docx/scripts/templates/commentsExtensible.xml
new file mode 100644
index 0000000..f5572d7
--- /dev/null
+++ b/skills/docx/scripts/templates/commentsExtensible.xml
@@ -0,0 +1,3 @@
+
+
+
diff --git a/skills/docx/scripts/templates/commentsIds.xml b/skills/docx/scripts/templates/commentsIds.xml
new file mode 100644
index 0000000..32f1629
--- /dev/null
+++ b/skills/docx/scripts/templates/commentsIds.xml
@@ -0,0 +1,3 @@
+
+
+
diff --git a/skills/docx/scripts/templates/people.xml b/skills/docx/scripts/templates/people.xml
new file mode 100644
index 0000000..3803d2d
--- /dev/null
+++ b/skills/docx/scripts/templates/people.xml
@@ -0,0 +1,3 @@
+
+
+
diff --git a/skills/email-and-password-best-practices/SKILL.md b/skills/email-and-password-best-practices/SKILL.md
new file mode 100644
index 0000000..285f8a9
--- /dev/null
+++ b/skills/email-and-password-best-practices/SKILL.md
@@ -0,0 +1,224 @@
+---
+name: email-and-password-best-practices
+description: This skill provides guidance and enforcement rules for implementing secure email and password authentication using Better Auth.
+---
+
+## Email Verification Setup
+
+When enabling email/password authentication, configure `emailVerification.sendVerificationEmail` to verify user email addresses. This helps prevent fake sign-ups and ensures users have access to the email they registered with.
+
+```ts
+import { betterAuth } from "better-auth";
+import { sendEmail } from "./email"; // your email sending function
+
+export const auth = betterAuth({
+ emailVerification: {
+ sendVerificationEmail: async ({ user, url, token }, request) => {
+ await sendEmail({
+ to: user.email,
+ subject: "Verify your email address",
+ text: `Click the link to verify your email: ${url}`,
+ });
+ },
+ },
+});
+```
+
+**Note**: The `url` parameter contains the full verification link. The `token` is available if you need to build a custom verification URL.
+
+### Requiring Email Verification
+
+For stricter security, enable `emailAndPassword.requireEmailVerification` to block sign-in until the user verifies their email. When enabled, unverified users will receive a new verification email on each sign-in attempt.
+
+```ts
+export const auth = betterAuth({
+ emailAndPassword: {
+ requireEmailVerification: true,
+ },
+});
+```
+
+**Note**: This requires `sendVerificationEmail` to be configured and only applies to email/password sign-ins.
+
+## Client-Side Validation
+
+While Better Auth validates inputs server-side, implementing client-side validation is still recommended for two key reasons:
+
+1. **Improved UX**: Users receive immediate feedback when inputs don't meet requirements, rather than waiting for a server round-trip.
+2. **Reduced server load**: Invalid requests are caught early, minimizing unnecessary network traffic to your auth server.
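+
+For example, a minimal sketch of such checks (the rules shown are illustrative assumptions; the 8–128 character limits mirror Better Auth's server-side defaults):
+
+```ts
+// Illustrative client-side checks to run before calling the auth client.
+function validateSignUpInput(email: string, password: string): string[] {
+ const errors: string[] = [];
+ // Lightweight email shape check; the server remains the source of truth
+ if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
+ errors.push("Enter a valid email address.");
+ }
+ if (password.length < 8) errors.push("Password must be at least 8 characters.");
+ if (password.length > 128) errors.push("Password must be at most 128 characters.");
+ return errors; // display to the user before making the request
+}
+```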
+
+## Callback URLs
+
+Always use absolute URLs (including the origin) for callback URLs in sign-up and sign-in requests. This prevents Better Auth from needing to infer the origin, which can cause issues when your backend and frontend are on different domains.
+
+```ts
+const { data, error } = await authClient.signUp.email({
+ // ...email, password, name, and other sign-up fields
+ callbackURL: "https://example.com/callback", // absolute URL with origin
+});
+```
+
+## Password Reset Flows
+
+Password reset flows are essential to any email/password system, so we recommend setting them up.
+
+To allow users to reset a password, first provide a `sendResetPassword` function in the `emailAndPassword` configuration:
+
+```ts
+import { betterAuth } from "better-auth";
+import { sendEmail } from "./email"; // your email sending function
+
+export const auth = betterAuth({
+ emailAndPassword: {
+ enabled: true,
+ // Custom email sending function to send reset-password email
+ sendResetPassword: async ({ user, url, token }, request) => {
+ void sendEmail({
+ to: user.email,
+ subject: "Reset your password",
+ text: `Click the link to reset your password: ${url}`,
+ });
+ },
+ // Optional event hook
+ onPasswordReset: async ({ user }, request) => {
+ // your logic here
+ console.log(`Password for user ${user.email} has been reset.`);
+ },
+ },
+});
+```
+
+### Security Considerations
+
+Better Auth implements several security measures in the password reset flow:
+
+#### Timing attack prevention
+
+- **Background email sending**: Better Auth uses `runInBackgroundOrAwait` internally to send reset emails without blocking the response. This prevents attackers from measuring response times to determine if an email exists.
+- **Dummy operations on invalid requests**: When a user is not found, Better Auth still performs token generation and a database lookup (with a dummy value) to maintain consistent response times.
+- **Constant response message**: The API always returns `"If this email exists in our system, check your email for the reset link"` regardless of whether the user exists.
+
+On serverless platforms, configure a background task handler to ensure emails are sent reliably:
+
+```ts
+export const auth = betterAuth({
+ advanced: {
+ backgroundTasks: {
+ handler: (promise) => {
+ // Use platform-specific methods like waitUntil
+ waitUntil(promise);
+ },
+ },
+ },
+});
+```
+
+#### Token security
+
+- **Cryptographically random tokens**: Reset tokens are generated using `generateId(24)`, producing a 24-character alphanumeric string (a-z, A-Z, 0-9) with high entropy.
+- **Token expiration**: Tokens expire after **1 hour** by default. Configure with `resetPasswordTokenExpiresIn` (in seconds):
+
+```ts
+export const auth = betterAuth({
+ emailAndPassword: {
+ enabled: true,
+ resetPasswordTokenExpiresIn: 60 * 30, // 30 minutes
+ },
+});
+```
+
+- **Single-use tokens**: Tokens are deleted immediately after successful password reset, preventing reuse.
+
+#### Session revocation
+
+Enable `revokeSessionsOnPasswordReset` to invalidate all existing sessions when a password is reset. This ensures that if an attacker has an active session, it will be terminated:
+
+```ts
+export const auth = betterAuth({
+ emailAndPassword: {
+ enabled: true,
+ revokeSessionsOnPasswordReset: true,
+ },
+});
+```
+
+#### Redirect URL validation
+
+The `redirectTo` parameter is validated against your `trustedOrigins` configuration to prevent open redirect attacks. Malicious redirect URLs will be rejected with a 403 error.
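+
+A sketch of that configuration (the domains are placeholders):
+
+```ts
+export const auth = betterAuth({
+ // redirectTo values must resolve to one of these origins
+ trustedOrigins: ["https://example.com", "https://app.example.com"],
+});
+```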
+
+#### Password requirements
+
+During password reset, the new password must meet length requirements:
+- **Minimum**: 8 characters (default), configurable via `minPasswordLength`
+- **Maximum**: 128 characters (default), configurable via `maxPasswordLength`
+
+```ts
+export const auth = betterAuth({
+ emailAndPassword: {
+ enabled: true,
+ minPasswordLength: 12,
+ maxPasswordLength: 256,
+ },
+});
+```
+
+### Sending the password reset
+
+Once the password reset configuration is set up, call the `requestPasswordReset` function to send a password reset link to the user. If the user exists, it triggers the `sendResetPassword` function you provided in the auth config.
+
+```ts
+const data = await auth.api.requestPasswordReset({
+ body: {
+ email: "john.doe@example.com", // required
+ redirectTo: "https://example.com/reset-password",
+ },
+});
+```
+
+Or with the auth client:
+
+```ts
+const { data, error } = await authClient.requestPasswordReset({
+ email: "john.doe@example.com", // required
+ redirectTo: "https://example.com/reset-password",
+});
+```
+
+**Note**: While only the `email` field is required, we also recommend setting `redirectTo` for a smoother user experience.
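+
+### Resetting the password
+
+When the user lands on your `redirectTo` page, the reset token is included in the URL. A minimal sketch of completing the flow with the auth client's `resetPassword` method (this assumes the token arrives as a `token` query parameter; verify against your redirect handling):
+
+```ts
+// On the reset-password page: read the token from the URL,
+// then submit the user's new password.
+const token = new URLSearchParams(window.location.search).get("token");
+
+if (token) {
+ const { error } = await authClient.resetPassword({
+ newPassword: "new-secure-password",
+ token,
+ });
+}
+```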
+
+## Password Hashing
+
+Better Auth uses `scrypt` by default for password hashing. This is a solid choice because:
+
+- It's designed to be slow and memory-intensive, making brute-force attacks costly
+- It's natively supported by Node.js (no external dependencies)
+- OWASP recommends it when Argon2id isn't available
+
+### Custom Hashing Algorithm
+
+To use a different algorithm (e.g., Argon2id), provide custom `hash` and `verify` functions in the `emailAndPassword.password` configuration:
+
+```ts
+import { betterAuth } from "better-auth";
+import { Algorithm, hash, verify, type Options } from "@node-rs/argon2";
+
+const argon2Options: Options = {
+ memoryCost: 65536, // 64 MiB
+ timeCost: 3, // 3 iterations
+ parallelism: 4, // 4 parallel lanes
+ outputLen: 32, // 32 byte output
+ algorithm: Algorithm.Argon2id, // Argon2id variant
+};
+
+export const auth = betterAuth({
+ emailAndPassword: {
+ enabled: true,
+ password: {
+ hash: (password) => hash(password, argon2Options),
+ verify: ({ password, hash: storedHash }) =>
+ verify(storedHash, password, argon2Options),
+ },
+ },
+});
+```
+
+**Note**: If you switch hashing algorithms on an existing system, users with passwords hashed using the old algorithm won't be able to sign in. Plan a migration strategy if needed.
diff --git a/skills/email-and-password-best-practices/email-and-password-best-practices b/skills/email-and-password-best-practices/email-and-password-best-practices
new file mode 120000
index 0000000..36f5c3c
--- /dev/null
+++ b/skills/email-and-password-best-practices/email-and-password-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/email-and-password-best-practices/
\ No newline at end of file
diff --git a/skills/email-sequence/SKILL.md b/skills/email-sequence/SKILL.md
new file mode 100644
index 0000000..6b50546
--- /dev/null
+++ b/skills/email-sequence/SKILL.md
@@ -0,0 +1,306 @@
+---
+name: email-sequence
+version: 1.0.0
+description: When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email program. Also use when the user mentions "email sequence," "drip campaign," "nurture sequence," "onboarding emails," "welcome sequence," "re-engagement emails," "email automation," or "lifecycle emails." For in-app onboarding, see onboarding-cro.
+---
+
+# Email Sequence Design
+
+You are an expert in email marketing and automation. Your goal is to create email sequences that nurture relationships, drive action, and move people toward conversion.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before creating a sequence, understand:
+
+1. **Sequence Type**
+ - Welcome/onboarding sequence
+ - Lead nurture sequence
+ - Re-engagement sequence
+ - Post-purchase sequence
+ - Event-based sequence
+ - Educational sequence
+ - Sales sequence
+
+2. **Audience Context**
+ - Who are they?
+ - What triggered them into this sequence?
+ - What do they already know/believe?
+ - What's their current relationship with you?
+
+3. **Goals**
+ - Primary conversion goal
+ - Relationship-building goals
+ - Segmentation goals
+ - What defines success?
+
+---
+
+## Core Principles
+
+### 1. One Email, One Job
+- Each email has one primary purpose
+- One main CTA per email
+- Don't try to do everything
+
+### 2. Value Before Ask
+- Lead with usefulness
+- Build trust through content
+- Earn the right to sell
+
+### 3. Relevance Over Volume
+- Fewer, better emails win
+- Segment for relevance
+- Quality > frequency
+
+### 4. Clear Path Forward
+- Every email moves them somewhere
+- Links should do something useful
+- Make next steps obvious
+
+---
+
+## Email Sequence Strategy
+
+### Sequence Length
+- Welcome: 3-7 emails
+- Lead nurture: 5-10 emails
+- Onboarding: 5-10 emails
+- Re-engagement: 3-5 emails
+
+Depends on:
+- Sales cycle length
+- Product complexity
+- Relationship stage
+
+### Timing/Delays
+- Welcome email: Immediately
+- Early sequence: 1-2 days apart
+- Nurture: 2-4 days apart
+- Long-term: Weekly or bi-weekly
+
+Consider:
+- B2B: Avoid weekends
+- B2C: Test weekends
+- Time zones: Send at local time
+
+### Subject Line Strategy
+- Clear > Clever
+- Specific > Vague
+- Benefit or curiosity-driven
+- 40-60 characters ideal
+- Test emoji (they're polarizing)
+
+**Patterns that work:**
+- Question: "Still struggling with X?"
+- How-to: "How to [achieve outcome] in [timeframe]"
+- Number: "3 ways to [benefit]"
+- Direct: "[First name], your [thing] is ready"
+- Story tease: "The mistake I made with [topic]"
+
+### Preview Text
+- Extends the subject line
+- ~90-140 characters
+- Don't repeat subject line
+- Complete the thought or add intrigue
+
+---
+
+## Sequence Types Overview
+
+### Welcome Sequence (Post-Signup)
+**Length**: 5-7 emails over 12-14 days
+**Goal**: Activate, build trust, convert
+
+Key emails:
+1. Welcome + deliver promised value (immediate)
+2. Quick win (day 1-2)
+3. Story/Why (day 3-4)
+4. Social proof (day 5-6)
+5. Overcome objection (day 7-8)
+6. Core feature highlight (day 9-11)
+7. Conversion (day 12-14)
+
+### Lead Nurture Sequence (Pre-Sale)
+**Length**: 6-8 emails over 2-3 weeks
+**Goal**: Build trust, demonstrate expertise, convert
+
+Key emails:
+1. Deliver lead magnet + intro (immediate)
+2. Expand on topic (day 2-3)
+3. Problem deep-dive (day 4-5)
+4. Solution framework (day 6-8)
+5. Case study (day 9-11)
+6. Differentiation (day 12-14)
+7. Objection handler (day 15-18)
+8. Direct offer (day 19-21)
+
+### Re-Engagement Sequence
+**Length**: 3-4 emails over 2 weeks
+**Trigger**: 30-60 days of inactivity
+**Goal**: Win back or clean list
+
+Key emails:
+1. Check-in (genuine concern)
+2. Value reminder (what's new)
+3. Incentive (special offer)
+4. Last chance (stay or unsubscribe)
+
+### Onboarding Sequence (Product Users)
+**Length**: 5-7 emails over 14 days
+**Goal**: Activate, drive to aha moment, upgrade
+**Note**: Coordinate with in-app onboarding—email supports, doesn't duplicate
+
+Key emails:
+1. Welcome + first step (immediate)
+2. Getting started help (day 1)
+3. Feature highlight (day 2-3)
+4. Success story (day 4-5)
+5. Check-in (day 7)
+6. Advanced tip (day 10-12)
+7. Upgrade/expand (day 14+)
+
+**For detailed templates**: See [references/sequence-templates.md](references/sequence-templates.md)
+
+---
+
+## Email Types by Category
+
+### Onboarding Emails
+- New users series
+- New customers series
+- Key onboarding step reminders
+- New user invites
+
+### Retention Emails
+- Upgrade to paid
+- Upgrade to higher plan
+- Ask for review
+- Proactive support offers
+- Product usage reports
+- NPS survey
+- Referral program
+
+### Billing Emails
+- Switch to annual
+- Failed payment recovery
+- Cancellation survey
+- Upcoming renewal reminders
+
+### Usage Emails
+- Daily/weekly/monthly summaries
+- Key event notifications
+- Milestone celebrations
+
+### Win-Back Emails
+- Expired trials
+- Cancelled customers
+
+### Campaign Emails
+- Monthly roundup / newsletter
+- Seasonal promotions
+- Product updates
+- Industry news roundup
+- Pricing updates
+
+**For detailed email type reference**: See [references/email-types.md](references/email-types.md)
+
+---
+
+## Email Copy Guidelines
+
+### Structure
+1. **Hook**: First line grabs attention
+2. **Context**: Why this matters to them
+3. **Value**: The useful content
+4. **CTA**: What to do next
+5. **Sign-off**: Human, warm close
+
+### Formatting
+- Short paragraphs (1-3 sentences)
+- White space between sections
+- Bullet points for scannability
+- Bold for emphasis (sparingly)
+- Mobile-first (most read on phone)
+
+### Tone
+- Conversational, not formal
+- First-person (I/we) and second-person (you)
+- Active voice
+- Read it out loud—does it sound human?
+
+### Length
+- 50-125 words for transactional
+- 150-300 words for educational
+- 300-500 words for story-driven
+
+### CTA Guidelines
+- Buttons for primary actions
+- Links for secondary actions
+- One clear primary CTA per email
+- Button text: Action + outcome
+
+**For detailed copy, personalization, and testing guidelines**: See [references/copy-guidelines.md](references/copy-guidelines.md)
+
+---
+
+## Output Format
+
+### Sequence Overview
+```
+Sequence Name: [Name]
+Trigger: [What starts the sequence]
+Goal: [Primary conversion goal]
+Length: [Number of emails]
+Timing: [Delay between emails]
+Exit Conditions: [When they leave the sequence]
+```
+
+### For Each Email
+```
+Email [#]: [Name/Purpose]
+Send: [Timing]
+Subject: [Subject line]
+Preview: [Preview text]
+Body: [Full copy]
+CTA: [Button text] → [Link destination]
+Segment/Conditions: [If applicable]
+```
+
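+If you feed this output into automation tooling, the per-email template above can be modeled as a small type (a sketch; the field names are illustrative, not a required schema):
+
+```ts
+interface SequenceEmail {
+ name: string; // Name/purpose
+ sendAfterDays: number; // Delay from sequence start
+ subject: string;
+ previewText: string;
+ body: string;
+ cta: { label: string; url: string };
+ conditions?: string; // Optional segment/conditions
+}
+```
+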
+### Metrics Plan
+What to measure and benchmarks
+
+---
+
+## Task-Specific Questions
+
+1. What triggers entry to this sequence?
+2. What's the primary goal/conversion action?
+3. What do they already know about you?
+4. What other emails are they receiving?
+5. What's your current email performance?
+
+---
+
+## Tool Integrations
+
+For implementation, see the [tools registry](../../tools/REGISTRY.md). Key email tools:
+
+| Tool | Best For | MCP | Guide |
+|------|----------|:---:|-------|
+| **Customer.io** | Behavior-based automation | - | [customer-io.md](../../tools/integrations/customer-io.md) |
+| **Mailchimp** | SMB email marketing | ✓ | [mailchimp.md](../../tools/integrations/mailchimp.md) |
+| **Resend** | Developer-friendly transactional | ✓ | [resend.md](../../tools/integrations/resend.md) |
+| **SendGrid** | Transactional email at scale | - | [sendgrid.md](../../tools/integrations/sendgrid.md) |
+| **Kit** | Creator/newsletter focused | - | [kit.md](../../tools/integrations/kit.md) |
+
+---
+
+## Related Skills
+
+- **onboarding-cro**: For in-app onboarding (email supports this)
+- **copywriting**: For landing pages emails link to
+- **ab-test-setup**: For testing email elements
+- **popup-cro**: For email capture popups
diff --git a/skills/email-sequence/email-sequence b/skills/email-sequence/email-sequence
new file mode 120000
index 0000000..1077c78
--- /dev/null
+++ b/skills/email-sequence/email-sequence
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/email-sequence/
\ No newline at end of file
diff --git a/skills/email-sequence/references/copy-guidelines.md b/skills/email-sequence/references/copy-guidelines.md
new file mode 100644
index 0000000..054df59
--- /dev/null
+++ b/skills/email-sequence/references/copy-guidelines.md
@@ -0,0 +1,103 @@
+# Email Copy Guidelines
+
+## Structure
+
+1. **Hook**: First line grabs attention
+2. **Context**: Why this matters to them
+3. **Value**: The useful content
+4. **CTA**: What to do next
+5. **Sign-off**: Human, warm close
+
+## Formatting
+
+- Short paragraphs (1-3 sentences)
+- White space between sections
+- Bullet points for scannability
+- Bold for emphasis (sparingly)
+- Mobile-first (most read on phone)
+
+## Tone
+
+- Conversational, not formal
+- First-person (I/we) and second-person (you)
+- Active voice
+- Match your brand but lean friendly
+- Read it out loud—does it sound human?
+
+## Length
+
+- Shorter is usually better
+- 50-125 words for transactional
+- 150-300 words for educational
+- 300-500 words for story-driven
+- If it's long, it had better be good
+
+## CTA Buttons vs. Links
+
+- Buttons: Primary actions, high-visibility
+- Links: Secondary actions, in-text
+- One clear primary CTA per email
+- Button text: Action + outcome
+
+---
+
+## Personalization
+
+### Merge Fields
+- First name (fallback to "there" or "friend")
+- Company name (B2B)
+- Relevant data (usage, plan, etc.)
+
+### Dynamic Content
+- Based on segment
+- Based on behavior
+- Based on stage
+
+### Triggered Emails
+- Action-based sends
+- More relevant than time-based
+- Examples: Feature used, milestone hit, inactivity
+
+---
+
+## Segmentation Strategies
+
+### By Behavior
+- Openers vs. non-openers
+- Clickers vs. non-clickers
+- Active vs. inactive
+
+### By Stage
+- Trial vs. paid
+- New vs. long-term
+- Engaged vs. at-risk
+
+### By Profile
+- Industry/role (B2B)
+- Use case / goal
+- Company size
+
+---
+
+## Testing and Optimization
+
+### What to Test
+- Subject lines (highest impact)
+- Send times
+- Email length
+- CTA placement and copy
+- Personalization level
+- Sequence timing
+
+### How to Test
+- A/B test one variable at a time
+- Sufficient sample size
+- Statistical significance
+- Document learnings
+
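+One way to sanity-check "statistical significance" for an open-rate test is a two-proportion z-test. A minimal sketch (illustrative only; for real decisions use your email platform's built-in testing or a proper stats tool):
+
+```ts
+// Two-proportion z-test for comparing open rates of variants A and B.
+function zTest(opensA: number, sentA: number, opensB: number, sentB: number): number {
+ const pA = opensA / sentA;
+ const pB = opensB / sentB;
+ const pPool = (opensA + opensB) / (sentA + sentB);
+ const se = Math.sqrt(pPool * (1 - pPool) * (1 / sentA + 1 / sentB));
+ return (pA - pB) / se;
+}
+
+// Example: 520/2000 opens vs. 450/2000 opens.
+// |z| > 1.96 means the difference is unlikely to be chance (95% level).
+const z = zTest(520, 2000, 450, 2000);
+```
+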
+### Metrics to Track
+- Open rate (benchmark: 20-40%)
+- Click rate (benchmark: 2-5%)
+- Unsubscribe rate (keep under 0.5%)
+- Conversion rate (specific to sequence goal)
+- Revenue per email (if applicable)
diff --git a/skills/email-sequence/references/email-types.md b/skills/email-sequence/references/email-types.md
new file mode 100644
index 0000000..f1a7288
--- /dev/null
+++ b/skills/email-sequence/references/email-types.md
@@ -0,0 +1,506 @@
+# Email Types Reference
+
+A comprehensive guide to lifecycle and campaign emails. Use this as an audit checklist and implementation reference.
+
+## Onboarding Emails
+
+### New Users Series
+**Trigger**: User signs up (free or trial)
+**Goal**: Activate user, drive to aha moment
+**Typical sequence**: 5-7 emails over 14 days
+
+- Email 1: Welcome + single next step (immediate)
+- Email 2: Quick win / getting started (day 1)
+- Email 3: Key feature highlight (day 3)
+- Email 4: Success story / social proof (day 5)
+- Email 5: Check-in + offer help (day 7)
+- Email 6: Advanced tip (day 10)
+- Email 7: Upgrade prompt or next milestone (day 14)
+
+**Key metrics**: Activation rate, feature adoption
+
+---
+
+### New Customers Series
+**Trigger**: User converts to paid
+**Goal**: Reinforce purchase decision, drive adoption, reduce early churn
+**Typical sequence**: 3-5 emails over 14 days
+
+- Email 1: Thank you + what's next (immediate)
+- Email 2: Getting full value — setup checklist (day 2)
+- Email 3: Pro tips for paid features (day 5)
+- Email 4: Success story from similar customer (day 7)
+- Email 5: Check-in + introduce support resources (day 14)
+
+**Key point**: Different from new user series—they've committed. Focus on reinforcement and expansion, not conversion.
+
+---
+
+### Key Onboarding Step Reminder
+**Trigger**: User hasn't completed critical setup step after X time
+**Goal**: Nudge completion of high-value action
+**Format**: Single email or 2-3 email mini-sequence
+
+**Example triggers**:
+- Hasn't connected integration after 48 hours
+- Hasn't invited team member after 3 days
+- Hasn't completed profile after 24 hours
+
+**Copy approach**:
+- Remind them what they started
+- Explain why this step matters
+- Make it easy (direct link to complete)
+- Offer help if stuck
+
+---
+
+### New User Invite
+**Trigger**: Existing user invites teammate
+**Goal**: Activate the invited user
+**Recipient**: The person being invited
+
+- Email 1: You've been invited (immediate)
+- Email 2: Reminder if not accepted (day 2)
+- Email 3: Final reminder (day 5)
+
+**Copy approach**:
+- Personalize with inviter's name
+- Explain what they're joining
+- Single CTA to accept invite
+- Social proof optional
+
+---
+
+## Retention Emails
+
+### Upgrade to Paid
+**Trigger**: Free user shows engagement, or trial ending
+**Goal**: Convert free to paid
+**Typical sequence**: 3-5 emails
+
+**Trigger options**:
+- Time-based (trial day 10, 12, 14)
+- Behavior-based (hit usage limit, used premium feature)
+- Engagement-based (highly active free user)
+
+**Sequence structure**:
+- Value summary: What they've accomplished
+- Feature comparison: What they're missing
+- Social proof: Who else upgraded
+- Urgency: Trial ending, limited offer
+- Final: Last chance + easy path
+
+---
+
+### Upgrade to Higher Plan
+**Trigger**: User approaching plan limits or using features available on higher tier
+**Goal**: Upsell to next tier
+**Format**: Single email or 2-3 email sequence
+
+**Trigger examples**:
+- 80% of seat limit reached
+- 90% of storage/usage limit
+- Tried to use higher-tier feature
+- Power user behavior patterns
+
+**Copy approach**:
+- Acknowledge their growth (positive framing)
+- Show what next tier unlocks
+- Quantify value vs. cost
+- Easy upgrade path
+
+---
+
+### Ask for Review
+**Trigger**: Customer milestone (30/60/90 days, key achievement, support resolution)
+**Goal**: Generate social proof on G2, Capterra, app stores
+**Format**: Single email
+
+**Best timing**:
+- After positive support interaction
+- After achieving measurable result
+- After renewal
+- NOT after billing issues or bugs
+
+**Copy approach**:
+- Thank them for being a customer
+- Mention specific value/milestone if possible
+- Explain why reviews matter (help others decide)
+- Direct link to review platform
+- Keep it short—this is an ask
+
+---
+
+### Offer Support Proactively
+**Trigger**: Signs of struggle (drop in usage, failed actions, error encounters)
+**Goal**: Save at-risk user, improve experience
+**Format**: Single email
+
+**Trigger examples**:
+- Usage dropped significantly week-over-week
+- Multiple failed attempts at action
+- Viewed help docs repeatedly
+- Stuck at same onboarding step
+
+**Copy approach**:
+- Genuine concern tone
+- Specific: "I noticed you..." (if data allows)
+- Offer direct help (not just link to docs)
+- Personal from support or CSM
+- No sales pitch—pure help
+
+---
+
+### Product Usage Report
+**Trigger**: Time-based (weekly, monthly, quarterly)
+**Goal**: Demonstrate value, drive engagement, reduce churn
+**Format**: Single email, recurring
+
+**What to include**:
+- Key metrics/activity summary
+- Comparison to previous period
+- Achievements/milestones
+- Suggestions for improvement
+- Light CTA to explore more
+
+**Examples**:
+- "You saved X hours this month"
+- "Your team completed X projects"
+- "You're in the top X% of users"
+
+**Key point**: Make them feel good and remind them of value delivered.
+
+---
+
+### NPS Survey
+**Trigger**: Time-based (quarterly) or event-based (post-milestone)
+**Goal**: Measure satisfaction, identify promoters and detractors
+**Format**: Single email
+
+**Best practices**:
+- Keep it simple: Just the NPS question initially
+- Follow-up form for "why" based on score
+- Personal sender (CEO, founder, CSM)
+- Tell them how you'll use feedback
+
+**Follow-up based on score**:
+- Promoters (9-10): Thank + ask for review/referral
+- Passives (7-8): Ask what would make it a 10
+- Detractors (0-6): Personal outreach to understand issues
+
+---
+
+### Referral Program
+**Trigger**: Customer milestone, promoter NPS score, or campaign
+**Goal**: Generate referrals
+**Format**: Single email or periodic reminders
+
+**Good timing**:
+- After positive NPS response
+- After customer achieves result
+- After renewal
+- Seasonal campaigns
+
+**Copy approach**:
+- Remind them of their success
+- Explain the referral offer clearly
+- Make sharing easy (unique link)
+- Show what's in it for them AND referee
+
+---
+
+## Billing Emails
+
+### Switch to Annual
+**Trigger**: Monthly subscriber at renewal time or campaign
+**Goal**: Convert monthly to annual (improve LTV, reduce churn)
+**Format**: Single email or 2-email sequence
+
+**Value proposition**:
+- Calculate exact savings
+- Additional benefits (if any)
+- Lock in current price messaging
+- Easy one-click switch
+
+**Best timing**:
+- Around monthly renewal date
+- End of year / new year
+- After 3-6 months of loyalty
+- Price increase announcement (lock in old rate)
+
+---
+
+### Failed Payment Recovery
+**Trigger**: Payment fails
+**Goal**: Recover revenue, retain customer
+**Typical sequence**: 3-4 emails over 7-14 days
+
+**Sequence structure**:
+- Email 1 (Day 0): Friendly notice, update payment link
+- Email 2 (Day 3): Reminder, service may be interrupted
+- Email 3 (Day 7): Urgent, account will be suspended
+- Email 4 (Day 10-14): Final notice, what they'll lose
+
+**Copy approach**:
+- Assume it's an accident (card expired, etc.)
+- Clear, direct, no guilt
+- Single CTA to update payment
+- Explain what happens if not resolved
+
+**Key metrics**: Recovery rate, time to recovery
+
+---
+
+### Cancellation Survey
+**Trigger**: User cancels subscription
+**Goal**: Learn why, opportunity to save
+**Format**: Single email (immediate)
+
+**Options**:
+- In-app survey at cancellation (better completion)
+- Follow-up email if they skip in-app
+- Personal outreach for high-value accounts
+
+**Questions to ask**:
+- Primary reason for cancelling
+- What could we have done better
+- Would anything change your mind
+- Can we help with transition
+
+**Winback opportunity**: Based on reason, offer targeted save (discount, pause, downgrade, training).
+
+---
+
+### Upcoming Renewal Reminder
+**Trigger**: X days before renewal (14 or 30 days typical)
+**Goal**: No surprise charges, opportunity to expand
+**Format**: Single email
+
+**What to include**:
+- Renewal date and amount
+- What's included in renewal
+- How to update payment/plan
+- Changes to pricing/features (if any)
+- Optional: Upsell opportunity
+
+**Required for**: Annual subscriptions, high-value contracts
+
+---
+
+## Usage Emails
+
+### Daily/Weekly/Monthly Summary
+**Trigger**: Time-based
+**Goal**: Drive engagement, demonstrate value
+**Format**: Single email, recurring
+
+**Content by frequency**:
+- **Daily**: Notifications, quick stats (for high-engagement products)
+- **Weekly**: Activity summary, highlights, suggestions
+- **Monthly**: Comprehensive report, achievements, ROI if calculable
+
+**Structure**:
+- Key metrics at a glance
+- Notable achievements
+- Activity breakdown
+- Suggestions / what to try next
+- CTA to dive deeper
+
+**Personalization**: Must be relevant to their actual usage. Empty reports are worse than no report.
+
+---
+
+### Key Event or Milestone Notifications
+**Trigger**: Specific achievement or event
+**Goal**: Celebrate, drive continued engagement
+**Format**: Single email per event
+
+**Milestone examples**:
+- First [action] completed
+- 10th/100th [thing] created
+- Goal achieved
+- Team collaboration milestone
+- Usage streak
+
+**Copy approach**:
+- Celebration tone
+- Specific achievement
+- Context (compared to others, compared to before)
+- What's next / next milestone
+
+---
+
+## Win-Back Emails
+
+### Expired Trials
+**Trigger**: Trial ended without conversion
+**Goal**: Convert or re-engage
+**Typical sequence**: 3-4 emails over 30 days
+
+**Sequence structure**:
+- Email 1 (Day 1 post-expiry): Trial ended, here's what you're missing
+- Email 2 (Day 7): What held you back? (gather feedback)
+- Email 3 (Day 14): Incentive offer (discount, extended trial)
+- Email 4 (Day 30): Final reach-out, door is open
+
+**Segmentation**: Different approach based on trial engagement level:
+- High engagement: Focus on removing friction to convert
+- Low engagement: Offer fresh start, more onboarding help
+- No engagement: Ask what happened, offer demo/call
+
+---
+
+### Cancelled Customers
+**Trigger**: Time after cancellation (30, 60, 90 days)
+**Goal**: Win back churned customers
+**Typical sequence**: 2-3 emails spread over 90 days
+
+**Sequence structure**:
+- Email 1 (Day 30): What's new since you left
+- Email 2 (Day 60): We've addressed [common reason]
+- Email 3 (Day 90): Special offer to return
+
+**Copy approach**:
+- No guilt, no desperation
+- Genuine updates and improvements
+- Personalize based on cancellation reason if known
+- Make return easy
+
+**Key point**: They're more likely to return if their reason was addressed.
+
+---
+
+## Campaign Emails
+
+### Monthly Roundup / Newsletter
+**Trigger**: Time-based (monthly)
+**Goal**: Engagement, brand presence, content distribution
+**Format**: Single email, recurring
+
+**Content mix**:
+- Product updates and tips
+- Customer stories
+- Educational content
+- Company news
+- Industry insights
+
+**Best practices**:
+- Consistent send day/time
+- Scannable format
+- Mix of content types
+- One primary CTA focus
+- Unsubscribe is okay—keeps list healthy
+
+---
+
+### Seasonal Promotions
+**Trigger**: Calendar events (Black Friday, New Year, etc.)
+**Goal**: Drive conversions with timely offer
+**Format**: Campaign burst (2-4 emails)
+
+**Common opportunities**:
+- New Year (fresh start, annual planning)
+- End of fiscal year (budget spending)
+- Black Friday / Cyber Monday
+- Industry-specific seasons
+- Back to school / work
+
+**Sequence structure**:
+- Announcement: Offer reveal
+- Reminder: Midway through promotion
+- Last chance: Final hours
+
+---
+
+### Product Updates
+**Trigger**: New feature release
+**Goal**: Adoption, engagement, demonstrate momentum
+**Format**: Single email per major release
+
+**What to include**:
+- What's new (clear and simple)
+- Why it matters (benefit, not just feature)
+- How to use it (direct link)
+- Who asked for it (community acknowledgment)
+
+**Segmentation**: Consider targeting based on relevance:
+- Users who would benefit most
+- Users who requested feature
+- Power users first (for beta feel)
+
+---
+
+### Industry News Roundup
+**Trigger**: Time-based (weekly or monthly)
+**Goal**: Thought leadership, engagement, brand value
+**Format**: Curated newsletter
+
+**Content**:
+- Curated news and links
+- Your take / commentary
+- What it means for readers
+- How your product helps
+
+**Best for**: B2B products where customers care about industry trends.
+
+---
+
+### Pricing Update
+**Trigger**: Price change announcement
+**Goal**: Transparent communication, minimize churn
+**Format**: Single email (or sequence for major changes)
+
+**Timeline**:
+- Announce 30-60 days before change
+- Reminder 14 days before
+- Final notice 7 days before
+
+**Copy approach**:
+- Clear, direct, transparent
+- Explain the why (value delivered, costs increased)
+- Grandfather if possible (lock in old rate)
+- Give options (annual lock-in, downgrade)
+
+**Important**: Honesty and advance notice build trust even when price increases.
+
+---
+
+## Email Audit Checklist
+
+Use this to audit your current email program:
+
+### Onboarding
+- [ ] New users series
+- [ ] New customers series
+- [ ] Key onboarding step reminders
+- [ ] New user invite sequence
+
+### Retention
+- [ ] Upgrade to paid sequence
+- [ ] Upgrade to higher plan triggers
+- [ ] Ask for review (timed properly)
+- [ ] Proactive support outreach
+- [ ] Product usage reports
+- [ ] NPS survey
+- [ ] Referral program emails
+
+### Billing
+- [ ] Switch to annual campaign
+- [ ] Failed payment recovery sequence
+- [ ] Cancellation survey
+- [ ] Upcoming renewal reminders
+
+### Usage
+- [ ] Daily/weekly/monthly summaries
+- [ ] Key event notifications
+- [ ] Milestone celebrations
+
+### Win-Back
+- [ ] Expired trial sequence
+- [ ] Cancelled customer sequence
+
+### Campaigns
+- [ ] Monthly roundup / newsletter
+- [ ] Seasonal promotion calendar
+- [ ] Product update announcements
+- [ ] Pricing update communications
diff --git a/skills/email-sequence/references/sequence-templates.md b/skills/email-sequence/references/sequence-templates.md
new file mode 100644
index 0000000..e4f8d0a
--- /dev/null
+++ b/skills/email-sequence/references/sequence-templates.md
@@ -0,0 +1,162 @@
+# Email Sequence Templates
+
+Detailed templates for common email sequences.
+
+## Welcome Sequence (Post-Signup)
+
+**Email 1: Welcome (Immediate)**
+- Subject: Welcome to [Product] — here's your first step
+- Deliver what was promised (lead magnet, access, etc.)
+- Single next action
+- Set expectations for future emails
+
+**Email 2: Quick Win (Day 1-2)**
+- Subject: Get your first [result] in 10 minutes
+- Enable small success
+- Build confidence
+- Link to helpful resource
+
+**Email 3: Story/Why (Day 3-4)**
+- Subject: Why we built [Product]
+- Origin story or mission
+- Connect emotionally
+- Show you understand their problem
+
+**Email 4: Social Proof (Day 5-6)**
+- Subject: How [Customer] achieved [Result]
+- Case study or testimonial
+- Relatable to their situation
+- Soft CTA to explore
+
+**Email 5: Overcome Objection (Day 7-8)**
+- Subject: "I don't have time for X" — sound familiar?
+- Address common hesitation
+- Reframe the obstacle
+- Show easy path forward
+
+**Email 6: Core Feature (Day 9-11)**
+- Subject: Have you tried [Feature] yet?
+- Highlight underused capability
+- Show clear benefit
+- Direct CTA to try it
+
+**Email 7: Conversion (Day 12-14)**
+- Subject: Ready to [upgrade/buy/commit]?
+- Summarize value
+- Clear offer
+- Urgency if appropriate
+- Risk reversal (guarantee, trial)
+
+---
+
+## Lead Nurture Sequence (Pre-Sale)
+
+**Email 1: Deliver + Introduce (Immediate)**
+- Deliver the lead magnet
+- Brief intro to who you are
+- Preview what's coming
+
+**Email 2: Expand on Topic (Day 2-3)**
+- Related insight to lead magnet
+- Establish expertise
+- Light CTA to content
+
+**Email 3: Problem Deep-Dive (Day 4-5)**
+- Articulate their problem deeply
+- Show you understand
+- Hint at solution
+
+**Email 4: Solution Framework (Day 6-8)**
+- Your approach/methodology
+- Educational, not salesy
+- Builds toward your product
+
+**Email 5: Case Study (Day 9-11)**
+- Real results from real customer
+- Specific and relatable
+- Soft CTA
+
+**Email 6: Differentiation (Day 12-14)**
+- Why your approach is different
+- Address alternatives
+- Build preference
+
+**Email 7: Objection Handler (Day 15-18)**
+- Common concern addressed
+- FAQ or myth-busting
+- Reduce friction
+
+**Email 8: Direct Offer (Day 19-21)**
+- Clear pitch
+- Strong value proposition
+- Specific CTA
+- Urgency if available
+
+---
+
+## Re-Engagement Sequence
+
+**Email 1: Check-In (Day 30-60 of inactivity)**
+- Subject: Is everything okay, [Name]?
+- Genuine concern
+- Ask what happened
+- Easy win to re-engage
+
+**Email 2: Value Reminder (Day 2-3 after)**
+- Subject: Remember when you [achieved X]?
+- Remind of past value
+- What's new since they left
+- Quick CTA
+
+**Email 3: Incentive (Day 5-7 after)**
+- Subject: We miss you — here's something special
+- Offer if appropriate
+- Limited time
+- Clear CTA
+
+**Email 4: Last Chance (Day 10-14 after)**
+- Subject: Should we stop emailing you?
+- Honest and direct
+- One-click to stay or go
+- Clean the list if no response
+
+---
+
+## Onboarding Sequence (Product Users)
+
+Coordinate with in-app onboarding. Email supports, doesn't duplicate.
+
+**Email 1: Welcome + First Step (Immediate)**
+- Confirm signup
+- One critical action
+- Link directly to that action
+
+**Email 2: Getting Started Help (Day 1)**
+- If they haven't completed step 1
+- Quick tip or video
+- Support option
+
+**Email 3: Feature Highlight (Day 2-3)**
+- Key feature they should know
+- Specific use case
+- In-app link
+
+**Email 4: Success Story (Day 4-5)**
+- Customer who succeeded
+- Relatable journey
+- Motivational
+
+**Email 5: Check-In (Day 7)**
+- How's it going?
+- Ask for feedback
+- Offer help
+
+**Email 6: Advanced Tip (Day 10-12)**
+- Power feature
+- For engaged users
+- Level-up content
+
+**Email 7: Upgrade/Expand (Day 14+)**
+- For trial users: conversion push
+- For free users: upgrade prompt
+- For paid: expansion opportunity
diff --git a/skills/executing-plans/SKILL.md b/skills/executing-plans/SKILL.md
new file mode 100644
index 0000000..c1b2533
--- /dev/null
+++ b/skills/executing-plans/SKILL.md
@@ -0,0 +1,84 @@
+---
+name: executing-plans
+description: Use when you have a written implementation plan to execute in a separate session with review checkpoints
+---
+
+# Executing Plans
+
+## Overview
+
+Load plan, review critically, execute tasks in batches, report for review between batches.
+
+**Core principle:** Batch execution with checkpoints for architect review.
+
+**Announce at start:** "I'm using the executing-plans skill to implement this plan."
+
+## The Process
+
+### Step 1: Load and Review Plan
+1. Read plan file
+2. Review critically - identify any questions or concerns about the plan
+3. If concerns: Raise them with your human partner before starting
+4. If no concerns: Create TodoWrite and proceed
+
+### Step 2: Execute Batch
+**Default: First 3 tasks**
+
+For each task:
+1. Mark as in_progress
+2. Follow each step exactly (plan has bite-sized steps)
+3. Run verifications as specified
+4. Mark as completed
+
+### Step 3: Report
+When batch complete:
+- Show what was implemented
+- Show verification output
+- Say: "Ready for feedback."
+
+### Step 4: Continue
+Based on feedback:
+- Apply changes if needed
+- Execute next batch
+- Repeat until complete
+
+### Step 5: Complete Development
+
+After all tasks complete and verified:
+- Announce: "I'm using the finishing-a-development-branch skill to complete this work."
+- **REQUIRED SUB-SKILL:** Use superpowers:finishing-a-development-branch
+- Follow that skill to verify tests, present options, execute choice
+
+## When to Stop and Ask for Help
+
+**STOP executing immediately when:**
+- Hit a blocker mid-batch (missing dependency, test fails, instruction unclear)
+- Plan has critical gaps preventing starting
+- You don't understand an instruction
+- Verification fails repeatedly
+
+**Ask for clarification rather than guessing.**
+
+## When to Revisit Earlier Steps
+
+**Return to Review (Step 1) when:**
+- Partner updates the plan based on your feedback
+- Fundamental approach needs rethinking
+
+**Don't force through blockers** - stop and ask.
+
+## Remember
+- Review plan critically first
+- Follow plan steps exactly
+- Don't skip verifications
+- Reference skills when plan says to
+- Between batches: just report and wait
+- Stop when blocked, don't guess
+- Never start implementation on main/master branch without explicit user consent
+
+## Integration
+
+**Required workflow skills:**
+- **superpowers:using-git-worktrees** - REQUIRED: Set up isolated workspace before starting
+- **superpowers:writing-plans** - Creates the plan this skill executes
+- **superpowers:finishing-a-development-branch** - Complete development after all tasks
diff --git a/skills/executing-plans/executing-plans b/skills/executing-plans/executing-plans
new file mode 120000
index 0000000..d623972
--- /dev/null
+++ b/skills/executing-plans/executing-plans
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/executing-plans/
\ No newline at end of file
diff --git a/skills/fastapi/fastapi b/skills/fastapi/fastapi
new file mode 120000
index 0000000..2a4e8e5
--- /dev/null
+++ b/skills/fastapi/fastapi
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/fastapi/
\ No newline at end of file
diff --git a/skills/finishing-a-development-branch/SKILL.md b/skills/finishing-a-development-branch/SKILL.md
new file mode 100644
index 0000000..c308b43
--- /dev/null
+++ b/skills/finishing-a-development-branch/SKILL.md
@@ -0,0 +1,200 @@
+---
+name: finishing-a-development-branch
+description: Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup
+---
+
+# Finishing a Development Branch
+
+## Overview
+
+Guide completion of development work by presenting clear options and handling chosen workflow.
+
+**Core principle:** Verify tests → Present options → Execute choice → Clean up.
+
+**Announce at start:** "I'm using the finishing-a-development-branch skill to complete this work."
+
+## The Process
+
+### Step 1: Verify Tests
+
+**Before presenting options, verify tests pass:**
+
+```bash
+# Run the project's test suite, e.g.:
+npm test        # Node
+cargo test      # Rust
+pytest          # Python
+go test ./...   # Go
+```
+
+**If tests fail:**
+```
+Tests failing (<N> failures). Must fix before completing:
+
+[Show failures]
+
+Cannot proceed with merge/PR until tests pass.
+```
+
+Stop. Don't proceed to Step 2.
+
+**If tests pass:** Continue to Step 2.
+
+### Step 2: Determine Base Branch
+
+```bash
+# Try common base branches
+git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
+```
+
+Or ask: "This branch split from main - is that correct?"
+
+### Step 3: Present Options
+
+Present exactly these 4 options:
+
+```
+Implementation complete. What would you like to do?
+
+1. Merge back to <base-branch> locally
+2. Push and create a Pull Request
+3. Keep the branch as-is (I'll handle it later)
+4. Discard this work
+
+Which option?
+```
+
+**Don't add explanation** - keep options concise.
+
+### Step 4: Execute Choice
+
+#### Option 1: Merge Locally
+
+```bash
+# Switch to the base branch
+git checkout <base-branch>
+
+# Pull latest
+git pull
+
+# Merge the feature branch
+git merge <feature-branch>
+
+# Verify tests on the merged result
+<test command>
+
+# If tests pass, delete the feature branch
+git branch -d <feature-branch>
+```
+
+Then: Cleanup worktree (Step 5)
+
+#### Option 2: Push and Create PR
+
+```bash
+# Push branch
+git push -u origin <feature-branch>
+
+# Create PR
+gh pr create --title "<title>" --body "$(cat <<'EOF'
+## Summary
+<2-3 bullets of what changed>
+
+## Test Plan
+- [ ] <verification step>
+EOF
+)"
+```
+
+Then: Keep the worktree until the PR is merged (see Step 5).
+
+#### Option 3: Keep As-Is
+
+Report: "Keeping branch <branch-name>. Worktree preserved at <path>."
+
+**Don't cleanup worktree.**
+
+#### Option 4: Discard
+
+**Confirm first:**
+```
+This will permanently delete:
+- Branch <branch-name>
+- All commits: <commit summary>
+- Worktree at <path>
+
+Type 'discard' to confirm.
+```
+
+Wait for exact confirmation.
+
+If confirmed:
+```bash
+git checkout <base-branch>
+git branch -D <branch-name>
+```
+
+Then: Cleanup worktree (Step 5)
+
+### Step 5: Cleanup Worktree
+
+**For Options 1 and 4:**
+
+Check if in worktree:
+```bash
+git worktree list | grep "$(git branch --show-current)"
+```
+
+If yes:
+```bash
+git worktree remove <path>
+```
+
+**For Options 2 and 3:** Keep the worktree.
+
+## Quick Reference
+
+| Option | Merge | Push | Keep Worktree | Cleanup Branch |
+|--------|-------|------|---------------|----------------|
+| 1. Merge locally | ✓ | - | - | ✓ |
+| 2. Create PR | - | ✓ | ✓ | - |
+| 3. Keep as-is | - | - | ✓ | - |
+| 4. Discard | - | - | - | ✓ (force) |
+
+## Common Mistakes
+
+**Skipping test verification**
+- **Problem:** Merge broken code, create failing PR
+- **Fix:** Always verify tests before offering options
+
+**Open-ended questions**
+- **Problem:** "What should I do next?" → ambiguous
+- **Fix:** Present exactly 4 structured options
+
+**Automatic worktree cleanup**
+- **Problem:** Removing the worktree while it's still needed (Options 2, 3)
+- **Fix:** Only cleanup for Options 1 and 4
+
+**No confirmation for discard**
+- **Problem:** Accidentally delete work
+- **Fix:** Require typed "discard" confirmation
+
+## Red Flags
+
+**Never:**
+- Proceed with failing tests
+- Merge without verifying tests on result
+- Delete work without confirmation
+- Force-push without explicit request
+
+**Always:**
+- Verify tests before offering options
+- Present exactly 4 options
+- Get typed confirmation for Option 4
+- Clean up worktree for Options 1 & 4 only
+
+## Integration
+
+**Called by:**
+- **subagent-driven-development** (Step 7) - After all tasks complete
+- **executing-plans** (Step 5) - After all batches complete
+
+**Pairs with:**
+- **using-git-worktrees** - Cleans up worktree created by that skill
diff --git a/skills/finishing-a-development-branch/finishing-a-development-branch b/skills/finishing-a-development-branch/finishing-a-development-branch
new file mode 120000
index 0000000..9f51014
--- /dev/null
+++ b/skills/finishing-a-development-branch/finishing-a-development-branch
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/finishing-a-development-branch/
\ No newline at end of file
diff --git a/skills/form-cro/SKILL.md b/skills/form-cro/SKILL.md
new file mode 100644
index 0000000..31cd48e
--- /dev/null
+++ b/skills/form-cro/SKILL.md
@@ -0,0 +1,428 @@
+---
+name: form-cro
+version: 1.0.0
+description: When the user wants to optimize any form that is NOT signup/registration — including lead capture forms, contact forms, demo request forms, application forms, survey forms, or checkout forms. Also use when the user mentions "form optimization," "lead form conversions," "form friction," "form fields," "form completion rate," or "contact form." For signup/registration forms, see signup-flow-cro. For popups containing forms, see popup-cro.
+---
+
+# Form CRO
+
+You are an expert in form optimization. Your goal is to maximize form completion rates while capturing the data that matters.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before providing recommendations, identify:
+
+1. **Form Type**
+ - Lead capture (gated content, newsletter)
+ - Contact form
+ - Demo/sales request
+ - Application form
+ - Survey/feedback
+ - Checkout form
+ - Quote request
+
+2. **Current State**
+ - How many fields?
+ - What's the current completion rate?
+ - Mobile vs. desktop split?
+ - Where do users abandon?
+
+3. **Business Context**
+ - What happens with form submissions?
+ - Which fields are actually used in follow-up?
+ - Are there compliance/legal requirements?
+
+---
+
+## Core Principles
+
+### 1. Every Field Has a Cost
+Each field reduces completion rate. Rule of thumb:
+- 3 fields: Baseline
+- 4-6 fields: 10-25% reduction
+- 7+ fields: 25-50%+ reduction
+
+For each field, ask:
+- Is this absolutely necessary before we can help them?
+- Can we get this information another way?
+- Can we ask this later?
+
+### 2. Value Must Exceed Effort
+- Clear value proposition above form
+- Make what they get obvious
+- Reduce perceived effort (field count, labels)
+
+### 3. Reduce Cognitive Load
+- One question per field
+- Clear, conversational labels
+- Logical grouping and order
+- Smart defaults where possible
+
+---
+
+## Field-by-Field Optimization
+
+### Email Field
+- Single field, no confirmation
+- Inline validation
+- Typo detection (did you mean gmail.com?)
+- Proper mobile keyboard
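+
+The bullets above map to a few HTML attributes. A minimal sketch (labels and placeholder copy are illustrative):
+
+```html
+<label for="email">Email</label>
+<input id="email" type="email" inputmode="email" autocomplete="email"
+       placeholder="name@company.com" required>
+```
+
+`type="email"` brings up the email keyboard on mobile and enables basic format validation, while `autocomplete="email"` lets the browser autofill. Typo detection ("did you mean gmail.com?") needs a small script or library on top; native validation won't catch it.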
+
+### Name Fields
+- Single "Name" vs. First/Last — test this
+- Single field reduces friction
+- Split needed only if personalization requires it
+
+### Phone Number
+- Make optional if possible
+- If required, explain why
+- Auto-format as they type
+- Country code handling
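+
+"Explain why" and the right mobile keypad can be handled in markup alone. A sketch (the hint copy is a placeholder):
+
+```html
+<label for="phone">Phone <span>(optional)</span></label>
+<input id="phone" type="tel" autocomplete="tel" aria-describedby="phone-hint">
+<small id="phone-hint">Only used if we can't reach you by email.</small>
+```
+
+Auto-formatting and country-code handling need a script or library on top of this; `type="tel"` only selects the phone keypad.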
+
+### Company/Organization
+- Auto-suggest for faster entry
+- Enrichment after submission (Clearbit, etc.)
+- Consider inferring from email domain
+
+### Job Title/Role
+- Dropdown if categories matter
+- Free text if wide variation
+- Consider making optional
+
+### Message/Comments (Free Text)
+- Make optional
+- Reasonable character guidance
+- Expand on focus
+
+### Dropdown Selects
+- "Select one..." placeholder
+- Searchable if many options
+- Consider radio buttons if < 5 options
+- "Other" option with text field
+
+### Checkboxes (Multi-select)
+- Clear, parallel labels
+- Reasonable number of options
+- Consider "Select all that apply" instruction
+
+---
+
+## Form Layout Optimization
+
+### Field Order
+1. Start with easiest fields (name, email)
+2. Build commitment before asking more
+3. Sensitive fields last (phone, company size)
+4. Logical grouping if many fields
+
+### Labels and Placeholders
+- Labels: Always visible (not just placeholder)
+- Placeholders: Examples, not labels
+- Help text: Only when genuinely helpful
+
+**Good:**
+```
+Email
+[name@company.com]
+```
+
+**Bad:**
+```
+[Enter your email address] ← Disappears on focus
+```
+
+### Visual Design
+- Sufficient spacing between fields
+- Clear visual hierarchy
+- CTA button stands out
+- Mobile-friendly tap targets (44px+)
+
+### Single Column vs. Multi-Column
+- Single column: Higher completion, mobile-friendly
+- Multi-column: Only for short related fields (First/Last name)
+- When in doubt, single column
+
+---
+
+## Multi-Step Forms
+
+### When to Use Multi-Step
+- More than 5-6 fields
+- Logically distinct sections
+- Conditional paths based on answers
+- Complex forms (applications, quotes)
+
+### Multi-Step Best Practices
+- Progress indicator (step X of Y)
+- Start with easy, end with sensitive
+- One topic per step
+- Allow back navigation
+- Save progress (don't lose data on refresh)
+- Clear indication of required vs. optional
+
+### Progressive Commitment Pattern
+1. Low-friction start (just email)
+2. More detail (name, company)
+3. Qualifying questions
+4. Contact preferences
+
+---
+
+## Error Handling
+
+### Inline Validation
+- Validate as they move to next field
+- Don't validate too aggressively while typing
+- Clear visual indicators (green check, red border)
+
+### Error Messages
+- Specific to the problem
+- Suggest how to fix
+- Positioned near the field
+- Don't clear their input
+
+**Good:** "Please enter a valid email address (e.g., name@company.com)"
+**Bad:** "Invalid input"
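+
+In markup, a sketch of the "Good" pattern positioned near the field and announced to screen readers (ids and copy are illustrative):
+
+```html
+<label for="email">Email</label>
+<input id="email" type="email" aria-invalid="true" aria-describedby="email-error">
+<p id="email-error">Please enter a valid email address (e.g., name@company.com)</p>
+```
+
+`aria-describedby` ties the message to the field, and leaving the input's value untouched preserves what the user typed.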
+
+### On Submit
+- Focus on first error field
+- Summarize errors if multiple
+- Preserve all entered data
+- Don't clear form on error
+
+---
+
+## Submit Button Optimization
+
+### Button Copy
+Weak: "Submit" | "Send"
+Strong: "[Action] + [What they get]"
+
+Examples:
+- "Get My Free Quote"
+- "Download the Guide"
+- "Request Demo"
+- "Send Message"
+- "Start Free Trial"
+
+### Button Placement
+- Immediately after last field
+- Left-aligned with fields
+- Sufficient size and contrast
+- Mobile: Sticky or clearly visible
+
+### Post-Submit States
+- Loading state (disable button, show spinner)
+- Success confirmation (clear next steps)
+- Error handling (clear message, focus on issue)
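+
+The loading state can be as simple as (button copy is illustrative):
+
+```html
+<button type="submit" disabled aria-busy="true">
+  Sending…
+</button>
+```
+
+Disabling prevents double submission, and `aria-busy` announces the in-flight state to assistive tech. Re-enable the button on error so the user can retry.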
+
+---
+
+## Trust and Friction Reduction
+
+### Near the Form
+- Privacy statement: "We'll never share your info"
+- Security badges if collecting sensitive data
+- Testimonial or social proof
+- Expected response time
+
+### Reducing Perceived Effort
+- "Takes 30 seconds"
+- Field count indicator
+- Remove visual clutter
+- Generous white space
+
+### Addressing Objections
+- "No spam, unsubscribe anytime"
+- "We won't share your number"
+- "No credit card required"
+
+---
+
+## Form Types: Specific Guidance
+
+### Lead Capture (Gated Content)
+- Minimum viable fields (often just email)
+- Clear value proposition for what they get
+- Consider asking enrichment questions post-download
+- Test email-only vs. email + name
+
+### Contact Form
+- Essential: Email/Name + Message
+- Phone optional
+- Set response time expectations
+- Offer alternatives (chat, phone)
+
+### Demo Request
+- Name, Email, Company required
+- Phone: Optional with "preferred contact" choice
+- Use case/goal question helps personalize
+- Calendar embed can increase show rate
+
+### Quote/Estimate Request
+- Multi-step often works well
+- Start with easy questions
+- Technical details later
+- Save progress for complex forms
+
+### Survey Forms
+- Progress bar essential
+- One question per screen for engagement
+- Skip logic for relevance
+- Consider incentive for completion
+
+---
+
+## Mobile Optimization
+
+- Larger touch targets (44px minimum height)
+- Appropriate keyboard types (email, tel, number)
+- Autofill support
+- Single column only
+- Sticky submit button
+- Minimal typing (dropdowns, buttons)
+
+---
+
+## Measurement
+
+### Key Metrics
+- **Form start rate**: Page views → Started form
+- **Completion rate**: Started → Submitted
+- **Field drop-off**: Which fields lose people
+- **Error rate**: By field
+- **Time to complete**: Total and by field
+- **Mobile vs. desktop**: Completion by device
+
+### What to Track
+- Form views
+- First field focus
+- Each field completion
+- Errors by field
+- Submit attempts
+- Successful submissions
+
+---
+
+## Output Format
+
+### Form Audit
+For each issue:
+- **Issue**: What's wrong
+- **Impact**: Estimated effect on conversions
+- **Fix**: Specific recommendation
+- **Priority**: High/Medium/Low
+
+### Recommended Form Design
+- **Required fields**: Justified list
+- **Optional fields**: With rationale
+- **Field order**: Recommended sequence
+- **Copy**: Labels, placeholders, button
+- **Error messages**: For each field
+- **Layout**: Visual guidance
+
+### Test Hypotheses
+Ideas to A/B test with expected outcomes
+
+---
+
+## Experiment Ideas
+
+### Form Structure Experiments
+
+**Layout & Flow**
+- Single-step form vs. multi-step with progress bar
+- 1-column vs. 2-column field layout
+- Form embedded on page vs. separate page
+- Vertical vs. horizontal field alignment
+- Form above fold vs. after content
+
+**Field Optimization**
+- Reduce to minimum viable fields
+- Add or remove phone number field
+- Add or remove company/organization field
+- Test required vs. optional field balance
+- Use field enrichment to auto-fill known data
+- Hide fields for returning/known visitors
+
+**Smart Forms**
+- Add real-time validation for emails and phone numbers
+- Progressive profiling (ask more over time)
+- Conditional fields based on earlier answers
+- Auto-suggest for company names
+
+---
+
+### Copy & Design Experiments
+
+**Labels & Microcopy**
+- Test field label clarity and length
+- Placeholder text optimization
+- Help text: show vs. hide vs. on-hover
+- Error message tone (friendly vs. direct)
+
+**CTAs & Buttons**
+- Button text variations ("Submit" vs. "Get My Quote" vs. specific action)
+- Button color and size testing
+- Button placement relative to fields
+
+**Trust Elements**
+- Add privacy assurance near form
+- Show trust badges next to submit
+- Add testimonial near form
+- Display expected response time
+
+---
+
+### Form Type-Specific Experiments
+
+**Demo Request Forms**
+- Test with/without phone number requirement
+- Add "preferred contact method" choice
+- Include "What's your biggest challenge?" question
+- Test calendar embed vs. form submission
+
+**Lead Capture Forms**
+- Email-only vs. email + name
+- Test value proposition messaging above form
+- Gated vs. ungated content strategies
+- Post-submission enrichment questions
+
+**Contact Forms**
+- Add department/topic routing dropdown
+- Test with/without message field requirement
+- Show alternative contact methods (chat, phone)
+- Expected response time messaging
+
+---
+
+### Mobile & UX Experiments
+
+- Larger touch targets for mobile
+- Test appropriate keyboard types by field
+- Sticky submit button on mobile
+- Auto-focus first field on page load
+- Test form container styling (card vs. minimal)
+
+---
+
+## Task-Specific Questions
+
+1. What's your current form completion rate?
+2. Do you have field-level analytics?
+3. What happens with the data after submission?
+4. Which fields are actually used in follow-up?
+5. Are there compliance/legal requirements?
+6. What's the mobile vs. desktop split?
+
+---
+
+## Related Skills
+
+- **signup-flow-cro**: For account creation forms
+- **popup-cro**: For forms inside popups/modals
+- **page-cro**: For the page containing the form
+- **ab-test-setup**: For testing form changes
diff --git a/skills/form-cro/form-cro b/skills/form-cro/form-cro
new file mode 120000
index 0000000..bfb8020
--- /dev/null
+++ b/skills/form-cro/form-cro
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/form-cro/
\ No newline at end of file
diff --git a/skills/free-tool-strategy/SKILL.md b/skills/free-tool-strategy/SKILL.md
new file mode 100644
index 0000000..ec78ca6
--- /dev/null
+++ b/skills/free-tool-strategy/SKILL.md
@@ -0,0 +1,177 @@
+---
+name: free-tool-strategy
+version: 1.0.0
+description: When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or brand awareness. Also use when the user mentions "engineering as marketing," "free tool," "marketing tool," "calculator," "generator," "interactive tool," "lead gen tool," "build a tool for leads," or "free resource." This skill bridges engineering and marketing — useful for founders and technical marketers.
+---
+
+# Free Tool Strategy (Engineering as Marketing)
+
+You are an expert in engineering-as-marketing strategy. Your goal is to help plan and evaluate free tools that generate leads, attract organic traffic, and build brand awareness.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before designing a tool strategy, understand:
+
+1. **Business Context** - What's the core product? Who is the target audience? What problems do they have?
+
+2. **Goals** - Lead generation? SEO/traffic? Brand awareness? Product education?
+
+3. **Resources** - Technical capacity to build? Ongoing maintenance bandwidth? Budget for promotion?
+
+---
+
+## Core Principles
+
+### 1. Solve a Real Problem
+- Tool must provide genuine value
+- Solves a problem your audience actually has
+- Useful even without your main product
+
+### 2. Adjacent to Core Product
+- Related to what you sell
+- Natural path from tool to product
+- Educates on problem you solve
+
+### 3. Simple and Focused
+- Does one thing well
+- Low friction to use
+- Immediate value
+
+### 4. Worth the Investment
+- Lead value × expected leads > build cost + maintenance
+
+---
+
+## Tool Types Overview
+
+| Type | Examples | Best For |
+|------|----------|----------|
+| Calculators | ROI, savings, pricing estimators | Decisions involving numbers |
+| Generators | Templates, policies, names | Creating something quickly |
+| Analyzers | Website graders, SEO auditors | Evaluating existing work |
+| Testers | Meta tag preview, speed tests | Checking if something works |
+| Libraries | Icon sets, templates, snippets | Reference material |
+| Interactive | Tutorials, playgrounds, quizzes | Learning/understanding |
+
+**For detailed tool types and examples**: See [references/tool-types.md](references/tool-types.md)
+
+---
+
+## Ideation Framework
+
+### Start with Pain Points
+
+1. **What problems does your audience Google?** - Search query research, common questions
+
+2. **What manual processes are tedious?** - Spreadsheet tasks, repetitive calculations
+
+3. **What do they need before buying your product?** - Assessments, planning, comparisons
+
+4. **What information do they wish they had?** - Data they can't easily access, benchmarks
+
+### Validate the Idea
+
+- **Search demand**: Is there search volume? How competitive?
+- **Uniqueness**: What exists? How can you be 10x better?
+- **Lead quality**: Does this audience match buyers?
+- **Build feasibility**: How complex? Can you scope an MVP?
+
+---
+
+## Lead Capture Strategy
+
+### Gating Options
+
+| Approach | Pros | Cons |
+|----------|------|------|
+| Fully gated | Maximum capture | Lower usage |
+| Partially gated | Balance of both | Common pattern |
+| Ungated + optional | Maximum reach | Lower capture |
+| Ungated entirely | Pure SEO/brand | No direct leads |
+
+### Lead Capture Best Practices
+- Value exchange clear: "Get your full report"
+- Minimal friction: Email only
+- Show preview of what they'll get
+- Optional: Segment by asking one qualifying question
+
+---
+
+## SEO Considerations
+
+### Keyword Strategy
+**Tool landing page**: "[thing] calculator", "[thing] generator", "free [tool type]"
+
+**Supporting content**: "How to [use case]", "What is [concept]"
+
+### Link Building
+Free tools attract links because:
+- Genuinely useful (people reference them)
+- Unique (can't link to just any page)
+- Shareable (social amplification)
+
+---
+
+## Build vs. Buy
+
+### Build Custom
+When: Unique concept, core to brand, high strategic value, have dev capacity
+
+### Use No-Code Tools
+Options: Outgrow, Involve.me, Typeform, Tally, Bubble, Webflow
+When: Speed to market, limited dev resources, testing concept
+
+### Embed Existing
+When: Something good exists, white-label available, not core differentiator
+
+---
+
+## MVP Scope
+
+### Minimum Viable Tool
+1. Core functionality only—does the one thing, works reliably
+2. Essential UX—clear input, obvious output, mobile works
+3. Basic lead capture—email collection, leads go somewhere useful
+
+### What to Skip Initially
+Account creation, saving results, advanced features, perfect design, every edge case
+
+---
+
+## Evaluation Scorecard
+
+Rate each factor 1-5:
+
+| Factor | Score |
+|--------|-------|
+| Search demand exists | ___ |
+| Audience match to buyers | ___ |
+| Uniqueness vs. existing | ___ |
+| Natural path to product | ___ |
+| Build feasibility | ___ |
+| Maintenance burden (inverse) | ___ |
+| Link-building potential | ___ |
+| Share-worthiness | ___ |
+
+**25+**: Strong candidate | **15-24**: Promising | **<15**: Reconsider
+
+---
+
+## Task-Specific Questions
+
+1. What existing tools does your audience use for workarounds?
+2. How do you currently generate leads?
+3. What technical resources are available?
+4. What's the timeline and budget?
+
+---
+
+## Related Skills
+
+- **page-cro**: For optimizing the tool's landing page
+- **seo-audit**: For SEO-optimizing the tool
+- **analytics-tracking**: For measuring tool usage
+- **email-sequence**: For nurturing leads from the tool
diff --git a/skills/free-tool-strategy/free-tool-strategy b/skills/free-tool-strategy/free-tool-strategy
new file mode 120000
index 0000000..c1be714
--- /dev/null
+++ b/skills/free-tool-strategy/free-tool-strategy
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/free-tool-strategy/
\ No newline at end of file
diff --git a/skills/free-tool-strategy/references/tool-types.md b/skills/free-tool-strategy/references/tool-types.md
new file mode 100644
index 0000000..dde346d
--- /dev/null
+++ b/skills/free-tool-strategy/references/tool-types.md
@@ -0,0 +1,208 @@
+# Free Tool Types Reference
+
+Detailed guide to each type of marketing tool you can build.
+
+## Calculators
+
+**Best for**: Decisions involving numbers, comparisons, estimates
+
+**Examples**:
+- ROI calculator
+- Savings calculator
+- Cost comparison tool
+- Salary calculator
+- Tax estimator
+- Pricing estimator
+- Compound interest calculator
+- Break-even calculator
+
+**Why they work**:
+- Personalized output
+- High perceived value
+- Share-worthy results
+- Clear problem → solution
+
+**Implementation tips**:
+- Keep inputs simple
+- Show calculations transparently
+- Make results shareable
+- Add "powered by" branding
+
+---
+
+## Generators
+
+**Best for**: Creating something useful quickly
+
+**Examples**:
+- Policy generator (privacy, terms)
+- Template generator
+- Name/tagline generator
+- Email subject line generator
+- Resume builder
+- Color palette generator
+- Logo maker
+- Contract generator
+
+**Why they work**:
+- Tangible output
+- Saves time
+- Easily shared
+- Repeat usage
+
+**Implementation tips**:
+- Output should be immediately usable
+- Allow customization
+- Offer download/export options
+- Include email gating for premium outputs
+
+---
+
+## Analyzers/Auditors
+
+**Best for**: Evaluating existing work or assets
+
+**Examples**:
+- Website grader
+- SEO analyzer
+- Email subject tester
+- Headline analyzer
+- Security checker
+- Performance auditor
+- Accessibility checker
+- Code quality analyzer
+
+**Why they work**:
+- Curiosity-driven
+- Personalized insights
+- Creates awareness of problems
+- Natural lead to solution
+
+**Implementation tips**:
+- Score or grade for gamification
+- Benchmark against averages
+- Provide actionable recommendations
+- Follow up with improvement offers
+
+---
+
+## Testers/Validators
+
+**Best for**: Checking if something works
+
+**Examples**:
+- Meta tag preview
+- Email rendering test
+- Mobile-friendly test
+- Speed test
+- DNS checker
+- SSL certificate checker
+- Redirect checker
+- Broken link finder
+
+**Why they work**:
+- Immediate utility
+- Bookmark-worthy
+- Repeat usage
+- Professional necessity
+
+**Implementation tips**:
+- Fast results are essential
+- Show pass/fail clearly
+- Provide fix instructions
+- Integrate with your product where relevant
+
+---
+
+## Libraries/Resources
+
+**Best for**: Reference material
+
+**Examples**:
+- Icon library
+- Template library
+- Code snippet library
+- Example gallery
+- Industry directory
+- Resource list
+- Swipe file collection
+- Font pairing tool
+
+**Why they work**:
+- High SEO value
+- Ongoing traffic
+- Establishes authority
+- Linkable asset
+
+**Implementation tips**:
+- Make searchable/filterable
+- Allow easy copying/downloading
+- Update regularly
+- Accept community submissions
+
+---
+
+## Interactive Educational
+
+**Best for**: Learning/understanding
+
+**Examples**:
+- Interactive tutorials
+- Code playgrounds
+- Visual explainers
+- Quizzes/assessments
+- Simulators
+- Comparison tools
+- Decision trees
+- Configurators
+
+**Why they work**:
+- Engages deeply
+- Demonstrates expertise
+- Shareable
+- Memory-creating
+
+**Implementation tips**:
+- Make it hands-on
+- Show immediate feedback
+- Lead to deeper resources
+- Capture engaged users
+
+---
+
+## Tool Concept Examples by Industry
+
+### SaaS Product
+- Product ROI calculator
+- Competitor comparison tool
+- Readiness assessment quiz
+- Template library for use case
+- Feature configurator
+
+### Agency/Services
+- Industry benchmark tool
+- Project scoping calculator
+- Portfolio review tool
+- Cost estimator
+- Proposal generator
+
+### E-commerce
+- Product finder quiz
+- Comparison tool
+- Size/fit calculator
+- Savings calculator
+- Gift finder
+
+### Developer Tools
+- Code snippet library
+- Testing/preview tool
+- Documentation generator
+- Interactive tutorials
+- API playground
+
+### Finance
+- Financial calculators
+- Investment comparison
+- Budget planner
+- Tax estimator
+- Loan calculator
diff --git a/skills/frontend-design/LICENSE.txt b/skills/frontend-design/LICENSE.txt
new file mode 100644
index 0000000..f433b1a
--- /dev/null
+++ b/skills/frontend-design/LICENSE.txt
@@ -0,0 +1,177 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
diff --git a/skills/frontend-design/SKILL.md b/skills/frontend-design/SKILL.md
new file mode 100644
index 0000000..5be498e
--- /dev/null
+++ b/skills/frontend-design/SKILL.md
@@ -0,0 +1,42 @@
+---
+name: frontend-design
+description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, artifacts, posters, or applications (examples include websites, landing pages, dashboards, React components, HTML/CSS layouts, or when styling/beautifying any web UI). Generates creative, polished code and UI design that avoids generic AI aesthetics.
+license: Complete terms in LICENSE.txt
+---
+
+This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
+
+The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.
+
+## Design Thinking
+
+Before coding, understand the context and commit to a BOLD aesthetic direction:
+- **Purpose**: What problem does this interface solve? Who uses it?
+- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. There are many flavors to choose from; use these for inspiration, but design one that is true to the context.
+- **Constraints**: Technical requirements (framework, performance, accessibility).
+- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?
+
+**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.
+
+Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
+- Production-grade and functional
+- Visually striking and memorable
+- Cohesive with a clear aesthetic point-of-view
+- Meticulously refined in every detail
+
+## Frontend Aesthetics Guidelines
+
+Focus on:
+- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for unexpected, characterful choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
+- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
+- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
+- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
+- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.
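+
+The color-token and motion guidelines above can be sketched in plain CSS. This is a minimal illustration, not a prescribed style; the variable names, fonts, colors, and timings are placeholder assumptions:
+
+```css
+/* Theme tokens: a dominant palette with one sharp accent, defined once. */
+:root {
+  --ink: #1a1a1a;
+  --paper: #f4f1ea;
+  --accent: #e63b2e;
+  --font-display: "Fraunces", serif;    /* distinctive display face */
+  --font-body: "Source Serif 4", serif; /* refined body face */
+}
+
+/* One well-orchestrated page load: staggered reveals via animation-delay. */
+@keyframes rise {
+  from { opacity: 0; transform: translateY(1.5rem); }
+  to   { opacity: 1; transform: none; }
+}
+
+.hero > * { animation: rise 600ms ease-out both; }
+.hero > *:nth-child(2) { animation-delay: 120ms; }
+.hero > *:nth-child(3) { animation-delay: 240ms; }
+```
+
+Fraunces and Source Serif 4 are placeholder choices here; per the guidelines, vary fonts and palettes between projects rather than converging on one combination.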
+
+NEVER use generic AI-generated aesthetics: overused font families (Inter, Roboto, Arial, system fonts), clichéd color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, or cookie-cutter design that lacks context-specific character.
+
+Interpret creatively and make unexpected choices that feel genuinely designed for the context. No two designs should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
+
+**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.
+
+Remember: Claude is capable of extraordinary creative work. Don't hold back; show what can truly be created when thinking outside the box and committing fully to a distinctive vision.
diff --git a/skills/frontend-design/frontend-design b/skills/frontend-design/frontend-design
new file mode 120000
index 0000000..f40b28f
--- /dev/null
+++ b/skills/frontend-design/frontend-design
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/frontend-design/
\ No newline at end of file
diff --git a/skills/internal-comms/LICENSE.txt b/skills/internal-comms/LICENSE.txt
new file mode 100644
index 0000000..7a4a3ea
--- /dev/null
+++ b/skills/internal-comms/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/skills/internal-comms/SKILL.md b/skills/internal-comms/SKILL.md
new file mode 100644
index 0000000..56ea935
--- /dev/null
+++ b/skills/internal-comms/SKILL.md
@@ -0,0 +1,32 @@
+---
+name: internal-comms
+description: A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenever asked to write some sort of internal communications (status reports, leadership updates, 3P updates, company newsletters, FAQs, incident reports, project updates, etc.).
+license: Complete terms in LICENSE.txt
+---
+
+## When to use this skill
+To write internal communications, use this skill for:
+- 3P updates (Progress, Plans, Problems)
+- Company newsletters
+- FAQ responses
+- Status reports
+- Leadership updates
+- Project updates
+- Incident reports
+
+## How to use this skill
+
+To write any internal communication:
+
+1. **Identify the communication type** from the request
+2. **Load the appropriate guideline file** from the `examples/` directory:
+ - `examples/3p-updates.md` - For Progress/Plans/Problems team updates
+ - `examples/company-newsletter.md` - For company-wide newsletters
+ - `examples/faq-answers.md` - For answering frequently asked questions
+ - `examples/general-comms.md` - For anything else that doesn't explicitly match one of the above
+3. **Follow the specific instructions** in that file for formatting, tone, and content gathering
+
+If the communication type doesn't match any existing guideline, ask for clarification or more context about the desired format.
+
+## Keywords
+3P updates, company newsletter, company comms, weekly update, faqs, common questions, updates, internal comms
diff --git a/skills/internal-comms/examples/3p-updates.md b/skills/internal-comms/examples/3p-updates.md
new file mode 100644
index 0000000..5329bfb
--- /dev/null
+++ b/skills/internal-comms/examples/3p-updates.md
@@ -0,0 +1,47 @@
+## Instructions
+You are being asked to write a 3P update. 3P stands for "Progress, Plans, Problems." The main audience is executives, leadership, other teammates, etc. 3Ps are meant to be very succinct and to-the-point: something you can read in 30-60 seconds or less. They're also written for people with some, but not a lot of, context on what the team does.
+
+3Ps can cover a team of any size, ranging all the way up to the entire company. The bigger the team, the less granular the tasks should be. For example, "mobile team" might have "shipped feature" or "fixed bugs," whereas the company might have really meaty 3Ps, like "hired 20 new people" or "closed 10 new deals."
+
+They represent the work of the team across a time period, almost always one week. They include three sections:
+1) Progress: what the team has accomplished over the past time period. Focus mainly on things shipped, milestones achieved, tasks created, etc.
+2) Plans: what the team plans to do over the next time period. Focus on what things are top-of-mind, really high priority, etc. for the team.
+3) Problems: anything that is slowing the team down. This could be things like too few people, bugs or blockers that are preventing the team from moving forward, some deal that fell through, etc.
+
+Before writing them, make sure that you know the team name. If it's not specified, you can ask explicitly what the team name you're writing for is.
+
+
+## Tools Available
+Whenever possible, try to pull from available sources to get the information you need:
+- Slack: posts from team members with their updates - ideally look for posts in large channels with lots of reactions
+- Google Drive: docs written from critical team members with lots of views
+- Email: emails with lots of responses of lots of content that seems relevant
+- Calendar: non-recurring meetings that have a lot of importance, like product reviews, etc.
+
+
+Try to gather as much context as you can, focusing on material that covers the time period you're writing for:
+- Progress: anything between a week ago and today
+- Plans: anything from today to the next week
+- Problems: anything between a week ago and today
+
+
+If you don't have access, you can ask the user what they want to cover. They might also provide these things directly, in which case you're mostly just reformatting their content into this particular format.
+
+## Workflow
+
+1. **Clarify scope**: Confirm the team name and time period (usually the past week for Progress/Problems, the next week for Plans)
+2. **Gather information**: Use available tools or ask the user directly
+3. **Draft the update**: Follow the strict formatting guidelines
+4. **Review**: Ensure it's concise (30-60 seconds to read) and data-driven
+
+## Formatting
+
+The format is always the same and strictly enforced. Never use any formatting other than this. Pick an emoji that is fun and captures the vibe of the team and the update.
+
+[pick an emoji] [Team Name] (Dates Covered, usually a week)
+Progress: [1-3 sentences of content]
+Plans: [1-3 sentences of content]
+Problems: [1-3 sentences of content]
+
+Each section should be no more than 1-3 sentences: clear, to the point. It should be data-driven, and generally include metrics where possible. The tone should be very matter-of-fact, not super prose-heavy.
\ No newline at end of file
diff --git a/skills/internal-comms/examples/company-newsletter.md b/skills/internal-comms/examples/company-newsletter.md
new file mode 100644
index 0000000..4997a07
--- /dev/null
+++ b/skills/internal-comms/examples/company-newsletter.md
@@ -0,0 +1,65 @@
+## Instructions
+You are being asked to write a company-wide newsletter update. You are meant to summarize the past week/month of the company in the form of a newsletter that the entire company will read. It should be roughly 20-25 bullet points long. It will be sent via Slack and email, so make it easy to consume in those formats.
+
+Ideally it includes the following attributes:
+- Lots of links: pulling documents from Google Drive that are very relevant, linking to prominent Slack messages in announcement channels and from executives, perhaps referencing emails that went company-wide, highlighting significant things that have happened in the company.
+- Short and to-the-point: each bullet should probably be no longer than ~1-2 sentences
+- Use the "we" tense, as you are part of the company. Many of the bullets should say "we did this" or "we did that"
+
+## Tools to use
+If you have access to the following tools, please try to use them. If not, let the user know that your responses would be better if they granted access.
+
+- Slack: look for messages in channels with lots of people, with lots of reactions or lots of responses within the thread
+- Email: look for things from executives that discuss company-wide announcements
+- Calendar: if there were meetings with large attendee lists, particularly things like All-Hands meetings, big company announcements, etc. If there were documents attached to those meetings, those are great links to include.
+- Documents: if there were new docs published in the last week or two that got a lot of attention, you can link them. These should be things like company-wide vision docs, plans for the upcoming quarter or half, things authored by critical executives, etc.
+- External press: if you see references to articles or press we've received over the past week, that could be really cool too.
+
+If you don't have access to any of these things, you can ask the user what they want to cover. In that case, you'll mostly be polishing their content and fitting it to this format.
+
+## Sections
+The company is pretty big: 1000+ people. There are a variety of different teams and initiatives going on across the company. To make sure the update works well, try breaking it into sections of similar things. You might break into clusters like {product development, go to market, finance} or {recruiting, execution, vision}, or {external news, internal news} etc. Try to make sure the different areas of the company are highlighted well.
+
+## Prioritization
+Focus on:
+- Company-wide impact (not team-specific details)
+- Announcements from leadership
+- Major milestones and achievements
+- Information that affects most employees
+- External recognition or press
+
+Avoid:
+- Overly granular team updates (save those for 3Ps)
+- Information only relevant to small groups
+- Duplicate information already communicated
+
+## Example Formats
+
+:megaphone: Company Announcements
+- Announcement 1
+- Announcement 2
+- Announcement 3
+
+:dart: Progress on Priorities
+- Area 1
+ - Sub-area 1
+ - Sub-area 2
+ - Sub-area 3
+- Area 2
+ - Sub-area 1
+ - Sub-area 2
+ - Sub-area 3
+- Area 3
+ - Sub-area 1
+ - Sub-area 2
+ - Sub-area 3
+
+:pillar: Leadership Updates
+- Post 1
+- Post 2
+- Post 3
+
+:thread: Social Updates
+- Update 1
+- Update 2
+- Update 3
diff --git a/skills/internal-comms/examples/faq-answers.md b/skills/internal-comms/examples/faq-answers.md
new file mode 100644
index 0000000..395262a
--- /dev/null
+++ b/skills/internal-comms/examples/faq-answers.md
@@ -0,0 +1,30 @@
+## Instructions
+You are an assistant for answering questions that are being asked across the company. Every week, many questions come up across the company, and your goal is to summarize what they are. We want our company to be well-informed and on the same page, so your job is to produce a set of frequently asked questions that our employees are asking and attempt to answer them. Concretely, do two things:
+
+- Find questions that are big sources of confusion for lots of employees at the company, generally about things that affect a large portion of the employee base
+- Give a concise, summarized answer to each question in order to minimize confusion
+
+Some examples of areas that may be interesting to folks: recent corporate events (fundraising, new executives, etc.), upcoming launches, hiring progress, changes to vision or focus, etc.
+
+## Tools Available
+Use the company's available tools, the places where communication and work happen. For most companies, that looks something like this:
+- Slack: questions being asked across the company. Look for questions in reply to posts with lots of responses, questions with lots of reactions or thumbs-ups showing support, or anything else indicating that a large number of employees want to ask the same thing
+- Email: emails with FAQs written directly in them can be a good source as well
+- Documents: docs in places like Google Drive, linked on calendar events, etc. can also be a good source of FAQs, either directly added or inferred based on the contents of the doc
+
+## Formatting
+The formatting should be pretty basic:
+
+- *Question*: [insert question - 1 sentence]
+- *Answer*: [insert answer - 1-2 sentences]
+
+## Guidance
+Make sure you're being holistic in your questions. Don't focus only on the user in question or the team they are a part of; try to capture the entire company. Read all the available tools broadly and produce responses that are relevant to everyone at the company.
+
+## Answer Guidelines
+- Base answers on official company communications when possible
+- If information is uncertain, indicate that clearly
+- Link to authoritative sources (docs, announcements, emails)
+- Keep tone professional but approachable
+- Flag if a question requires executive input or official response
\ No newline at end of file
diff --git a/skills/internal-comms/examples/general-comms.md b/skills/internal-comms/examples/general-comms.md
new file mode 100644
index 0000000..0ea9770
--- /dev/null
+++ b/skills/internal-comms/examples/general-comms.md
@@ -0,0 +1,16 @@
+## Instructions
+You are being asked to write internal company communication that doesn't fit into the standard formats (3P updates, newsletters, or FAQs).
+
+Before proceeding:
+1. Ask the user about their target audience
+2. Understand the communication's purpose
+3. Clarify the desired tone (formal, casual, urgent, informational)
+4. Confirm any specific formatting requirements
+
+Use these general principles:
+- Be clear and concise
+- Use active voice
+- Put the most important information first
+- Include relevant links and references
+- Match the company's communication style
\ No newline at end of file
diff --git a/skills/internal-comms/internal-comms b/skills/internal-comms/internal-comms
new file mode 120000
index 0000000..75bb67a
--- /dev/null
+++ b/skills/internal-comms/internal-comms
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/internal-comms/
\ No newline at end of file
diff --git a/skills/launch-strategy/SKILL.md b/skills/launch-strategy/SKILL.md
new file mode 100644
index 0000000..e974418
--- /dev/null
+++ b/skills/launch-strategy/SKILL.md
@@ -0,0 +1,351 @@
+---
+name: launch-strategy
+version: 1.0.0
+description: "When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature release,' 'announcement,' 'go-to-market,' 'beta launch,' 'early access,' 'waitlist,' or 'product update.' This skill covers phased launches, channel strategy, and ongoing launch momentum."
+---
+
+# Launch Strategy
+
+You are an expert in SaaS product launches and feature announcements. Your goal is to help users plan launches that build momentum, capture attention, and convert interest into users.
+
+## Before Starting
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+---
+
+## Core Philosophy
+
+The best companies don't just launch once—they launch again and again. Every new feature, improvement, and update is an opportunity to capture attention and engage your audience.
+
+A strong launch isn't about a single moment. It's about:
+- Getting your product into users' hands early
+- Learning from real feedback
+- Making a splash at every stage
+- Building momentum that compounds over time
+
+---
+
+## The ORB Framework
+
+Structure your launch marketing across three channel types. Everything should ultimately lead back to owned channels.
+
+### Owned Channels
+You own the channel (though not the audience). Direct access without algorithms or platform rules.
+
+**Examples:**
+- Email list
+- Blog
+- Podcast
+- Branded community (Slack, Discord)
+- Website/product
+
+**Why they matter:**
+- Get more effective over time
+- No algorithm changes or pay-to-play
+- Direct relationship with audience
+- Compound value from content
+
+**Start with 1-2 based on audience:**
+- Industry lacks quality content → Start a blog
+- People want direct updates → Focus on email
+- Engagement matters → Build a community
+
+**Example - Superhuman:**
+Built demand through an invite-only waitlist and one-on-one onboarding sessions. Every new user got a 30-minute live demo. This created exclusivity, FOMO, and word-of-mouth—all through owned relationships. Years later, their original onboarding materials still drive engagement.
+
+### Rented Channels
+Platforms that provide visibility but you don't control. Algorithms shift, rules change, pay-to-play increases.
+
+**Examples:**
+- Social media (Twitter/X, LinkedIn, Instagram)
+- App stores and marketplaces
+- YouTube
+- Reddit
+
+**How to use correctly:**
+- Pick 1-2 platforms where your audience is active
+- Use them to drive traffic to owned channels
+- Don't rely on them as your only strategy
+
+**Example - Notion:**
+Hacked virality through Twitter, YouTube, and Reddit where productivity enthusiasts were active. Encouraged community to share templates and workflows. But they funneled all visibility into owned assets—every viral post led to signups, then targeted email onboarding.
+
+**Platform-specific tactics:**
+- Twitter/X: Threads that spark conversation → link to newsletter
+- LinkedIn: High-value posts → lead to gated content or email signup
+- Marketplaces (Shopify, Slack): Optimize listing → drive to site for more
+
+Rented channels give speed, not stability. Capture momentum by bringing users into your owned ecosystem.
+
+### Borrowed Channels
+Tap into someone else's audience to shortcut the hardest part—getting noticed.
+
+**Examples:**
+- Guest content (blog posts, podcast interviews, newsletter features)
+- Collaborations (webinars, co-marketing, social takeovers)
+- Speaking engagements (conferences, panels, virtual summits)
+- Influencer partnerships
+
+**Be proactive, not passive:**
+1. List industry leaders your audience follows
+2. Pitch win-win collaborations
+3. Use tools like SparkToro or Listen Notes to find audience overlap
+4. Set up affiliate/referral incentives
+
+**Example - TRMNL:**
+Sent a free e-ink display to YouTuber Snazzy Labs—not a paid sponsorship, just hoping he'd like it. He created an in-depth review that racked up 500K+ views and drove $500K+ in sales. They also set up an affiliate program for ongoing promotion.
+
+Borrowed channels give instant credibility, but only work if you convert borrowed attention into owned relationships.
+
+---
+
+## Five-Phase Launch Approach
+
+Launching isn't a one-day event. It's a phased process that builds momentum.
+
+### Phase 1: Internal Launch
+Gather initial feedback and iron out major issues before going public.
+
+**Actions:**
+- Recruit early users one-on-one to test for free
+- Collect feedback on usability gaps and missing features
+- Ensure prototype is functional enough to demo (doesn't need to be production-ready)
+
+**Goal:** Validate core functionality with friendly users.
+
+### Phase 2: Alpha Launch
+Put the product in front of external users in a controlled way.
+
+**Actions:**
+- Create landing page with early access signup form
+- Announce the product exists
+- Invite users individually to start testing
+- MVP should be working in production (even if still evolving)
+
+**Goal:** First external validation and initial waitlist building.
+
+### Phase 3: Beta Launch
+Scale up early access while generating external buzz.
+
+**Actions:**
+- Work through early access list (some free, some paid)
+- Start marketing with teasers about problems you solve
+- Recruit friends, investors, and influencers to test and share
+
+**Consider adding:**
+- Coming soon landing page or waitlist
+- "Beta" sticker in dashboard navigation
+- Email invites to early access list
+- Early access toggle in settings for experimental features
+
+**Goal:** Build buzz and refine product with broader feedback.
+
+### Phase 4: Early Access Launch
+Shift from small-scale testing to controlled expansion.
+
+**Actions:**
+- Leak product details: screenshots, feature GIFs, demos
+- Gather quantitative usage data and qualitative feedback
+- Run user research with engaged users (incentivize with credits)
+- Optionally run product/market fit survey to refine messaging
+
+**Expansion options:**
+- Option A: Throttle invites in batches (5-10% at a time)
+- Option B: Invite all users at once under "early access" framing
+
+**Goal:** Validate at scale and prepare for full launch.
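+
+The batch-throttling option above (Option A) can be sketched as slicing the waitlist into fixed-percentage groups. This is a minimal illustration, not part of any particular product; the 5% default and the example addresses are assumptions:
+
+```python
+def invite_batches(waitlist, batch_pct=5.0):
+    """Split a waitlist into invite batches of roughly batch_pct percent each."""
+    if not waitlist:
+        return []
+    batch_size = max(1, round(len(waitlist) * batch_pct / 100))
+    return [waitlist[i:i + batch_size] for i in range(0, len(waitlist), batch_size)]
+
+signups = [f"user{i}@example.com" for i in range(100)]
+print(len(invite_batches(signups)))  # 100 signups at 5% per batch -> 20 batches
+```
+
+Each batch gets invited and observed for a few days, and the next batch goes out once metrics and feedback look healthy.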
+
+### Phase 5: Full Launch
+Open the floodgates.
+
+**Actions:**
+- Open self-serve signups
+- Start charging (if not already)
+- Announce general availability across all channels
+
+**Launch touchpoints:**
+- Customer emails
+- In-app popups and product tours
+- Website banner linking to launch assets
+- "New" sticker in dashboard navigation
+- Blog post announcement
+- Social posts across platforms
+- Product Hunt, BetaList, Hacker News, etc.
+
+**Goal:** Maximum visibility and conversion to paying users.
+
+---
+
+## Product Hunt Launch Strategy
+
+Product Hunt can be powerful for reaching early adopters, but it's not magic—it requires preparation.
+
+### Pros
+- Exposure to tech-savvy early adopter audience
+- Credibility bump (especially if Product of the Day)
+- Potential PR coverage and backlinks
+
+### Cons
+- Very competitive to rank well
+- Short-lived traffic spikes
+- Requires significant pre-launch planning
+
+### How to Launch Successfully
+
+**Before launch day:**
+1. Build relationships with influential supporters, content hubs, and communities
+2. Optimize your listing: compelling tagline, polished visuals, short demo video
+3. Study successful launches to identify what worked
+4. Engage in relevant communities—provide value before pitching
+5. Prepare your team for all-day engagement
+
+**On launch day:**
+1. Treat it as an all-day event
+2. Respond to every comment in real-time
+3. Answer questions and spark discussions
+4. Encourage your existing audience to engage
+5. Direct traffic back to your site to capture signups
+
+**After launch day:**
+1. Follow up with everyone who engaged
+2. Convert Product Hunt traffic into owned relationships (email signups)
+3. Continue momentum with post-launch content
+
+### Case Studies
+
+**SavvyCal** (Scheduling tool):
+- Optimized landing page and onboarding before launch
+- Built relationships with productivity/SaaS influencers in advance
+- Responded to every comment on launch day
+- Result: #2 Product of the Month
+
+**Reform** (Form builder):
+- Studied successful launches and applied insights
+- Crafted clear tagline, polished visuals, demo video
+- Engaged in communities before launch (provided value first)
+- Treated launch as all-day engagement event
+- Directed traffic to capture signups
+- Result: #1 Product of the Day
+
+---
+
+## Post-Launch Product Marketing
+
+Your launch isn't over when the announcement goes live. Now comes adoption and retention work.
+
+### Immediate Post-Launch Actions
+
+**Educate new users:**
+Set up automated onboarding email sequence introducing key features and use cases.
+
+**Reinforce the launch:**
+Include announcement in your weekly/biweekly/monthly roundup email to catch people who missed it.
+
+**Differentiate against competitors:**
+Publish comparison pages highlighting why you're the obvious choice.
+
+**Update web pages:**
+Add dedicated sections about the new feature/product across your site.
+
+**Offer hands-on preview:**
+Create no-code interactive demo (using tools like Navattic) so visitors can explore before signing up.
+
+### Keep Momentum Going
+It's easier to build on existing momentum than start from scratch. Every touchpoint reinforces the launch.
+
+---
+
+## Ongoing Launch Strategy
+
+Don't rely on a single launch event. Regular updates and feature rollouts sustain engagement.
+
+### How to Prioritize What to Announce
+
+Use these tiers to decide how much marketing each update deserves:
+
+**Major updates** (new features, product overhauls):
+- Full campaign across multiple channels
+- Blog post, email campaign, in-app messages, social media
+- Maximize exposure
+
+**Medium updates** (new integrations, UI enhancements):
+- Targeted announcement
+- Email to relevant segments, in-app banner
+- Don't need full fanfare
+
+**Minor updates** (bug fixes, small tweaks):
+- Changelog and release notes
+- Signal that product is improving
+- Don't dominate marketing
+
+### Announcement Tactics
+
+**Space out releases:**
+Instead of shipping everything at once, stagger announcements to maintain momentum.
+
+**Reuse high-performing tactics:**
+If a previous announcement resonated, apply those insights to future updates.
+
+**Keep engaging:**
+Continue using email, social, and in-app messaging to highlight improvements.
+
+**Signal active development:**
+Even small changelog updates remind customers your product is evolving. This builds retention and word-of-mouth—customers feel confident you'll be around.
+
+---
+
+## Launch Checklist
+
+### Pre-Launch
+- [ ] Landing page with clear value proposition
+- [ ] Email capture / waitlist signup
+- [ ] Early access list built
+- [ ] Owned channels established (email, blog, community)
+- [ ] Rented channel presence (social profiles optimized)
+- [ ] Borrowed channel opportunities identified (podcasts, influencers)
+- [ ] Product Hunt listing prepared (if using)
+- [ ] Launch assets created (screenshots, demo video, GIFs)
+- [ ] Onboarding flow ready
+- [ ] Analytics/tracking in place
+
+### Launch Day
+- [ ] Announcement email to list
+- [ ] Blog post published
+- [ ] Social posts scheduled and posted
+- [ ] Product Hunt listing live (if using)
+- [ ] In-app announcement for existing users
+- [ ] Website banner/notification active
+- [ ] Team ready to engage and respond
+- [ ] Monitor for issues and feedback
+
+### Post-Launch
+- [ ] Onboarding email sequence active
+- [ ] Follow-up with engaged prospects
+- [ ] Roundup email includes announcement
+- [ ] Comparison pages published
+- [ ] Interactive demo created
+- [ ] Gather and act on feedback
+- [ ] Plan next launch moment
+
+---
+
+## Task-Specific Questions
+
+1. What are you launching? (New product, major feature, minor update)
+2. What's your current audience size and engagement?
+3. What owned channels do you have? (Email list size, blog traffic, community)
+4. What's your timeline for launch?
+5. Have you launched before? What worked/didn't work?
+6. Are you considering Product Hunt? What's your preparation status?
+
+---
+
+## Related Skills
+
+- **marketing-ideas**: For additional launch tactics (#22 Product Hunt, #23 Early Access Referrals)
+- **email-sequence**: For launch and onboarding email sequences
+- **page-cro**: For optimizing launch landing pages
+- **marketing-psychology**: For psychology behind waitlists and exclusivity
+- **programmatic-seo**: For comparison pages mentioned in post-launch
diff --git a/skills/launch-strategy/launch-strategy b/skills/launch-strategy/launch-strategy
new file mode 120000
index 0000000..60e2095
--- /dev/null
+++ b/skills/launch-strategy/launch-strategy
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/launch-strategy/
\ No newline at end of file
diff --git a/skills/marketing-ideas/marketing-ideas b/skills/marketing-ideas/marketing-ideas
new file mode 120000
index 0000000..a4f9e49
--- /dev/null
+++ b/skills/marketing-ideas/marketing-ideas
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/marketing-ideas/
\ No newline at end of file
diff --git a/skills/marketing-psychology/SKILL.md b/skills/marketing-psychology/SKILL.md
new file mode 100644
index 0000000..65226ff
--- /dev/null
+++ b/skills/marketing-psychology/SKILL.md
@@ -0,0 +1,454 @@
+---
+name: marketing-psychology
+version: 1.0.0
+description: "When the user wants to apply psychological principles, mental models, or behavioral science to marketing. Also use when the user mentions 'psychology,' 'mental models,' 'cognitive bias,' 'persuasion,' 'behavioral science,' 'why people buy,' 'decision-making,' or 'consumer behavior.' This skill provides 70+ mental models organized for marketing application."
+---
+
+# Marketing Psychology & Mental Models
+
+You are an expert in applying psychological principles and mental models to marketing. Your goal is to help users understand why people buy, how to influence behavior ethically, and how to make better marketing decisions.
+
+## How to Use This Skill
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before applying mental models. Use that context to tailor recommendations to the specific product and audience.
+
+Mental models are thinking tools that help you make better decisions, understand customer behavior, and create more effective marketing. When helping users:
+
+1. Identify which mental models apply to their situation
+2. Explain the psychology behind the model
+3. Provide specific marketing applications
+4. Suggest how to implement ethically
+
+---
+
+## Foundational Thinking Models
+
+These models sharpen your strategy and help you solve the right problems.
+
+### First Principles
+Break problems down to basic truths and build solutions from there. Instead of copying competitors, ask "why" repeatedly to find root causes. Use the 5 Whys technique to drill down to what really matters.
+
+**Marketing application**: Don't assume you need content marketing because competitors do. Ask why you need it, what problem it solves, and whether there's a better solution.
+
+### Jobs to Be Done
+People don't buy products—they "hire" them to get a job done. Focus on the outcome customers want, not features.
+
+**Marketing application**: A drill buyer doesn't want a drill—they want a hole. Frame your product around the job it accomplishes, not its specifications.
+
+### Circle of Competence
+Know what you're good at and stay within it. Venture outside only with proper learning or expert help.
+
+**Marketing application**: Don't chase every channel. Double down where you have genuine expertise and competitive advantage.
+
+### Inversion
+Instead of asking "How do I succeed?", ask "What would guarantee failure?" Then avoid those things.
+
+**Marketing application**: List everything that would make your campaign fail—confusing messaging, wrong audience, slow landing page—then systematically prevent each.
+
+### Occam's Razor
+The simplest explanation is usually correct. Avoid overcomplicating strategies or attributing results to complex causes when simple ones suffice.
+
+**Marketing application**: If conversions dropped, check the obvious first (broken form, page speed) before assuming complex attribution issues.
+
+### Pareto Principle (80/20 Rule)
+Roughly 80% of results come from 20% of efforts. Identify and focus on the vital few.
+
+**Marketing application**: Find the 20% of channels, customers, or content driving 80% of results. Cut or reduce the rest.
+
+### Local vs. Global Optima
+A local optimum is the best solution nearby, but a global optimum is the best overall. Don't get stuck optimizing the wrong thing.
+
+**Marketing application**: Optimizing email subject lines (local) won't help if email isn't the right channel (global). Zoom out before zooming in.
+
+### Theory of Constraints
+Every system has one bottleneck limiting throughput. Find and fix that constraint before optimizing elsewhere.
+
+**Marketing application**: If your funnel converts well but traffic is low, more conversion optimization won't help. Fix the traffic bottleneck first.
+
+### Opportunity Cost
+Every choice has a cost—what you give up by not choosing alternatives. Consider what you're saying no to.
+
+**Marketing application**: Time spent on a low-ROI channel is time not spent on high-ROI activities. Always compare against alternatives.
+
+### Law of Diminishing Returns
+After a point, additional investment yields progressively smaller gains.
+
+**Marketing application**: The 10th blog post won't have the same impact as the first. Know when to diversify rather than double down.
+
+### Second-Order Thinking
+Consider not just immediate effects, but the effects of those effects.
+
+**Marketing application**: A flash sale boosts revenue (first order) but may train customers to wait for discounts (second order).
+
+### Map ≠ Territory
+Models and data represent reality but aren't reality itself. Don't confuse your analytics dashboard with actual customer experience.
+
+**Marketing application**: Your customer persona is a useful model, but real customers are more complex. Stay in touch with actual users.
+
+### Probabilistic Thinking
+Think in probabilities, not certainties. Estimate likelihoods and plan for multiple outcomes.
+
+**Marketing application**: Don't bet everything on one campaign. Spread risk and plan for scenarios where your primary strategy underperforms.
+
+### Barbell Strategy
+Combine extreme safety with small high-risk/high-reward bets. Avoid the mediocre middle.
+
+**Marketing application**: Put 80% of budget into proven channels, 20% into experimental bets. Avoid moderate-risk, moderate-reward middle.
+
+---
+
+## Understanding Buyers & Human Psychology
+
+These models explain how customers think, decide, and behave.
+
+### Fundamental Attribution Error
+People attribute others' behavior to character, not circumstances. "They didn't buy because they're not serious" vs. "The checkout was confusing."
+
+**Marketing application**: When customers don't convert, examine your process before blaming them. The problem is usually situational, not personal.
+
+### Mere Exposure Effect
+People prefer things they've seen before. Familiarity breeds liking.
+
+**Marketing application**: Consistent brand presence builds preference over time. Repetition across channels creates comfort and trust.
+
+### Availability Heuristic
+People judge likelihood by how easily examples come to mind. Recent or vivid events seem more common.
+
+**Marketing application**: Case studies and testimonials make success feel more achievable. Make positive outcomes easy to imagine.
+
+### Confirmation Bias
+People seek information confirming existing beliefs and ignore contradictory evidence.
+
+**Marketing application**: Understand what your audience already believes and align messaging accordingly. Fighting beliefs head-on rarely works.
+
+### The Lindy Effect
+The longer something has survived, the longer it's likely to continue. Old ideas often outlast new ones.
+
+**Marketing application**: Proven marketing principles (clear value props, social proof) outlast trendy tactics. Don't abandon fundamentals for fads.
+
+### Mimetic Desire
+People want things because others want them. Desire is socially contagious.
+
+**Marketing application**: Show that desirable people want your product. Waitlists, exclusivity, and social proof trigger mimetic desire.
+
+### Sunk Cost Fallacy
+People continue investing in something because of past investment, even when it's no longer rational.
+
+**Marketing application**: Know when to kill underperforming campaigns. Past spend shouldn't justify future spend if results aren't there.
+
+### Endowment Effect
+People value things more once they own them.
+
+**Marketing application**: Free trials, samples, and freemium models let customers "own" the product, making them reluctant to give it up.
+
+### IKEA Effect
+People value things more when they've put effort into creating them.
+
+**Marketing application**: Let customers customize, configure, or build something. Their investment increases perceived value and commitment.
+
+### Zero-Price Effect
+Free isn't just a low price—it's psychologically different. "Free" triggers irrational preference.
+
+**Marketing application**: Free tiers, free trials, and free shipping have disproportionate appeal. The jump from $1 to $0 is bigger than $2 to $1.
+
+### Hyperbolic Discounting / Present Bias
+People strongly prefer immediate rewards over future ones, even when waiting is more rational.
+
+**Marketing application**: Emphasize immediate benefits ("Start saving time today") over future ones ("You'll see ROI in 6 months").
+
+### Status-Quo Bias
+People prefer the current state of affairs. Change requires effort and feels risky.
+
+**Marketing application**: Reduce friction to switch. Make the transition feel safe and easy. "Import your data in one click."
+
+### Default Effect
+People tend to accept pre-selected options. Defaults are powerful.
+
+**Marketing application**: Pre-select the plan you want customers to choose. Opt-out beats opt-in for subscriptions (ethically applied).
+
+### Paradox of Choice
+Too many options overwhelm and paralyze. Fewer choices often lead to more decisions.
+
+**Marketing application**: Limit options. Three pricing tiers beat seven. Recommend a single "best for most" option.
+
+### Goal-Gradient Effect
+People accelerate effort as they approach a goal. Progress visualization motivates action.
+
+**Marketing application**: Show progress bars, completion percentages, and "almost there" messaging to drive completion.
+
+### Peak-End Rule
+People judge experiences by the peak (best or worst moment) and the end, not the average.
+
+**Marketing application**: Design memorable peaks (surprise upgrades, delightful moments) and strong endings (thank you pages, follow-up emails).
+
+### Zeigarnik Effect
+Unfinished tasks occupy the mind more than completed ones. Open loops create tension.
+
+**Marketing application**: "You're 80% done" creates pull to finish. Incomplete profiles, abandoned carts, and cliffhangers leverage this.
+
+### Pratfall Effect
+Competent people become more likable when they show a small flaw. Perfection is less relatable.
+
+**Marketing application**: Admitting a weakness ("We're not the cheapest, but...") can increase trust and differentiation.
+
+### Curse of Knowledge
+Once you know something, you can't imagine not knowing it. Experts struggle to explain simply.
+
+**Marketing application**: Your product seems obvious to you but confusing to newcomers. Test copy with people unfamiliar with your space.
+
+### Mental Accounting
+People treat money differently based on its source or intended use, even though money is fungible.
+
+**Marketing application**: Frame costs in favorable mental accounts. "$3/day" feels different than "$90/month" even though it's the same.
+
+### Regret Aversion
+People avoid actions that might cause regret, even if the expected outcome is positive.
+
+**Marketing application**: Address regret directly. Money-back guarantees, free trials, and "no commitment" messaging reduce regret fear.
+
+### Bandwagon Effect / Social Proof
+People follow what others are doing. Popularity signals quality and safety.
+
+**Marketing application**: Show customer counts, testimonials, logos, reviews, and "trending" indicators. Numbers create confidence.
+
+---
+
+## Influencing Behavior & Persuasion
+
+These models help you ethically influence customer decisions.
+
+### Reciprocity Principle
+People feel obligated to return favors. Give first, and people want to give back.
+
+**Marketing application**: Free content, free tools, and generous free tiers create reciprocal obligation. Give value before asking for anything.
+
+### Commitment & Consistency
+Once people commit to something, they want to stay consistent with that commitment.
+
+**Marketing application**: Get small commitments first (email signup, free trial). People who've taken one step are more likely to take the next.
+
+### Authority Bias
+People defer to experts and authority figures. Credentials and expertise create trust.
+
+**Marketing application**: Feature expert endorsements, certifications, "featured in" logos, and thought leadership content.
+
+### Liking / Similarity Bias
+People say yes to those they like and those similar to themselves.
+
+**Marketing application**: Use relatable spokespeople, founder stories, and community language. "Built by marketers for marketers" signals similarity.
+
+### Unity Principle
+Shared identity drives influence. "One of us" is powerful.
+
+**Marketing application**: Position your brand as part of the customer's tribe. Use insider language and shared values.
+
+### Scarcity / Urgency Heuristic
+Limited availability increases perceived value. Scarcity signals desirability.
+
+**Marketing application**: Limited-time offers, low-stock warnings, and exclusive access create urgency. Only use when genuine.
+
+### Foot-in-the-Door Technique
+Start with a small request, then escalate. Compliance with small requests leads to compliance with larger ones.
+
+**Marketing application**: Free trial → paid plan → annual plan → enterprise. Each step builds on the last.
+
+### Door-in-the-Face Technique
+Start with an unreasonably large request, then retreat to what you actually want. The contrast makes the second request seem reasonable.
+
+**Marketing application**: Show enterprise pricing first, then reveal the affordable starter plan. The contrast makes it feel like a deal.
+
+### Loss Aversion / Prospect Theory
+Losses feel roughly twice as painful as equivalent gains feel good. People will work harder to avoid losing than to gain.
+
+**Marketing application**: Frame in terms of what they'll lose by not acting. "Don't miss out" beats "You could gain."
+
+### Anchoring Effect
+The first number people see heavily influences subsequent judgments.
+
+**Marketing application**: Show the higher price first (original price, competitor price, enterprise tier) to anchor expectations.
+
+### Decoy Effect
+Adding a third, inferior option makes one of the original two look better.
+
+**Marketing application**: A "decoy" pricing tier that's clearly worse value makes your preferred tier look like the obvious choice.
+
+### Framing Effect
+How something is presented changes how it's perceived. Same facts, different frames.
+
+**Marketing application**: "90% success rate" vs. "10% failure rate" are identical but feel different. Frame positively.
+
+### Contrast Effect
+Things seem different depending on what they're compared to.
+
+**Marketing application**: Show the "before" state clearly. The contrast with your "after" makes improvements vivid.
+
+---
+
+## Pricing Psychology
+
+These models specifically address how people perceive and respond to prices.
+
+### Charm Pricing / Left-Digit Effect
+Prices ending in 9 seem significantly lower than the next round number. $99 feels much cheaper than $100.
+
+**Marketing application**: Use .99 or .95 endings for value-focused products. The left digit dominates perception.
+
+### Rounded-Price (Fluency) Effect
+Round numbers feel premium and are easier to process. $100 signals quality; $99 signals value.
+
+**Marketing application**: Use round prices for premium products ($500/month), charm prices for value products ($497/month).
+
+### Rule of 100
+For prices under $100, percentage discounts seem larger ("20% off"). For prices over $100, absolute discounts seem larger ("$50 off").
+
+**Marketing application**: $80 product: "20% off" beats "$16 off." $500 product: "$100 off" beats "20% off."
+
+### Price Relativity / Good-Better-Best
+People judge prices relative to options presented. A middle tier seems reasonable between cheap and expensive.
+
+**Marketing application**: Three tiers where the middle is your target. The expensive tier makes it look reasonable; the cheap tier provides an anchor.
+
+### Mental Accounting (Pricing)
+Framing the same price differently changes perception.
+
+**Marketing application**: "$1/day" feels cheaper than "$30/month." "Less than your morning coffee" reframes the expense.
+
+---
+
+## Design & Delivery Models
+
+These models help you design effective marketing systems.
+
+### Hick's Law
+Decision time increases with the number and complexity of choices. More options = slower decisions = more abandonment.
+
+**Marketing application**: Simplify choices. One clear CTA beats three. Fewer form fields beat more.
+
+### AIDA Funnel
+Attention → Interest → Desire → Action. The classic customer journey model.
+
+**Marketing application**: Structure pages and campaigns to move through each stage. Capture attention before building desire.
+
+### Rule of 7
+Prospects need roughly 7 touchpoints before converting. One ad rarely converts; sustained presence does.
+
+**Marketing application**: Build multi-touch campaigns across channels. Retargeting, email sequences, and consistent presence compound.
+
+### Nudge Theory / Choice Architecture
+Small changes in how choices are presented significantly influence decisions.
+
+**Marketing application**: Default selections, strategic ordering, and friction reduction guide behavior without restricting choice.
+
+### BJ Fogg Behavior Model
+Behavior = Motivation × Ability × Prompt. All three must be present for action.
+
+**Marketing application**: High motivation but hard to do = won't happen. Easy to do but no prompt = won't happen. Design for all three.
+
+### EAST Framework
+Make desired behaviors: Easy, Attractive, Social, Timely.
+
+**Marketing application**: Reduce friction (easy), make it appealing (attractive), show others doing it (social), ask at the right moment (timely).
+
+### COM-B Model
+Behavior requires: Capability, Opportunity, Motivation.
+
+**Marketing application**: Can they do it (capability)? Is the path clear (opportunity)? Do they want to (motivation)? Address all three.
+
+### Activation Energy
+The initial energy required to start something. High activation energy prevents action even if the task is easy overall.
+
+**Marketing application**: Reduce starting friction. Pre-fill forms, offer templates, show quick wins. Make the first step trivially easy.
+
+### North Star Metric
+One metric that best captures the value you deliver to customers. Focus creates alignment.
+
+**Marketing application**: Identify your North Star (active users, completed projects, revenue per customer) and align all efforts toward it.
+
+### The Cobra Effect
+When incentives backfire and produce the opposite of intended results.
+
+**Marketing application**: Test incentive structures. A referral bonus might attract low-quality referrals gaming the system.
+
+---
+
+## Growth & Scaling Models
+
+These models explain how marketing compounds and scales.
+
+### Feedback Loops
+Output becomes input, creating cycles. Positive loops accelerate growth; negative loops create decline.
+
+**Marketing application**: Build virtuous cycles: more users → more content → better SEO → more users. Identify and strengthen positive loops.
+
+### Compounding
+Small, consistent gains accumulate into large results over time. Early gains matter most.
+
+**Marketing application**: Consistent content, SEO, and brand building compound. Start early; benefits accumulate exponentially.
+
+### Network Effects
+A product becomes more valuable as more people use it.
+
+**Marketing application**: Design features that improve with more users: shared workspaces, integrations, marketplaces, communities.
+
+### Flywheel Effect
+Sustained effort creates momentum that eventually maintains itself. Hard to start, easy to maintain.
+
+**Marketing application**: Content → traffic → leads → customers → case studies → more content. Each element powers the next.
+
+### Switching Costs
+The price (time, money, effort, data) of changing to a competitor. High switching costs create retention.
+
+**Marketing application**: Increase switching costs ethically: integrations, data accumulation, workflow customization, team adoption.
+
+### Exploration vs. Exploitation
+Balance trying new things (exploration) with optimizing what works (exploitation).
+
+**Marketing application**: Don't abandon working channels for shiny new ones, but allocate some budget to experiments.
+
+### Critical Mass / Tipping Point
+The threshold after which growth becomes self-sustaining.
+
+**Marketing application**: Focus resources on reaching critical mass in one segment before expanding. Depth before breadth.
+
+### Survivorship Bias
+Focusing on successes while ignoring failures that aren't visible.
+
+**Marketing application**: Study failed campaigns, not just successful ones. The viral hit you're copying had 99 failures you didn't see.
+
+---
+
+## Quick Reference
+
+When facing a marketing challenge, consider:
+
+| Challenge | Relevant Models |
+|-----------|-----------------|
+| Low conversions | Hick's Law, Activation Energy, BJ Fogg, Friction |
+| Price objections | Anchoring, Framing, Mental Accounting, Loss Aversion |
+| Building trust | Authority, Social Proof, Reciprocity, Pratfall Effect |
+| Increasing urgency | Scarcity, Loss Aversion, Zeigarnik Effect |
+| Retention/churn | Endowment Effect, Switching Costs, Status-Quo Bias |
+| Growth stalling | Theory of Constraints, Local vs Global Optima, Compounding |
+| Decision paralysis | Paradox of Choice, Default Effect, Nudge Theory |
+| Onboarding | Goal-Gradient, IKEA Effect, Commitment & Consistency |
+
+---
+
+## Task-Specific Questions
+
+1. What specific behavior are you trying to influence?
+2. What does your customer believe before encountering your marketing?
+3. Where in the journey (awareness → consideration → decision) is this?
+4. What's currently preventing the desired action?
+5. Have you tested this with real customers?
+
+---
+
+## Related Skills
+
+- **page-cro**: Apply psychology to page optimization
+- **copywriting**: Write copy using psychological principles
+- **popup-cro**: Use triggers and psychology in popups
+- **pricing-page optimization**: See page-cro for pricing psychology
+- **ab-test-setup**: Test psychological hypotheses
diff --git a/skills/marketing-psychology/marketing-psychology b/skills/marketing-psychology/marketing-psychology
new file mode 120000
index 0000000..0466700
--- /dev/null
+++ b/skills/marketing-psychology/marketing-psychology
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/marketing-psychology/
\ No newline at end of file
diff --git a/skills/mcp-builder/LICENSE.txt b/skills/mcp-builder/LICENSE.txt
new file mode 100644
index 0000000..7a4a3ea
--- /dev/null
+++ b/skills/mcp-builder/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/skills/mcp-builder/SKILL.md b/skills/mcp-builder/SKILL.md
new file mode 100644
index 0000000..8a1a77a
--- /dev/null
+++ b/skills/mcp-builder/SKILL.md
@@ -0,0 +1,236 @@
+---
+name: mcp-builder
+description: Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
+license: Complete terms in LICENSE.txt
+---
+
+# MCP Server Development Guide
+
+## Overview
+
+Create MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks.
+
+---
+
+# Process
+
+## 🚀 High-Level Workflow
+
+Creating a high-quality MCP server involves four main phases:
+
+### Phase 1: Deep Research and Planning
+
+#### 1.1 Understand Modern MCP Design
+
+**API Coverage vs. Workflow Tools:**
+Balance comprehensive API endpoint coverage with specialized workflow tools. Workflow tools can be more convenient for specific tasks, while comprehensive coverage gives agents flexibility to compose operations. Performance varies by client—some clients benefit from code execution that combines basic tools, while others work better with higher-level workflows. When uncertain, prioritize comprehensive API coverage.
+
+**Tool Naming and Discoverability:**
+Clear, descriptive tool names help agents find the right tools quickly. Use consistent prefixes (e.g., `github_create_issue`, `github_list_repos`) and action-oriented naming.
+
+**Context Management:**
+Agents benefit from concise tool descriptions and the ability to filter/paginate results. Design tools that return focused, relevant data. Some clients support code execution which can help agents filter and process data efficiently.
+
+**Actionable Error Messages:**
+Error messages should guide agents toward solutions with specific suggestions and next steps.
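+
+The pattern can be sketched as a tiny helper (the wording and tool names are illustrative, not from a real SDK):
+
+```typescript
+// Illustrative sketch: pair every failure with a concrete next step,
+// so the agent knows what to try instead of retrying blindly.
+function actionableError(problem: string, nextStep: string): string {
+  return `Error: ${problem} Suggestion: ${nextStep}`;
+}
+
+console.log(
+  actionableError(
+    "Channel 'C042' was not found.",
+    "Call the list-channels tool to get valid IDs; the channel may be archived.",
+  ),
+);
+```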
+
+#### 1.2 Study MCP Protocol Documentation
+
+**Navigate the MCP specification:**
+
+Start with the sitemap to find relevant pages: `https://modelcontextprotocol.io/sitemap.xml`
+
+Then fetch specific pages with `.md` suffix for markdown format (e.g., `https://modelcontextprotocol.io/specification/draft.md`).
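+
+The suffix convention can be captured in a one-line helper (a sketch, assuming the `.md` convention described above holds for all pages):
+
+```typescript
+// Tiny helper: map a docs page URL to its markdown variant by
+// appending the .md suffix (no-op if it is already present).
+function toMarkdownUrl(pageUrl: string): string {
+  if (pageUrl.endsWith(".md")) return pageUrl;
+  return pageUrl.replace(/\/+$/, "") + ".md";
+}
+
+console.log(toMarkdownUrl("https://modelcontextprotocol.io/specification/draft"));
+// -> https://modelcontextprotocol.io/specification/draft.md
+```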
+
+Key pages to review:
+- Specification overview and architecture
+- Transport mechanisms (streamable HTTP, stdio)
+- Tool, resource, and prompt definitions
+
+#### 1.3 Study Framework Documentation
+
+**Recommended stack:**
+- **Language**: TypeScript. It has high-quality SDK support and good compatibility across execution environments (e.g., MCPB), and AI models generate TypeScript well thanks to its broad usage, static typing, and strong linting tools.
+- **Transport**: Streamable HTTP with stateless JSON for remote servers (simpler to scale and maintain than stateful sessions and streaming responses); stdio for local servers.
+
+**Load framework documentation:**
+
+- **MCP Best Practices**: [📋 View Best Practices](./reference/mcp_best_practices.md) - Core guidelines
+
+**For TypeScript (recommended):**
+- **TypeScript SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
+- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - TypeScript patterns and examples
+
+**For Python:**
+- **Python SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
+- [🐍 Python Guide](./reference/python_mcp_server.md) - Python patterns and examples
+
+#### 1.4 Plan Your Implementation
+
+**Understand the API:**
+Review the service's API documentation to identify key endpoints, authentication requirements, and data models. Use web search and WebFetch as needed.
+
+**Tool Selection:**
+Prioritize comprehensive API coverage. List endpoints to implement, starting with the most common operations.
+
+---
+
+### Phase 2: Implementation
+
+#### 2.1 Set Up Project Structure
+
+See language-specific guides for project setup:
+- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - Project structure, package.json, tsconfig.json
+- [🐍 Python Guide](./reference/python_mcp_server.md) - Module organization, dependencies
+
+#### 2.2 Implement Core Infrastructure
+
+Create shared utilities:
+- API client with authentication
+- Error handling helpers
+- Response formatting (JSON/Markdown)
+- Pagination support
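+
+A minimal sketch of such shared utilities (endpoint paths, the response shape, and the `cursor` parameter are assumptions for illustration, not a real API):
+
+```typescript
+// Minimal shared API client: auth header, actionable errors, and
+// bounded cursor pagination. Shapes and parameter names are illustrative.
+interface Page<T> {
+  items: T[];
+  nextCursor?: string;
+}
+
+function buildUrl(baseUrl: string, path: string, params: Record<string, string>): string {
+  const url = new URL(path, baseUrl);
+  for (const [key, value] of Object.entries(params)) url.searchParams.set(key, value);
+  return url.toString();
+}
+
+class ApiClient {
+  constructor(private baseUrl: string, private token: string) {}
+
+  async get<T>(path: string, params: Record<string, string> = {}): Promise<T> {
+    const res = await fetch(buildUrl(this.baseUrl, path, params), {
+      headers: { Authorization: `Bearer ${this.token}` },
+    });
+    if (!res.ok) {
+      // Actionable error: status plus a hint the calling agent can act on.
+      throw new Error(`GET ${path} failed with ${res.status}. Check the resource ID or reduce the page size.`);
+    }
+    return (await res.json()) as T;
+  }
+
+  // Walk cursor-based pages up to a cap so tools return bounded results.
+  async listAll<T>(path: string, maxPages = 5): Promise<T[]> {
+    const items: T[] = [];
+    let cursor: string | undefined;
+    for (let i = 0; i < maxPages; i++) {
+      const page = await this.get<Page<T>>(path, cursor ? { cursor } : {});
+      items.push(...page.items);
+      if (!page.nextCursor) break;
+      cursor = page.nextCursor;
+    }
+    return items;
+  }
+}
+
+console.log(buildUrl("https://api.example.com", "/v1/items", { cursor: "abc" }));
+// -> https://api.example.com/v1/items?cursor=abc
+```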
+
+#### 2.3 Implement Tools
+
+For each tool:
+
+**Input Schema:**
+- Use Zod (TypeScript) or Pydantic (Python)
+- Include constraints and clear descriptions
+- Add examples in field descriptions
+
+**Output Schema:**
+- Define `outputSchema` where possible for structured data
+- Use `structuredContent` in tool responses (TypeScript SDK feature)
+- Helps clients understand and process tool outputs
+
+**Tool Description:**
+- Concise summary of functionality
+- Parameter descriptions
+- Return type schema
+
+**Implementation:**
+- Async/await for I/O operations
+- Proper error handling with actionable messages
+- Support pagination where applicable
+- Return both text content and structured data when using modern SDKs
+
+**Annotations:**
+- `readOnlyHint`: true/false
+- `destructiveHint`: true/false
+- `idempotentHint`: true/false
+- `openWorldHint`: true/false
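+
+Putting these pieces together, here is a self-contained sketch in the shape the MCP TypeScript SDK uses for `server.registerTool(name, config, handler)`. The SDK and Zod are replaced by minimal local stand-ins so the shape is visible at a glance; the tool name, fields, and data are illustrative, not a real integration:
+
+```typescript
+// Self-contained sketch of tool registration in the SDK's shape.
+// In a real server you would import McpServer from
+// "@modelcontextprotocol/sdk" and z from "zod"; here minimal local
+// types stand in for both.
+interface ToolAnnotations {
+  readOnlyHint: boolean;
+  destructiveHint: boolean;
+  idempotentHint: boolean;
+  openWorldHint: boolean;
+}
+
+interface ToolConfig {
+  description: string;
+  annotations: ToolAnnotations;
+}
+
+interface ToolResult {
+  content: { type: "text"; text: string }[];
+  structuredContent?: unknown;
+}
+
+type ToolHandler = (args: { repo: string; limit: number }) => Promise<ToolResult>;
+
+const registry = new Map<string, { config: ToolConfig; handler: ToolHandler }>();
+
+function registerTool(name: string, config: ToolConfig, handler: ToolHandler): void {
+  registry.set(name, { config, handler });
+}
+
+registerTool(
+  "github_list_issues",
+  {
+    description:
+      "List open issues for a repository. Returns at most `limit` results (default 20, max 100).",
+    annotations: {
+      readOnlyHint: true, // only reads data
+      destructiveHint: false, // never modifies state
+      idempotentHint: true, // repeated calls are safe
+      openWorldHint: true, // talks to an external service
+    },
+  },
+  async ({ repo, limit }) => {
+    // Illustrative data in place of a real API call.
+    const issues = [{ repo, number: 1, title: "Example issue" }].slice(0, limit);
+    return {
+      // Text for clients that only render text...
+      content: [{ type: "text", text: JSON.stringify(issues, null, 2) }],
+      // ...plus structured data for clients that use outputSchema.
+      structuredContent: { issues },
+    };
+  },
+);
+
+console.log(registry.get("github_list_issues")?.config.annotations.readOnlyHint); // true
+```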
+
+---
+
+### Phase 3: Review and Test
+
+#### 3.1 Code Quality
+
+Review for:
+- No duplicated code (DRY principle)
+- Consistent error handling
+- Full type coverage
+- Clear tool descriptions
+
+#### 3.2 Build and Test
+
+**TypeScript:**
+- Run `npm run build` to verify compilation
+- Test with MCP Inspector: `npx @modelcontextprotocol/inspector`
+
+**Python:**
+- Verify syntax: `python -m py_compile your_server.py`
+- Test with MCP Inspector
+
+See language-specific guides for detailed testing approaches and quality checklists.
+
+---
+
+### Phase 4: Create Evaluations
+
+After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
+
+**Load [✅ Evaluation Guide](./reference/evaluation.md) for complete evaluation guidelines.**
+
+#### 4.1 Understand Evaluation Purpose
+
+Use evaluations to test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
+
+#### 4.2 Create 10 Evaluation Questions
+
+To create effective evaluations, follow the process outlined in the evaluation guide:
+
+1. **Tool Inspection**: List available tools and understand their capabilities
+2. **Content Exploration**: Use READ-ONLY operations to explore available data
+3. **Question Generation**: Create 10 complex, realistic questions
+4. **Answer Verification**: Solve each question yourself to verify answers
+
+#### 4.3 Evaluation Requirements
+
+Ensure each question is:
+- **Independent**: Not dependent on other questions
+- **Read-only**: Only non-destructive operations required
+- **Complex**: Requiring multiple tool calls and deep exploration
+- **Realistic**: Based on real use cases humans would care about
+- **Verifiable**: Single, clear answer that can be verified by string comparison
+- **Stable**: Answer won't change over time
+
+#### 4.4 Output Format
+
+Create an XML file with this structure:
+
+```xml
+<evaluation>
+  <qa_pair>
+    <question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
+    <answer>3</answer>
+  </qa_pair>
+</evaluation>
+```
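+
+As a sketch, an eval file in this format (`qa_pair` entries holding a `question` and an `answer`) can be consumed like this; the regex extraction is a shortcut for illustration, and a real harness would use a proper XML parser:
+
+```typescript
+// Extract {question, answer} pairs from an eval XML string with simple
+// regexes. Assumes well-formed qa_pair/question/answer tags.
+interface QAPair {
+  question: string;
+  answer: string;
+}
+
+function parseEvalXml(xml: string): QAPair[] {
+  const pairs: QAPair[] = [];
+  for (const match of xml.matchAll(/<qa_pair>([\s\S]*?)<\/qa_pair>/g)) {
+    const body = match[1];
+    const question = /<question>([\s\S]*?)<\/question>/.exec(body)?.[1]?.trim() ?? "";
+    const answer = /<answer>([\s\S]*?)<\/answer>/.exec(body)?.[1]?.trim() ?? "";
+    pairs.push({ question, answer });
+  }
+  return pairs;
+}
+
+const sample = `<evaluation>
+  <qa_pair>
+    <question>What number X was being determined? Answer with a digit.</question>
+    <answer>3</answer>
+  </qa_pair>
+</evaluation>`;
+
+console.log(parseEvalXml(sample)[0].answer); // 3
+```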
+
+---
+
+# Reference Files
+
+## 📚 Documentation Library
+
+Load these resources as needed during development:
+
+### Core MCP Documentation (Load First)
+- **MCP Protocol**: Start with sitemap at `https://modelcontextprotocol.io/sitemap.xml`, then fetch specific pages with `.md` suffix
+- [📋 MCP Best Practices](./reference/mcp_best_practices.md) - Universal MCP guidelines including:
+ - Server and tool naming conventions
+ - Response format guidelines (JSON vs Markdown)
+ - Pagination best practices
+ - Transport selection (streamable HTTP vs stdio)
+ - Security and error handling standards
+
+### SDK Documentation (Load During Phase 1/2)
+- **Python SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
+- **TypeScript SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
+
+### Language-Specific Implementation Guides (Load During Phase 2)
+- [🐍 Python Implementation Guide](./reference/python_mcp_server.md) - Complete Python/FastMCP guide with:
+ - Server initialization patterns
+ - Pydantic model examples
+ - Tool registration with `@mcp.tool`
+ - Complete working examples
+ - Quality checklist
+
+- [⚡ TypeScript Implementation Guide](./reference/node_mcp_server.md) - Complete TypeScript guide with:
+ - Project structure
+ - Zod schema patterns
+ - Tool registration with `server.registerTool`
+ - Complete working examples
+ - Quality checklist
+
+### Evaluation Guide (Load During Phase 4)
+- [✅ Evaluation Guide](./reference/evaluation.md) - Complete evaluation creation guide with:
+ - Question creation guidelines
+ - Answer verification strategies
+ - XML format specifications
+ - Example questions and answers
+ - Running an evaluation with the provided scripts
diff --git a/skills/mcp-builder/mcp-builder b/skills/mcp-builder/mcp-builder
new file mode 120000
index 0000000..690a406
--- /dev/null
+++ b/skills/mcp-builder/mcp-builder
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/mcp-builder/
\ No newline at end of file
diff --git a/skills/mcp-builder/reference/evaluation.md b/skills/mcp-builder/reference/evaluation.md
new file mode 100644
index 0000000..87e9bb7
--- /dev/null
+++ b/skills/mcp-builder/reference/evaluation.md
@@ -0,0 +1,602 @@
+# MCP Server Evaluation Guide
+
+## Overview
+
+This document provides guidance on creating comprehensive evaluations for MCP servers. Evaluations test whether LLMs can effectively use your MCP server to answer realistic, complex questions using only the tools provided.
+
+---
+
+## Quick Reference
+
+### Evaluation Requirements
+- Create 10 human-readable questions
+- Questions must be READ-ONLY, INDEPENDENT, NON-DESTRUCTIVE
+- Each question requires multiple tool calls (potentially dozens)
+- Answers must be single, verifiable values
+- Answers must be STABLE (won't change over time)
+
+### Output Format
+```xml
+<evaluation>
+  <qa_pair>
+    <question>Your question here</question>
+    <answer>Single verifiable answer</answer>
+  </qa_pair>
+</evaluation>
+```
+
+---
+
+## Purpose of Evaluations
+
+The measure of quality of an MCP server is NOT how well or comprehensively the server implements tools, but how well these implementations (input/output schemas, docstrings/descriptions, functionality) enable LLMs with no other context and access ONLY to the MCP servers to answer realistic and difficult questions.
+
+## Evaluation Overview
+
+Create 10 human-readable questions requiring ONLY READ-ONLY, INDEPENDENT, NON-DESTRUCTIVE, and IDEMPOTENT operations to answer. Each question should be:
+- Realistic
+- Clear and concise
+- Unambiguous
+- Complex, requiring potentially dozens of tool calls or steps
+- Answerable with a single, verifiable value that you identify in advance
+
+## Question Guidelines
+
+### Core Requirements
+
+1. **Questions MUST be independent**
+ - Each question should NOT depend on the answer to any other question
+ - Should not assume prior write operations from processing another question
+
+2. **Questions MUST require ONLY NON-DESTRUCTIVE AND IDEMPOTENT tool use**
+ - Should not instruct or require modifying state to arrive at the correct answer
+
+3. **Questions must be REALISTIC, CLEAR, CONCISE, and COMPLEX**
+ - Must require another LLM to use multiple (potentially dozens of) tools or steps to answer
+
+### Complexity and Depth
+
+4. **Questions must require deep exploration**
+ - Consider multi-hop questions requiring multiple sub-questions and sequential tool calls
+   - Each step should benefit from information found in previous steps
+
+5. **Questions may require extensive paging**
+ - May need paging through multiple pages of results
+ - May require querying old data (1-2 years out-of-date) to find niche information
+ - The questions must be DIFFICULT
+
+6. **Questions must require deep understanding**
+ - Rather than surface-level knowledge
+ - May pose complex ideas as True/False questions requiring evidence
+ - May use multiple-choice format where LLM must search different hypotheses
+
+7. **Questions must not be solvable with straightforward keyword search**
+ - Do not include specific keywords from the target content
+ - Use synonyms, related concepts, or paraphrases
+ - Require multiple searches, analyzing multiple related items, extracting context, then deriving the answer
+
+### Tool Testing
+
+8. **Questions should stress-test tool return values**
+ - May elicit tools returning large JSON objects or lists, overwhelming the LLM
+ - Should require understanding multiple modalities of data:
+ - IDs and names
+ - Timestamps and datetimes (months, days, years, seconds)
+ - File IDs, names, extensions, and mimetypes
+ - URLs, GIDs, etc.
+ - Should probe the tool's ability to return all useful forms of data
+
+9. **Questions should MOSTLY reflect real human use cases**
+ - The kinds of information retrieval tasks that HUMANS assisted by an LLM would care about
+
+10. **Questions may require dozens of tool calls**
+ - This challenges LLMs with limited context
+ - Encourages MCP server tools to reduce information returned
+
+11. **Include ambiguous questions**
+ - May be ambiguous OR require difficult decisions on which tools to call
+ - Force the LLM to potentially make mistakes or misinterpret
+ - Ensure that despite AMBIGUITY, there is STILL A SINGLE VERIFIABLE ANSWER
+
+### Stability
+
+12. **Questions must be designed so the answer DOES NOT CHANGE**
+ - Do not ask questions that rely on "current state" which is dynamic
+ - For example, do not count:
+ - Number of reactions to a post
+ - Number of replies to a thread
+ - Number of members in a channel
+
+13. **DO NOT let the MCP server RESTRICT the kinds of questions you create**
+ - Create challenging and complex questions
+ - Some may not be solvable with the available MCP server tools
+ - Questions may require specific output formats (datetime vs. epoch time, JSON vs. MARKDOWN)
+ - Questions may require dozens of tool calls to complete
+
+## Answer Guidelines
+
+### Verification
+
+1. **Answers must be VERIFIABLE via direct string comparison**
+   - If the answer can be rewritten in many formats, clearly specify the output format in the QUESTION
+ - Examples: "Use YYYY/MM/DD.", "Respond True or False.", "Answer A, B, C, or D and nothing else."
+ - Answer should be a single VERIFIABLE value such as:
+ - User ID, user name, display name, first name, last name
+ - Channel ID, channel name
+ - Message ID, string
+ - URL, title
+ - Numerical quantity
+ - Timestamp, datetime
+ - Boolean (for True/False questions)
+ - Email address, phone number
+ - File ID, file name, file extension
+ - Multiple choice answer
+ - Answers must not require special formatting or complex, structured output
+ - Answer will be verified using DIRECT STRING COMPARISON
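+
+Because answers are graded by direct string comparison, the check can be pictured as the following sketch (a hypothetical helper, not the actual harness code):
+
+```typescript
+// Hypothetical grading sketch: an answer passes only if it matches the
+// expected string exactly after trimming surrounding whitespace.
+function isCorrect(expected: string, actual: string): boolean {
+  return expected.trim() === actual.trim();
+}
+
+isCorrect("Website Redesign", " Website Redesign\n"); // true
+isCorrect("2024/03/01", "March 1, 2024");             // false: same fact, wrong format
+```
+
+This is why date-like answers must have their format pinned in the question itself ("Use YYYY/MM/DD."): equivalent facts in different formats fail the comparison.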
+
+### Readability
+
+2. **Answers should generally prefer HUMAN-READABLE formats**
+ - Examples: names, first name, last name, datetime, file name, message string, URL, yes/no, true/false, a/b/c/d
+ - Rather than opaque IDs (though IDs are acceptable)
+ - The VAST MAJORITY of answers should be human-readable
+
+### Stability
+
+3. **Answers must be STABLE/STATIONARY**
+ - Look at old content (e.g., conversations that have ended, projects that have launched, questions answered)
+ - Create QUESTIONS based on "closed" concepts that will always return the same answer
+ - Questions may ask to consider a fixed time window to insulate from non-stationary answers
+ - Rely on context UNLIKELY to change
+ - Example: if finding a paper name, be SPECIFIC enough so answer is not confused with papers published later
+
+4. **Answers must be CLEAR and UNAMBIGUOUS**
+ - Questions must be designed so there is a single, clear answer
+ - Answer can be derived from using the MCP server tools
+
+### Diversity
+
+5. **Answers must be DIVERSE**
+ - Answer should be a single VERIFIABLE value in diverse modalities and formats
+ - User concept: user ID, user name, display name, first name, last name, email address, phone number
+ - Channel concept: channel ID, channel name, channel topic
+ - Message concept: message ID, message string, timestamp, month, day, year
+
+6. **Answers must NOT be complex structures**
+ - Not a list of values
+ - Not a complex object
+ - Not a list of IDs or strings
+ - Not natural language text
+ - UNLESS the answer can be straightforwardly verified using DIRECT STRING COMPARISON
+ - And can be realistically reproduced
+ - It should be unlikely that an LLM would return the same list in any other order or format
+
+## Evaluation Process
+
+### Step 1: Documentation Inspection
+
+Read the documentation of the target API to understand the available endpoints and functionality:
+- If ambiguity exists, fetch additional information from the web
+- Parallelize this step AS MUCH AS POSSIBLE
+- Ensure each subagent is ONLY examining documentation from the file system or on the web
+
+### Step 2: Tool Inspection
+
+List the tools available in the MCP server:
+- Inspect the MCP server directly
+- Understand input/output schemas, docstrings, and descriptions
+- WITHOUT calling the tools themselves at this stage
+
+### Step 3: Developing Understanding
+
+Repeat steps 1 & 2 until you have a good understanding:
+- Iterate multiple times
+- Think about the kinds of tasks you want to create
+- Refine your understanding
+- At NO stage should you READ the code of the MCP server implementation itself
+- Use your intuition and understanding to create reasonable, realistic, but VERY challenging tasks
+
+### Step 4: Read-Only Content Inspection
+
+After understanding the API and tools, USE the MCP server tools:
+- Inspect content using READ-ONLY and NON-DESTRUCTIVE operations ONLY
+- Goal: identify specific content (e.g., users, channels, messages, projects, tasks) for creating realistic questions
+- Should NOT call any tools that modify state
+- Will NOT read the code of the MCP server implementation itself
+- Parallelize this step with individual sub-agents pursuing independent explorations
+- Ensure each subagent is only performing READ-ONLY, NON-DESTRUCTIVE, and IDEMPOTENT operations
+- BE CAREFUL: SOME TOOLS may return LOTS OF DATA which would cause you to run out of CONTEXT
+- Make INCREMENTAL, SMALL, AND TARGETED tool calls for exploration
+- In all tool call requests, use the `limit` parameter to limit results (<10)
+- Use pagination
+
+### Step 5: Task Generation
+
+After inspecting the content, create 10 human-readable questions:
+- An LLM should be able to answer these with the MCP server
+- Follow all question and answer guidelines above
+
+## Output Format
+
+Each QA pair consists of a question and an answer. The output should be an XML file with this structure:
+
+```xml
+<evaluation>
+  <qa_pair>
+    <question>Find the project created in Q2 2024 with the highest number of completed tasks. What is the project name?</question>
+    <answer>Website Redesign</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Search for issues labeled as "bug" that were closed in March 2024. Which user closed the most issues? Provide their username.</question>
+    <answer>sarah_dev</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Look for pull requests that modified files in the /api directory and were merged between January 1 and January 31, 2024. How many different contributors worked on these PRs?</question>
+    <answer>7</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Find the repository with the most stars that was created before 2023. What is the repository name?</question>
+    <answer>data-pipeline</answer>
+  </qa_pair>
+</evaluation>
+```
+
+## Evaluation Examples
+
+### Good Questions
+
+**Example 1: Multi-hop question requiring deep exploration (GitHub MCP)**
+```xml
+<qa_pair>
+  <question>Find the repository that was archived in Q3 2023 and had previously been the most forked project in the organization. What was the primary programming language used in that repository?</question>
+  <answer>Python</answer>
+</qa_pair>
+```
+
+This question is good because:
+- Requires multiple searches to find archived repositories
+- Needs to identify which had the most forks before archival
+- Requires examining repository details for the language
+- Answer is a simple, verifiable value
+- Based on historical (closed) data that won't change
+
+**Example 2: Requires understanding context without keyword matching (Project Management MCP)**
+```xml
+<qa_pair>
+  <question>Locate the initiative focused on improving customer onboarding that was completed in late 2023. The project lead created a retrospective document after completion. What was the lead's role title at that time?</question>
+  <answer>Product Manager</answer>
+</qa_pair>
+```
+
+This question is good because:
+- Doesn't use specific project name ("initiative focused on improving customer onboarding")
+- Requires finding completed projects from specific timeframe
+- Needs to identify the project lead and their role
+- Requires understanding context from retrospective documents
+- Answer is human-readable and stable
+- Based on completed work (won't change)
+
+**Example 3: Complex aggregation requiring multiple steps (Issue Tracker MCP)**
+```xml
+<qa_pair>
+  <question>Among all bugs reported in January 2024 that were marked as critical priority, which assignee resolved the highest percentage of their assigned bugs within 48 hours? Provide the assignee's username.</question>
+  <answer>alex_eng</answer>
+</qa_pair>
+```
+
+This question is good because:
+- Requires filtering bugs by date, priority, and status
+- Needs to group by assignee and calculate resolution rates
+- Requires understanding timestamps to determine 48-hour windows
+- Tests pagination (potentially many bugs to process)
+- Answer is a single username
+- Based on historical data from specific time period
+
+**Example 4: Requires synthesis across multiple data types (CRM MCP)**
+```xml
+<qa_pair>
+  <question>Find the account that upgraded from the Starter to Enterprise plan in Q4 2023 and had the highest annual contract value. What industry does this account operate in?</question>
+  <answer>Healthcare</answer>
+</qa_pair>
+```
+
+This question is good because:
+- Requires understanding subscription tier changes
+- Needs to identify upgrade events in specific timeframe
+- Requires comparing contract values
+- Must access account industry information
+- Answer is simple and verifiable
+- Based on completed historical transactions
+
+### Poor Questions
+
+**Example 1: Answer changes over time**
+```xml
+<qa_pair>
+  <question>How many open issues are currently assigned to the engineering team?</question>
+  <answer>47</answer>
+</qa_pair>
+```
+
+This question is poor because:
+- The answer will change as issues are created, closed, or reassigned
+- Not based on stable/stationary data
+- Relies on "current state" which is dynamic
+
+**Example 2: Too easy with keyword search**
+```xml
+<qa_pair>
+  <question>Find the pull request with title "Add authentication feature" and tell me who created it.</question>
+  <answer>developer123</answer>
+</qa_pair>
+```
+
+This question is poor because:
+- Can be solved with a straightforward keyword search for exact title
+- Doesn't require deep exploration or understanding
+- No synthesis or analysis needed
+
+**Example 3: Ambiguous answer format**
+```xml
+<qa_pair>
+  <question>List all the repositories that have Python as their primary language.</question>
+  <answer>repo1, repo2, repo3, data-pipeline, ml-tools</answer>
+</qa_pair>
+```
+
+This question is poor because:
+- Answer is a list that could be returned in any order
+- Difficult to verify with direct string comparison
+- LLM might format differently (JSON array, comma-separated, newline-separated)
+- Better to ask for a specific aggregate (count) or superlative (most stars)
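+
+One repaired version of this last example keeps the same underlying data but asks for a superlative with a single stable answer (the repository name below is illustrative):
+
+```xml
+<qa_pair>
+  <question>Among repositories whose primary language is Python and that were created before 2023, which has the most stars? Provide the repository name.</question>
+  <answer>data-pipeline</answer>
+</qa_pair>
+```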
+
+## Verification Process
+
+After creating evaluations:
+
+1. **Examine the XML file** to understand the schema
+2. **Load each task instruction** and, in parallel, solve each task yourself using the MCP server tools to identify the correct answer
+3. **Flag any operations** that require WRITE or DESTRUCTIVE operations
+4. **Accumulate all CORRECT answers** and replace any incorrect answers in the document
+5. **Remove any `<qa_pair>` entries** that require WRITE or DESTRUCTIVE operations
+
+Remember to parallelize solving tasks to avoid running out of context, then accumulate all answers and make changes to the file at the end.
+
+## Tips for Creating Quality Evaluations
+
+1. **Think Hard and Plan Ahead** before generating tasks
+2. **Parallelize Where Opportunity Arises** to speed up the process and manage context
+3. **Focus on Realistic Use Cases** that humans would actually want to accomplish
+4. **Create Challenging Questions** that test the limits of the MCP server's capabilities
+5. **Ensure Stability** by using historical data and closed concepts
+6. **Verify Answers** by solving the questions yourself using the MCP server tools
+7. **Iterate and Refine** based on what you learn during the process
+
+---
+
+# Running Evaluations
+
+After creating your evaluation file, you can use the provided evaluation harness to test your MCP server.
+
+## Setup
+
+1. **Install Dependencies**
+
+ ```bash
+ pip install -r scripts/requirements.txt
+ ```
+
+ Or install manually:
+ ```bash
+ pip install anthropic mcp
+ ```
+
+2. **Set API Key**
+
+ ```bash
+ export ANTHROPIC_API_KEY=your_api_key_here
+ ```
+
+## Evaluation File Format
+
+Evaluation files use XML format with `<qa_pair>` elements:
+
+```xml
+<evaluation>
+  <qa_pair>
+    <question>Find the project created in Q2 2024 with the highest number of completed tasks. What is the project name?</question>
+    <answer>Website Redesign</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Search for issues labeled as "bug" that were closed in March 2024. Which user closed the most issues? Provide their username.</question>
+    <answer>sarah_dev</answer>
+  </qa_pair>
+</evaluation>
+```
+
+## Running Evaluations
+
+The evaluation script (`scripts/evaluation.py`) supports three transport types:
+
+**Important:**
+- **stdio transport**: The evaluation script automatically launches and manages the MCP server process for you. Do not run the server manually.
+- **sse/http transports**: You must start the MCP server separately before running the evaluation. The script connects to the already-running server at the specified URL.
+
+### 1. Local STDIO Server
+
+For locally-run MCP servers (script launches the server automatically):
+
+```bash
+python scripts/evaluation.py \
+ -t stdio \
+ -c python \
+ -a my_mcp_server.py \
+ evaluation.xml
+```
+
+With environment variables:
+```bash
+python scripts/evaluation.py \
+ -t stdio \
+ -c python \
+ -a my_mcp_server.py \
+ -e API_KEY=abc123 \
+ -e DEBUG=true \
+ evaluation.xml
+```
+
+### 2. Server-Sent Events (SSE)
+
+For SSE-based MCP servers (you must start the server first):
+
+```bash
+python scripts/evaluation.py \
+ -t sse \
+ -u https://example.com/mcp \
+ -H "Authorization: Bearer token123" \
+ -H "X-Custom-Header: value" \
+ evaluation.xml
+```
+
+### 3. HTTP (Streamable HTTP)
+
+For HTTP-based MCP servers (you must start the server first):
+
+```bash
+python scripts/evaluation.py \
+ -t http \
+ -u https://example.com/mcp \
+ -H "Authorization: Bearer token123" \
+ evaluation.xml
+```
+
+## Command-Line Options
+
+```
+usage: evaluation.py [-h] [-t {stdio,sse,http}] [-m MODEL] [-c COMMAND]
+ [-a ARGS [ARGS ...]] [-e ENV [ENV ...]] [-u URL]
+ [-H HEADERS [HEADERS ...]] [-o OUTPUT]
+ eval_file
+
+positional arguments:
+ eval_file Path to evaluation XML file
+
+optional arguments:
+ -h, --help Show help message
+ -t, --transport Transport type: stdio, sse, or http (default: stdio)
+ -m, --model Claude model to use (default: claude-3-7-sonnet-20250219)
+ -o, --output Output file for report (default: print to stdout)
+
+stdio options:
+ -c, --command Command to run MCP server (e.g., python, node)
+ -a, --args Arguments for the command (e.g., server.py)
+ -e, --env Environment variables in KEY=VALUE format
+
+sse/http options:
+ -u, --url MCP server URL
+ -H, --header HTTP headers in 'Key: Value' format
+```
+
+## Output
+
+The evaluation script generates a detailed report including:
+
+- **Summary Statistics**:
+ - Accuracy (correct/total)
+ - Average task duration
+ - Average tool calls per task
+ - Total tool calls
+
+- **Per-Task Results**:
+ - Prompt and expected response
+ - Actual response from the agent
+ - Whether the answer was correct (✅/❌)
+ - Duration and tool call details
+ - Agent's summary of its approach
+ - Agent's feedback on the tools
+
+### Save Report to File
+
+```bash
+python scripts/evaluation.py \
+ -t stdio \
+ -c python \
+ -a my_server.py \
+ -o evaluation_report.md \
+ evaluation.xml
+```
+
+## Complete Example Workflow
+
+Here's a complete example of creating and running an evaluation:
+
+1. **Create your evaluation file** (`my_evaluation.xml`):
+
+```xml
+<evaluation>
+  <qa_pair>
+    <question>Find the user who created the most issues in January 2024. What is their username?</question>
+    <answer>alice_developer</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Among all pull requests merged in Q1 2024, which repository had the highest number? Provide the repository name.</question>
+    <answer>backend-api</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Find the project that was completed in December 2023 and had the longest duration from start to finish. How many days did it take?</question>
+    <answer>127</answer>
+  </qa_pair>
+</evaluation>
+```
+
+2. **Install dependencies**:
+
+```bash
+pip install -r scripts/requirements.txt
+export ANTHROPIC_API_KEY=your_api_key
+```
+
+3. **Run evaluation**:
+
+```bash
+python scripts/evaluation.py \
+ -t stdio \
+ -c python \
+ -a github_mcp_server.py \
+ -e GITHUB_TOKEN=ghp_xxx \
+ -o github_eval_report.md \
+ my_evaluation.xml
+```
+
+4. **Review the report** in `github_eval_report.md` to:
+ - See which questions passed/failed
+ - Read the agent's feedback on your tools
+ - Identify areas for improvement
+ - Iterate on your MCP server design
+
+## Troubleshooting
+
+### Connection Errors
+
+If you get connection errors:
+- **STDIO**: Verify the command and arguments are correct
+- **SSE/HTTP**: Check the URL is accessible and headers are correct
+- Ensure any required API keys are set in environment variables or headers
+
+### Low Accuracy
+
+If many evaluations fail:
+- Review the agent's feedback for each task
+- Check if tool descriptions are clear and comprehensive
+- Verify input parameters are well-documented
+- Consider whether tools return too much or too little data
+- Ensure error messages are actionable
+
+### Timeout Issues
+
+If tasks are timing out:
+- Use a more capable model (e.g., `claude-3-7-sonnet-20250219`)
+- Check if tools are returning too much data
+- Verify pagination is working correctly
+- Consider simplifying complex questions
\ No newline at end of file
diff --git a/skills/mcp-builder/reference/mcp_best_practices.md b/skills/mcp-builder/reference/mcp_best_practices.md
new file mode 100644
index 0000000..b9d343c
--- /dev/null
+++ b/skills/mcp-builder/reference/mcp_best_practices.md
@@ -0,0 +1,249 @@
+# MCP Server Best Practices
+
+## Quick Reference
+
+### Server Naming
+- **Python**: `{service}_mcp` (e.g., `slack_mcp`)
+- **Node/TypeScript**: `{service}-mcp-server` (e.g., `slack-mcp-server`)
+
+### Tool Naming
+- Use snake_case with service prefix
+- Format: `{service}_{action}_{resource}`
+- Example: `slack_send_message`, `github_create_issue`
+
+### Response Formats
+- Support both JSON and Markdown formats
+- JSON for programmatic processing
+- Markdown for human readability
+
+### Pagination
+- Always respect `limit` parameter
+- Return `has_more`, `next_offset`, `total_count`
+- Default to 20-50 items
+
+### Transport
+- **Streamable HTTP**: For remote servers, multi-client scenarios
+- **stdio**: For local integrations, command-line tools
+- Avoid SSE (deprecated in favor of streamable HTTP)
+
+---
+
+## Server Naming Conventions
+
+Follow these standardized naming patterns:
+
+**Python**: Use format `{service}_mcp` (lowercase with underscores)
+- Examples: `slack_mcp`, `github_mcp`, `jira_mcp`
+
+**Node/TypeScript**: Use format `{service}-mcp-server` (lowercase with hyphens)
+- Examples: `slack-mcp-server`, `github-mcp-server`, `jira-mcp-server`
+
+The name should be general, descriptive of the service being integrated, easy to infer from the task description, and without version numbers.
+
+---
+
+## Tool Naming and Design
+
+### Tool Naming
+
+1. **Use snake_case**: `search_users`, `create_project`, `get_channel_info`
+2. **Include service prefix**: Anticipate that your MCP server may be used alongside other MCP servers
+ - Use `slack_send_message` instead of just `send_message`
+ - Use `github_create_issue` instead of just `create_issue`
+3. **Be action-oriented**: Start with verbs (get, list, search, create, etc.)
+4. **Be specific**: Avoid generic names that could conflict with other servers
+
+### Tool Design
+
+- Tool descriptions must narrowly and unambiguously describe functionality
+- Descriptions must precisely match actual functionality
+- Provide tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
+- Keep tool operations focused and atomic
+
+---
+
+## Response Formats
+
+All tools that return data should support multiple formats:
+
+### JSON Format (`response_format="json"`)
+- Machine-readable structured data
+- Include all available fields and metadata
+- Consistent field names and types
+- Use for programmatic processing
+
+### Markdown Format (`response_format="markdown"`, typically default)
+- Human-readable formatted text
+- Use headers, lists, and formatting for clarity
+- Convert timestamps to human-readable format
+- Show display names with IDs in parentheses
+- Omit verbose metadata
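+
+A minimal sketch of dual-format rendering (the `User` shape and fields are illustrative):
+
+```typescript
+// Render the same record as machine-readable JSON or human-readable Markdown.
+type User = { id: string; name: string; email: string };
+
+function renderUser(user: User, format: "json" | "markdown"): string {
+  if (format === "json") {
+    return JSON.stringify(user); // every field, stable names and types
+  }
+  // Markdown: display name first, ID in parentheses, verbose metadata omitted
+  return `## ${user.name} (${user.id})\n- **Email**: ${user.email}`;
+}
+```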
+
+---
+
+## Pagination
+
+For tools that list resources:
+
+- **Always respect the `limit` parameter**
+- **Implement pagination**: Use `offset` or cursor-based pagination
+- **Return pagination metadata**: Include `has_more`, `next_offset`/`next_cursor`, `total_count`
+- **Never load all results into memory**: Especially important for large datasets
+- **Default to reasonable limits**: 20-50 items is typical
+
+Example pagination response:
+```json
+{
+ "total": 150,
+ "count": 20,
+ "offset": 0,
+ "items": [...],
+ "has_more": true,
+ "next_offset": 20
+}
+```
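+
+The metadata fields can be derived mechanically from the total match count and the current page; a sketch:
+
+```typescript
+// Compute pagination metadata; next_offset is present only when more results exist.
+function paginationMeta(total: number, offset: number, pageCount: number) {
+  const hasMore = total > offset + pageCount;
+  return {
+    total,
+    count: pageCount,
+    offset,
+    has_more: hasMore,
+    ...(hasMore ? { next_offset: offset + pageCount } : {})
+  };
+}
+```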
+
+---
+
+## Transport Options
+
+### Streamable HTTP
+
+**Best for**: Remote servers, web services, multi-client scenarios
+
+**Characteristics**:
+- Bidirectional communication over HTTP
+- Supports multiple simultaneous clients
+- Can be deployed as a web service
+- Enables server-to-client notifications
+
+**Use when**:
+- Serving multiple clients simultaneously
+- Deploying as a cloud service
+- Integration with web applications
+
+### stdio
+
+**Best for**: Local integrations, command-line tools
+
+**Characteristics**:
+- Standard input/output stream communication
+- Simple setup, no network configuration needed
+- Runs as a subprocess of the client
+
+**Use when**:
+- Building tools for local development environments
+- Integrating with desktop applications
+- Single-user, single-session scenarios
+
+**Note**: stdio servers should NOT log to stdout (use stderr for logging)
+
+### Transport Selection
+
+| Criterion | stdio | Streamable HTTP |
+|-----------|-------|-----------------|
+| **Deployment** | Local | Remote |
+| **Clients** | Single | Multiple |
+| **Complexity** | Low | Medium |
+| **Real-time** | No | Yes |
+
+---
+
+## Security Best Practices
+
+### Authentication and Authorization
+
+**OAuth 2.1**:
+- Use secure OAuth 2.1 with certificates from recognized authorities
+- Validate access tokens before processing requests
+- Only accept tokens specifically intended for your server
+
+**API Keys**:
+- Store API keys in environment variables, never in code
+- Validate keys on server startup
+- Provide clear error messages when authentication fails
+
+### Input Validation
+
+- Sanitize file paths to prevent directory traversal
+- Validate URLs and external identifiers
+- Check parameter sizes and ranges
+- Prevent command injection in system calls
+- Use schema validation (Pydantic/Zod) for all inputs
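+
+For the directory-traversal item, a path sanitizer might look like this sketch:
+
+```typescript
+// Reject absolute paths and parent-directory segments before any file access.
+function sanitizeRelativePath(userPath: string): string {
+  const normalized = userPath.replace(/\\/g, "/"); // treat backslashes like slashes
+  if (normalized.startsWith("/") || normalized.split("/").includes("..")) {
+    throw new Error("Error: path must be relative and must not contain '..'");
+  }
+  return normalized;
+}
+```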
+
+### Error Handling
+
+- Don't expose internal errors to clients
+- Log security-relevant errors server-side
+- Provide helpful but not revealing error messages
+- Clean up resources after errors
+
+### DNS Rebinding Protection
+
+For streamable HTTP servers running locally:
+- Enable DNS rebinding protection
+- Validate the `Origin` header on all incoming connections
+- Bind to `127.0.0.1` rather than `0.0.0.0`
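+
+Both measures fit in a few lines with Node's built-in `http` module (the allowed origin is an assumption for illustration; the server is closed immediately so the sketch exits cleanly):
+
+```typescript
+import { createServer } from "node:http";
+
+const ALLOWED_ORIGIN = "http://localhost:3000"; // assumption: set per deployment
+
+// Browser requests must carry the expected Origin; header-less clients (e.g. curl) pass.
+function originAllowed(origin: string | undefined): boolean {
+  return origin === undefined || origin === ALLOWED_ORIGIN;
+}
+
+const server = createServer((req, res) => {
+  if (!originAllowed(req.headers.origin)) {
+    res.writeHead(403).end("Forbidden origin");
+    return;
+  }
+  res.writeHead(200).end("ok");
+});
+
+// Bind to loopback only; 0.0.0.0 would expose the server beyond this machine.
+server.listen(3000, "127.0.0.1", () => server.close());
+```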
+
+---
+
+## Tool Annotations
+
+Provide annotations to help clients understand tool behavior:
+
+| Annotation | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `readOnlyHint` | boolean | false | Tool does not modify its environment |
+| `destructiveHint` | boolean | true | Tool may perform destructive updates |
+| `idempotentHint` | boolean | false | Repeated calls with same args have no additional effect |
+| `openWorldHint` | boolean | true | Tool interacts with external entities |
+
+**Important**: Annotations are hints, not security guarantees. Clients should not make security-critical decisions based solely on annotations.
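+
+Declared in code, the table above for a read-only search tool becomes a small object passed to `registerTool` alongside the title and description:
+
+```typescript
+// Annotation hints for a read-only, retry-safe search tool.
+const searchToolAnnotations = {
+  readOnlyHint: true,      // only reads data
+  destructiveHint: false,  // never deletes or overwrites
+  idempotentHint: true,    // safe to retry with the same arguments
+  openWorldHint: true      // talks to an external service
+};
+```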
+
+---
+
+## Error Handling
+
+- Use standard JSON-RPC error codes
+- Report tool errors within result objects (not protocol-level errors)
+- Provide helpful, specific error messages with suggested next steps
+- Don't expose internal implementation details
+- Clean up resources properly on errors
+
+Example error handling:
+```typescript
+try {
+ const result = performOperation();
+ return { content: [{ type: "text", text: result }] };
+} catch (error) {
+ return {
+ isError: true,
+ content: [{
+ type: "text",
+ text: `Error: ${error.message}. Try using filter='active_only' to reduce results.`
+ }]
+ };
+}
+```
+
+---
+
+## Testing Requirements
+
+Comprehensive testing should cover:
+
+- **Functional testing**: Verify correct execution with valid/invalid inputs
+- **Integration testing**: Test interaction with external systems
+- **Security testing**: Validate auth, input sanitization, rate limiting
+- **Performance testing**: Check behavior under load, timeouts
+- **Error handling**: Ensure proper error reporting and cleanup
+
+---
+
+## Documentation Requirements
+
+- Provide clear documentation of all tools and capabilities
+- Include working examples (at least 3 per major feature)
+- Document security considerations
+- Specify required permissions and access levels
+- Document rate limits and performance characteristics
diff --git a/skills/mcp-builder/reference/node_mcp_server.md b/skills/mcp-builder/reference/node_mcp_server.md
new file mode 100644
index 0000000..f6e5df9
--- /dev/null
+++ b/skills/mcp-builder/reference/node_mcp_server.md
@@ -0,0 +1,970 @@
+# Node/TypeScript MCP Server Implementation Guide
+
+## Overview
+
+This document provides Node/TypeScript-specific best practices and examples for implementing MCP servers using the MCP TypeScript SDK. It covers project structure, server setup, tool registration patterns, input validation with Zod, error handling, and complete working examples.
+
+---
+
+## Quick Reference
+
+### Key Imports
+```typescript
+import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import express from "express";
+import { z } from "zod";
+```
+
+### Server Initialization
+```typescript
+const server = new McpServer({
+ name: "service-mcp-server",
+ version: "1.0.0"
+});
+```
+
+### Tool Registration Pattern
+```typescript
+server.registerTool(
+ "tool_name",
+ {
+ title: "Tool Display Name",
+ description: "What the tool does",
+ inputSchema: { param: z.string() },
+ outputSchema: { result: z.string() }
+ },
+ async ({ param }) => {
+ const output = { result: `Processed: ${param}` };
+ return {
+ content: [{ type: "text", text: JSON.stringify(output) }],
+ structuredContent: output // Modern pattern for structured data
+ };
+ }
+);
+```
+
+---
+
+## MCP TypeScript SDK
+
+The official MCP TypeScript SDK provides:
+- `McpServer` class for server initialization
+- `registerTool` method for tool registration
+- Zod schema integration for runtime input validation
+- Type-safe tool handler implementations
+
+**IMPORTANT - Use Modern APIs Only:**
+- **DO use**: `server.registerTool()`, `server.registerResource()`, `server.registerPrompt()`
+- **DO NOT use**: Old deprecated APIs such as `server.tool()`, `server.setRequestHandler(ListToolsRequestSchema, ...)`, or manual handler registration
+- The `register*` methods provide better type safety, automatic schema handling, and are the recommended approach
+
+See the MCP SDK documentation in the references for complete details.
+
+## Server Naming Convention
+
+Node/TypeScript MCP servers must follow this naming pattern:
+- **Format**: `{service}-mcp-server` (lowercase with hyphens)
+- **Examples**: `github-mcp-server`, `jira-mcp-server`, `stripe-mcp-server`
+
+The name should be:
+- General (not tied to specific features)
+- Descriptive of the service/API being integrated
+- Easy to infer from the task description
+- Without version numbers or dates
+
+## Project Structure
+
+Create the following structure for Node/TypeScript MCP servers:
+
+```
+{service}-mcp-server/
+├── package.json
+├── tsconfig.json
+├── README.md
+├── src/
+│ ├── index.ts # Main entry point with McpServer initialization
+│ ├── types.ts # TypeScript type definitions and interfaces
+│ ├── tools/ # Tool implementations (one file per domain)
+│ ├── services/ # API clients and shared utilities
+│ ├── schemas/ # Zod validation schemas
+│ └── constants.ts # Shared constants (API_URL, CHARACTER_LIMIT, etc.)
+└── dist/ # Built JavaScript files (entry point: dist/index.js)
+```
+
+## Tool Implementation
+
+### Tool Naming
+
+Use snake_case for tool names (e.g., "search_users", "create_project", "get_channel_info") with clear, action-oriented names.
+
+**Avoid Naming Conflicts**: Include the service context to prevent overlaps:
+- Use "slack_send_message" instead of just "send_message"
+- Use "github_create_issue" instead of just "create_issue"
+- Use "asana_list_tasks" instead of just "list_tasks"
+
+### Tool Structure
+
+Tools are registered using the `registerTool` method with the following requirements:
+- Use Zod schemas for runtime input validation and type safety
+- The `description` field must be explicitly provided - JSDoc comments are NOT automatically extracted
+- Explicitly provide `title`, `description`, `inputSchema`, and `annotations`
+- The `inputSchema` must be a Zod schema object (not a JSON schema)
+- Type all parameters and return values explicitly
+
+```typescript
+import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { z } from "zod";
+
+const server = new McpServer({
+ name: "example-mcp",
+ version: "1.0.0"
+});
+
+// Output formats supported by tools (referenced by the schema below)
+enum ResponseFormat {
+  MARKDOWN = "markdown",
+  JSON = "json"
+}
+
+// Zod schema for input validation
+const UserSearchInputSchema = z.object({
+ query: z.string()
+ .min(2, "Query must be at least 2 characters")
+ .max(200, "Query must not exceed 200 characters")
+ .describe("Search string to match against names/emails"),
+ limit: z.number()
+ .int()
+ .min(1)
+ .max(100)
+ .default(20)
+ .describe("Maximum results to return"),
+ offset: z.number()
+ .int()
+ .min(0)
+ .default(0)
+ .describe("Number of results to skip for pagination"),
+ response_format: z.nativeEnum(ResponseFormat)
+ .default(ResponseFormat.MARKDOWN)
+ .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
+}).strict();
+
+// Type definition from Zod schema
+type UserSearchInput = z.infer<typeof UserSearchInputSchema>;
+
+server.registerTool(
+ "example_search_users",
+ {
+ title: "Search Example Users",
+ description: `Search for users in the Example system by name, email, or team.
+
+This tool searches across all user profiles in the Example platform, supporting partial matches and various search filters. It does NOT create or modify users, only searches existing ones.
+
+Args:
+ - query (string): Search string to match against names/emails
+ - limit (number): Maximum results to return, between 1-100 (default: 20)
+ - offset (number): Number of results to skip for pagination (default: 0)
+ - response_format ('markdown' | 'json'): Output format (default: 'markdown')
+
+Returns:
+ For JSON format: Structured data with schema:
+ {
+ "total": number, // Total number of matches found
+ "count": number, // Number of results in this response
+ "offset": number, // Current pagination offset
+ "users": [
+ {
+ "id": string, // User ID (e.g., "U123456789")
+ "name": string, // Full name (e.g., "John Doe")
+ "email": string, // Email address
+ "team": string, // Team name (optional)
+ "active": boolean // Whether user is active
+ }
+ ],
+ "has_more": boolean, // Whether more results are available
+ "next_offset": number // Offset for next page (if has_more is true)
+ }
+
+Examples:
+ - Use when: "Find all marketing team members" -> params with query="team:marketing"
+ - Use when: "Search for John's account" -> params with query="john"
+ - Don't use when: You need to create a user (use example_create_user instead)
+
+Error Handling:
+ - Returns "Error: Rate limit exceeded" if too many requests (429 status)
+ - Returns "No users found matching '<query>'" if search returns empty`,
+ inputSchema: UserSearchInputSchema,
+ annotations: {
+ readOnlyHint: true,
+ destructiveHint: false,
+ idempotentHint: true,
+ openWorldHint: true
+ }
+ },
+ async (params: UserSearchInput) => {
+ try {
+ // Input validation is handled by Zod schema
+ // Make API request using validated parameters
+ const data = await makeApiRequest(
+ "users/search",
+ "GET",
+ undefined,
+ {
+ q: params.query,
+ limit: params.limit,
+ offset: params.offset
+ }
+ );
+
+ const users = data.users || [];
+ const total = data.total || 0;
+
+ if (!users.length) {
+ return {
+ content: [{
+ type: "text",
+ text: `No users found matching '${params.query}'`
+ }]
+ };
+ }
+
+ // Prepare structured output
+ const output = {
+ total,
+ count: users.length,
+ offset: params.offset,
+ users: users.map((user: any) => ({
+ id: user.id,
+ name: user.name,
+ email: user.email,
+ ...(user.team ? { team: user.team } : {}),
+ active: user.active ?? true
+ })),
+ has_more: total > params.offset + users.length,
+ ...(total > params.offset + users.length ? {
+ next_offset: params.offset + users.length
+ } : {})
+ };
+
+ // Format text representation based on requested format
+ let textContent: string;
+ if (params.response_format === ResponseFormat.MARKDOWN) {
+ const lines = [`# User Search Results: '${params.query}'`, "",
+ `Found ${total} users (showing ${users.length})`, ""];
+ for (const user of users) {
+ lines.push(`## ${user.name} (${user.id})`);
+ lines.push(`- **Email**: ${user.email}`);
+ if (user.team) lines.push(`- **Team**: ${user.team}`);
+ lines.push("");
+ }
+ textContent = lines.join("\n");
+ } else {
+ textContent = JSON.stringify(output, null, 2);
+ }
+
+ return {
+ content: [{ type: "text", text: textContent }],
+ structuredContent: output // Modern pattern for structured data
+ };
+ } catch (error) {
+ return {
+ content: [{
+ type: "text",
+ text: handleApiError(error)
+ }]
+ };
+ }
+ }
+);
+```
+
+## Zod Schemas for Input Validation
+
+Zod provides runtime type validation:
+
+```typescript
+import { z } from "zod";
+
+// Basic schema with validation
+const CreateUserSchema = z.object({
+ name: z.string()
+ .min(1, "Name is required")
+ .max(100, "Name must not exceed 100 characters"),
+ email: z.string()
+ .email("Invalid email format"),
+ age: z.number()
+ .int("Age must be a whole number")
+ .min(0, "Age cannot be negative")
+ .max(150, "Age cannot be greater than 150")
+}).strict(); // Use .strict() to forbid extra fields
+
+// Enums
+enum ResponseFormat {
+ MARKDOWN = "markdown",
+ JSON = "json"
+}
+
+const SearchSchema = z.object({
+ response_format: z.nativeEnum(ResponseFormat)
+ .default(ResponseFormat.MARKDOWN)
+ .describe("Output format")
+});
+
+// Optional fields with defaults
+const PaginationSchema = z.object({
+ limit: z.number()
+ .int()
+ .min(1)
+ .max(100)
+ .default(20)
+ .describe("Maximum results to return"),
+ offset: z.number()
+ .int()
+ .min(0)
+ .default(0)
+ .describe("Number of results to skip")
+});
+```
+
+## Response Format Options
+
+Support multiple output formats for flexibility:
+
+```typescript
+enum ResponseFormat {
+ MARKDOWN = "markdown",
+ JSON = "json"
+}
+
+const inputSchema = z.object({
+ query: z.string(),
+ response_format: z.nativeEnum(ResponseFormat)
+ .default(ResponseFormat.MARKDOWN)
+ .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
+});
+```
+
+**Markdown format**:
+- Use headers, lists, and formatting for clarity
+- Convert timestamps to human-readable format
+- Show display names with IDs in parentheses
+- Omit verbose metadata
+- Group related information logically
+
+**JSON format**:
+- Return complete, structured data suitable for programmatic processing
+- Include all available fields and metadata
+- Use consistent field names and types
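
As a sketch of these conventions (the `User` shape and field names here are illustrative, not part of any real API):

```typescript
interface User {
  id: string;
  name: string;
  email: string;
  team?: string;
  created_at: number; // epoch seconds
}

// Markdown format: display name with ID in parentheses,
// human-readable timestamp, verbose metadata omitted
function formatUserMarkdown(user: User): string {
  const created = new Date(user.created_at * 1000)
    .toISOString()
    .replace("T", " ")
    .replace(/\.\d+Z$/, " UTC");
  const lines = [
    `## ${user.name} (${user.id})`,
    `- **Email**: ${user.email}`
  ];
  if (user.team) lines.push(`- **Team**: ${user.team}`);
  lines.push(`- **Created**: ${created}`);
  return lines.join("\n");
}

// JSON format: complete structured data, all fields included
function formatUserJson(user: User): string {
  return JSON.stringify(user, null, 2);
}
```

A single tool can then pick the formatter based on the validated `response_format` parameter.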
+
+## Pagination Implementation
+
+For tools that list resources:
+
+```typescript
+const ListSchema = z.object({
+ limit: z.number().int().min(1).max(100).default(20),
+ offset: z.number().int().min(0).default(0)
+});
+
+async function listItems(params: z.infer<typeof ListSchema>) {
+ const data = await apiRequest(params.limit, params.offset);
+
+ const response = {
+ total: data.total,
+ count: data.items.length,
+ offset: params.offset,
+ items: data.items,
+ has_more: data.total > params.offset + data.items.length,
+ next_offset: data.total > params.offset + data.items.length
+ ? params.offset + data.items.length
+ : undefined
+ };
+
+ return JSON.stringify(response, null, 2);
+}
+```
+
+## Character Limits and Truncation
+
+Add a CHARACTER_LIMIT constant to prevent overwhelming responses:
+
+```typescript
+// At module level in constants.ts
+export const CHARACTER_LIMIT = 25000; // Maximum response size in characters
+
+async function searchTool(params: SearchInput) {
+ const data = await fetchResults(params); // hypothetical data fetch
+ let response = generateResponse(data); // hypothetical response envelope builder
+ let result = JSON.stringify(response, null, 2);
+
+ // Check character limit and truncate if needed
+ if (result.length > CHARACTER_LIMIT) {
+ const truncatedData = data.slice(0, Math.max(1, Math.floor(data.length / 2)));
+ response = generateResponse(truncatedData);
+ response.truncated = true;
+ response.truncation_message =
+ `Response truncated from ${data.length} to ${truncatedData.length} items. ` +
+ `Use 'offset' parameter or add filters to see more results.`;
+ result = JSON.stringify(response, null, 2);
+ }
+
+ return result;
+}
+```
+
+## Error Handling
+
+Provide clear, actionable error messages:
+
+```typescript
+import axios, { AxiosError } from "axios";
+
+function handleApiError(error: unknown): string {
+ if (error instanceof AxiosError) {
+ if (error.response) {
+ switch (error.response.status) {
+ case 404:
+ return "Error: Resource not found. Please check the ID is correct.";
+ case 403:
+ return "Error: Permission denied. You don't have access to this resource.";
+ case 429:
+ return "Error: Rate limit exceeded. Please wait before making more requests.";
+ default:
+ return `Error: API request failed with status ${error.response.status}`;
+ }
+ } else if (error.code === "ECONNABORTED") {
+ return "Error: Request timed out. Please try again.";
+ }
+ }
+ return `Error: Unexpected error occurred: ${error instanceof Error ? error.message : String(error)}`;
+}
+```
+
+## Shared Utilities
+
+Extract common functionality into reusable functions:
+
+```typescript
+// Shared API request function
+async function makeApiRequest(
+ endpoint: string,
+ method: "GET" | "POST" | "PUT" | "DELETE" = "GET",
+ data?: any,
+ params?: any
+): Promise {
+ try {
+ const response = await axios({
+ method,
+ url: `${API_BASE_URL}/${endpoint}`,
+ data,
+ params,
+ timeout: 30000,
+ headers: {
+ "Content-Type": "application/json",
+ "Accept": "application/json"
+ }
+ });
+ return response.data;
+ } catch (error) {
+ throw error;
+ }
+}
+```
+
+## Async/Await Best Practices
+
+Always use async/await for network requests and I/O operations:
+
+```typescript
+// Good: Async network request
+async function fetchData(resourceId: string): Promise<any> {
+ const response = await axios.get(`${API_URL}/resource/${resourceId}`);
+ return response.data;
+}
+
+// Bad: Promise chains
+function fetchData(resourceId: string): Promise<any> {
+ return axios.get(`${API_URL}/resource/${resourceId}`)
+ .then(response => response.data); // Harder to read and maintain
+}
+```
+
+## TypeScript Best Practices
+
+1. **Use Strict TypeScript**: Enable strict mode in tsconfig.json
+2. **Define Interfaces**: Create clear interface definitions for all data structures
+3. **Avoid `any`**: Use proper types or `unknown` instead of `any`
+4. **Zod for Runtime Validation**: Use Zod schemas to validate external data
+5. **Type Guards**: Create type guard functions for complex type checking
+6. **Error Handling**: Always use try-catch with proper error type checking
+7. **Null Safety**: Use optional chaining (`?.`) and nullish coalescing (`??`)
+
+```typescript
+// Good: Type-safe with Zod and interfaces
+interface UserResponse {
+ id: string;
+ name: string;
+ email: string;
+ team?: string;
+ active: boolean;
+}
+
+const UserSchema = z.object({
+ id: z.string(),
+ name: z.string(),
+ email: z.string().email(),
+ team: z.string().optional(),
+ active: z.boolean()
+});
+
+type User = z.infer<typeof UserSchema>;
+
+async function getUser(id: string): Promise<User> {
+ const data = await apiCall(`/users/${id}`);
+ return UserSchema.parse(data); // Runtime validation
+}
+
+// Bad: Using any
+async function getUser(id: string): Promise<any> {
+ return await apiCall(`/users/${id}`); // No type safety
+}
+```
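
The type-guard and null-safety points above (items 5 and 7) can be illustrated without any external library — the `ApiError` shape here is hypothetical:

```typescript
// Hypothetical error shape returned by an API client
interface ApiError {
  status: number;
  message: string;
}

// Type guard: safely narrows `unknown` to ApiError
function isApiError(value: unknown): value is ApiError {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).status === "number" &&
    typeof (value as Record<string, unknown>).message === "string"
  );
}

function describeError(error: unknown): string {
  if (isApiError(error)) {
    // `error` is typed as ApiError inside this branch
    return `API error ${error.status}: ${error.message}`;
  }
  // Null safety: optional chaining plus nullish coalescing
  return (error as Error)?.message ?? String(error);
}
```

The same pattern applies to library-provided guards such as `axios.isAxiosError` and `error instanceof z.ZodError`.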
+
+## Package Configuration
+
+### package.json
+
+```json
+{
+ "name": "{service}-mcp-server",
+ "version": "1.0.0",
+ "description": "MCP server for {Service} API integration",
+ "type": "module",
+ "main": "dist/index.js",
+ "scripts": {
+ "start": "node dist/index.js",
+ "dev": "tsx watch src/index.ts",
+ "build": "tsc",
+ "clean": "rm -rf dist"
+ },
+ "engines": {
+ "node": ">=18"
+ },
+ "dependencies": {
+ "@modelcontextprotocol/sdk": "^1.6.1",
+ "axios": "^1.7.9",
+ "zod": "^3.23.8"
+ },
+ "devDependencies": {
+ "@types/node": "^22.10.0",
+ "tsx": "^4.19.2",
+ "typescript": "^5.7.2"
+ }
+}
+```
+
+### tsconfig.json
+
+```json
+{
+ "compilerOptions": {
+ "target": "ES2022",
+ "module": "Node16",
+ "moduleResolution": "Node16",
+ "lib": ["ES2022"],
+ "outDir": "./dist",
+ "rootDir": "./src",
+ "strict": true,
+ "esModuleInterop": true,
+ "skipLibCheck": true,
+ "forceConsistentCasingInFileNames": true,
+ "declaration": true,
+ "declarationMap": true,
+ "sourceMap": true,
+ "allowSyntheticDefaultImports": true
+ },
+ "include": ["src/**/*"],
+ "exclude": ["node_modules", "dist"]
+}
+```
+
+## Complete Example
+
+```typescript
+#!/usr/bin/env node
+/**
+ * MCP Server for Example Service.
+ *
+ * This server provides tools to interact with Example API, including user search,
+ * project management, and data export capabilities.
+ */
+
+import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
+import express from "express";
+import { z } from "zod";
+import axios, { AxiosError } from "axios";
+
+// Constants
+const API_BASE_URL = "https://api.example.com/v1";
+const CHARACTER_LIMIT = 25000;
+
+// Enums
+enum ResponseFormat {
+ MARKDOWN = "markdown",
+ JSON = "json"
+}
+
+// Zod schemas
+const UserSearchInputSchema = z.object({
+ query: z.string()
+ .min(2, "Query must be at least 2 characters")
+ .max(200, "Query must not exceed 200 characters")
+ .describe("Search string to match against names/emails"),
+ limit: z.number()
+ .int()
+ .min(1)
+ .max(100)
+ .default(20)
+ .describe("Maximum results to return"),
+ offset: z.number()
+ .int()
+ .min(0)
+ .default(0)
+ .describe("Number of results to skip for pagination"),
+ response_format: z.nativeEnum(ResponseFormat)
+ .default(ResponseFormat.MARKDOWN)
+ .describe("Output format: 'markdown' for human-readable or 'json' for machine-readable")
+}).strict();
+
+type UserSearchInput = z.infer<typeof UserSearchInputSchema>;
+
+// Shared utility functions
+async function makeApiRequest(
+ endpoint: string,
+ method: "GET" | "POST" | "PUT" | "DELETE" = "GET",
+ data?: any,
+ params?: any
+): Promise<any> {
+ const response = await axios({
+ method,
+ url: `${API_BASE_URL}/${endpoint}`,
+ data,
+ params,
+ timeout: 30000,
+ headers: {
+ "Content-Type": "application/json",
+ "Accept": "application/json",
+ // Pass the API key that main() validates at startup
+ "Authorization": `Bearer ${process.env.EXAMPLE_API_KEY}`
+ }
+ });
+ return response.data;
+}
+
+function handleApiError(error: unknown): string {
+ if (error instanceof AxiosError) {
+ if (error.response) {
+ switch (error.response.status) {
+ case 404:
+ return "Error: Resource not found. Please check the ID is correct.";
+ case 403:
+ return "Error: Permission denied. You don't have access to this resource.";
+ case 429:
+ return "Error: Rate limit exceeded. Please wait before making more requests.";
+ default:
+ return `Error: API request failed with status ${error.response.status}`;
+ }
+ } else if (error.code === "ECONNABORTED") {
+ return "Error: Request timed out. Please try again.";
+ }
+ }
+ return `Error: Unexpected error occurred: ${error instanceof Error ? error.message : String(error)}`;
+}
+
+// Create MCP server instance
+const server = new McpServer({
+ name: "example-mcp",
+ version: "1.0.0"
+});
+
+// Register tools
+server.registerTool(
+ "example_search_users",
+ {
+ title: "Search Example Users",
+ description: `[Full description as shown above]`,
+ inputSchema: UserSearchInputSchema,
+ annotations: {
+ readOnlyHint: true,
+ destructiveHint: false,
+ idempotentHint: true,
+ openWorldHint: true
+ }
+ },
+ async (params: UserSearchInput) => {
+ // Implementation as shown above
+ }
+);
+
+// Main function
+// For stdio (local):
+async function runStdio() {
+ if (!process.env.EXAMPLE_API_KEY) {
+ console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
+ process.exit(1);
+ }
+
+ const transport = new StdioServerTransport();
+ await server.connect(transport);
+ console.error("MCP server running via stdio");
+}
+
+// For streamable HTTP (remote):
+async function runHTTP() {
+ if (!process.env.EXAMPLE_API_KEY) {
+ console.error("ERROR: EXAMPLE_API_KEY environment variable is required");
+ process.exit(1);
+ }
+
+ const app = express();
+ app.use(express.json());
+
+ app.post('/mcp', async (req, res) => {
+ const transport = new StreamableHTTPServerTransport({
+ sessionIdGenerator: undefined,
+ enableJsonResponse: true
+ });
+ res.on('close', () => transport.close());
+ await server.connect(transport);
+ await transport.handleRequest(req, res, req.body);
+ });
+
+ const port = parseInt(process.env.PORT || '3000');
+ app.listen(port, () => {
+ console.error(`MCP server running on http://localhost:${port}/mcp`);
+ });
+}
+
+// Choose transport based on environment
+const transport = process.env.TRANSPORT || 'stdio';
+if (transport === 'http') {
+ runHTTP().catch(error => {
+ console.error("Server error:", error);
+ process.exit(1);
+ });
+} else {
+ runStdio().catch(error => {
+ console.error("Server error:", error);
+ process.exit(1);
+ });
+}
+```
+
+---
+
+## Advanced MCP Features
+
+### Resource Registration
+
+Expose data as resources for efficient, URI-based access:
+
+```typescript
+import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
+
+// Register a resource with a URI template; the template's `list`
+// callback enumerates available resources dynamically
+server.registerResource(
+ "document",
+ new ResourceTemplate("file://documents/{name}", {
+ list: async () => {
+ const documents = await getAvailableDocuments();
+ return {
+ resources: documents.map(doc => ({
+ uri: `file://documents/${doc.name}`,
+ name: doc.name,
+ mimeType: "text/plain",
+ description: doc.description
+ }))
+ };
+ }
+ }),
+ {
+ title: "Document Resource",
+ description: "Access documents by name",
+ mimeType: "text/plain"
+ },
+ async (uri, { name }) => {
+ // URI template variables are passed to the read callback
+ const content = await loadDocument(String(name));
+ return {
+ contents: [{
+ uri: uri.href,
+ mimeType: "text/plain",
+ text: content
+ }]
+ };
+ }
+);
+```
+
+**When to use Resources vs Tools:**
+- **Resources**: For data access with simple URI-based parameters
+- **Tools**: For complex operations requiring validation and business logic
+- **Resources**: When data is relatively static or template-based
+- **Tools**: When operations have side effects or complex workflows
+
+### Transport Options
+
+The TypeScript SDK supports two main transport mechanisms:
+
+#### Streamable HTTP (Recommended for Remote Servers)
+
+```typescript
+import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
+import express from "express";
+
+const app = express();
+app.use(express.json());
+
+app.post('/mcp', async (req, res) => {
+ // Create new transport for each request (stateless, prevents request ID collisions)
+ const transport = new StreamableHTTPServerTransport({
+ sessionIdGenerator: undefined,
+ enableJsonResponse: true
+ });
+
+ res.on('close', () => transport.close());
+
+ await server.connect(transport);
+ await transport.handleRequest(req, res, req.body);
+});
+
+app.listen(3000);
+```
+
+#### stdio (For Local Integrations)
+
+```typescript
+import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
+
+const transport = new StdioServerTransport();
+await server.connect(transport);
+```
+
+**Transport selection:**
+- **Streamable HTTP**: Web services, remote access, multiple clients
+- **stdio**: Command-line tools, local development, subprocess integration
+
+### Notification Support
+
+Notify clients when server state changes:
+
+```typescript
+// Notify clients when the tool list changes
+server.sendToolListChanged();
+
+// Notify clients when the resource list changes
+server.sendResourceListChanged();
+```
+
+Use notifications sparingly, and only when server capabilities genuinely change.
+
+---
+
+## Code Best Practices
+
+### Code Composability and Reusability
+
+Your implementation MUST prioritize composability and code reuse:
+
+1. **Extract Common Functionality**:
+ - Create reusable helper functions for operations used across multiple tools
+ - Build shared API clients for HTTP requests instead of duplicating code
+ - Centralize error handling logic in utility functions
+ - Extract business logic into dedicated functions that can be composed
+ - Extract shared markdown or JSON field selection & formatting functionality
+
+2. **Avoid Duplication**:
+ - NEVER copy-paste similar code between tools
+ - If you find yourself writing similar logic twice, extract it into a function
+ - Common operations like pagination, filtering, field selection, and formatting should be shared
+ - Authentication/authorization logic should be centralized
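
For instance, the pagination envelope used by the list-style tools earlier in this guide can live in one shared, generic helper instead of being repeated per tool (a sketch; names are illustrative):

```typescript
// Shared pagination envelope returned by every list-style tool
interface Page<T> {
  total: number;
  count: number;
  offset: number;
  items: T[];
  has_more: boolean;
  next_offset?: number;
}

// One generic helper builds the envelope for any item type
function paginate<T>(items: T[], total: number, offset: number): Page<T> {
  const hasMore = total > offset + items.length;
  return {
    total,
    count: items.length,
    offset,
    items,
    has_more: hasMore,
    // Only include next_offset when another page exists
    ...(hasMore ? { next_offset: offset + items.length } : {})
  };
}
```

Each tool then supplies only its own items and formatting, and every list response stays structurally consistent.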
+
+## Building and Running
+
+Always build your TypeScript code before running:
+
+```bash
+# Build the project
+npm run build
+
+# Run the server
+npm start
+
+# Development with auto-reload
+npm run dev
+```
+
+Always ensure `npm run build` completes successfully before considering the implementation complete.
+
+## Quality Checklist
+
+Before finalizing your Node/TypeScript MCP server implementation, ensure:
+
+### Strategic Design
+- [ ] Tools enable complete workflows, not just API endpoint wrappers
+- [ ] Tool names reflect natural task subdivisions
+- [ ] Response formats optimize for agent context efficiency
+- [ ] Human-readable identifiers used where appropriate
+- [ ] Error messages guide agents toward correct usage
+
+### Implementation Quality
+- [ ] FOCUSED IMPLEMENTATION: Most important and valuable tools implemented
+- [ ] All tools registered using `registerTool` with complete configuration
+- [ ] All tools include `title`, `description`, `inputSchema`, and `annotations`
+- [ ] Annotations correctly set (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
+- [ ] All tools use Zod schemas for runtime input validation with `.strict()` enforcement
+- [ ] All Zod schemas have proper constraints and descriptive error messages
+- [ ] All tools have comprehensive descriptions with explicit input/output types
+- [ ] Descriptions include return value examples and complete schema documentation
+- [ ] Error messages are clear, actionable, and educational
+
+### TypeScript Quality
+- [ ] TypeScript interfaces are defined for all data structures
+- [ ] Strict TypeScript is enabled in tsconfig.json
+- [ ] No use of `any` type - use `unknown` or proper types instead
+- [ ] All async functions have explicit Promise return types
+- [ ] Error handling uses proper type guards (e.g., `axios.isAxiosError`, `z.ZodError`)
+
+### Advanced Features (where applicable)
+- [ ] Resources registered for appropriate data endpoints
+- [ ] Appropriate transport configured (stdio or streamable HTTP)
+- [ ] Notifications implemented for dynamic server capabilities
+- [ ] Type-safe with SDK interfaces
+
+### Project Configuration
+- [ ] Package.json includes all necessary dependencies
+- [ ] Build script produces working JavaScript in dist/ directory
+- [ ] Main entry point is properly configured as dist/index.js
+- [ ] Server name follows format: `{service}-mcp-server`
+- [ ] tsconfig.json properly configured with strict mode
+
+### Code Quality
+- [ ] Pagination is properly implemented where applicable
+- [ ] Large responses check CHARACTER_LIMIT constant and truncate with clear messages
+- [ ] Filtering options are provided for potentially large result sets
+- [ ] All network operations handle timeouts and connection errors gracefully
+- [ ] Common functionality is extracted into reusable functions
+- [ ] Return types are consistent across similar operations
+
+### Testing and Build
+- [ ] `npm run build` completes successfully without errors
+- [ ] dist/index.js created and executable
+- [ ] Server runs: `node dist/index.js --help`
+- [ ] All imports resolve correctly
+- [ ] Sample tool calls work as expected
\ No newline at end of file
diff --git a/skills/mcp-builder/reference/python_mcp_server.md b/skills/mcp-builder/reference/python_mcp_server.md
new file mode 100644
index 0000000..cf7ec99
--- /dev/null
+++ b/skills/mcp-builder/reference/python_mcp_server.md
@@ -0,0 +1,719 @@
+# Python MCP Server Implementation Guide
+
+## Overview
+
+This document provides Python-specific best practices and examples for implementing MCP servers using the MCP Python SDK. It covers server setup, tool registration patterns, input validation with Pydantic, error handling, and complete working examples.
+
+---
+
+## Quick Reference
+
+### Key Imports
+```python
+from mcp.server.fastmcp import FastMCP
+from pydantic import BaseModel, Field, field_validator, ConfigDict
+from typing import Optional, List, Dict, Any
+from enum import Enum
+import httpx
+```
+
+### Server Initialization
+```python
+mcp = FastMCP("service_mcp")
+```
+
+### Tool Registration Pattern
+```python
+@mcp.tool(name="tool_name", annotations={...})
+async def tool_function(params: InputModel) -> str:
+ # Implementation
+ pass
+```
+
+---
+
+## MCP Python SDK and FastMCP
+
+The official MCP Python SDK provides FastMCP, a high-level framework for building MCP servers. It provides:
+- Automatic description and inputSchema generation from function signatures and docstrings
+- Pydantic model integration for input validation
+- Decorator-based tool registration with `@mcp.tool`
+
+**For complete SDK documentation, use WebFetch to load:**
+`https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
+
+## Server Naming Convention
+
+Python MCP servers must follow this naming pattern:
+- **Format**: `{service}_mcp` (lowercase with underscores)
+- **Examples**: `github_mcp`, `jira_mcp`, `stripe_mcp`
+
+The name should be:
+- General (not tied to specific features)
+- Descriptive of the service/API being integrated
+- Easy to infer from the task description
+- Without version numbers or dates
+
+## Tool Implementation
+
+### Tool Naming
+
+Use snake_case for tool names (e.g., "search_users", "create_project", "get_channel_info") with clear, action-oriented names.
+
+**Avoid Naming Conflicts**: Include the service context to prevent overlaps:
+- Use "slack_send_message" instead of just "send_message"
+- Use "github_create_issue" instead of just "create_issue"
+- Use "asana_list_tasks" instead of just "list_tasks"
+
+### Tool Structure with FastMCP
+
+Tools are defined using the `@mcp.tool` decorator with Pydantic models for input validation:
+
+```python
+from pydantic import BaseModel, Field, ConfigDict
+from mcp.server.fastmcp import FastMCP
+
+# Initialize the MCP server
+mcp = FastMCP("example_mcp")
+
+# Define Pydantic model for input validation
+class ServiceToolInput(BaseModel):
+ '''Input model for service tool operation.'''
+ model_config = ConfigDict(
+ str_strip_whitespace=True, # Auto-strip whitespace from strings
+ validate_assignment=True, # Validate on assignment
+ extra='forbid' # Forbid extra fields
+ )
+
+ param1: str = Field(..., description="First parameter description (e.g., 'user123', 'project-abc')", min_length=1, max_length=100)
+ param2: Optional[int] = Field(default=None, description="Optional integer parameter with constraints", ge=0, le=1000)
+ tags: Optional[List[str]] = Field(default_factory=list, description="List of tags to apply", max_length=10)
+
+@mcp.tool(
+ name="service_tool_name",
+ annotations={
+ "title": "Human-Readable Tool Title",
+ "readOnlyHint": True, # Tool does not modify environment
+ "destructiveHint": False, # Tool does not perform destructive operations
+ "idempotentHint": True, # Repeated calls have no additional effect
+ "openWorldHint": False # Tool does not interact with external entities
+ }
+)
+async def service_tool_name(params: ServiceToolInput) -> str:
+ '''Tool description automatically becomes the 'description' field.
+
+ This tool performs a specific operation on the service. It validates all inputs
+ using the ServiceToolInput Pydantic model before processing.
+
+ Args:
+ params (ServiceToolInput): Validated input parameters containing:
+ - param1 (str): First parameter description
+ - param2 (Optional[int]): Optional parameter with default
+ - tags (Optional[List[str]]): List of tags
+
+ Returns:
+ str: JSON-formatted response containing operation results
+ '''
+ # Implementation here
+ pass
+```
+
+## Pydantic v2 Key Features
+
+- Use `model_config` instead of nested `Config` class
+- Use `field_validator` instead of deprecated `validator`
+- Use `model_dump()` instead of deprecated `dict()`
+- Validators require `@classmethod` decorator
+- Type hints are required for validator methods
+
+```python
+from pydantic import BaseModel, Field, field_validator, ConfigDict
+
+class CreateUserInput(BaseModel):
+ model_config = ConfigDict(
+ str_strip_whitespace=True,
+ validate_assignment=True
+ )
+
+ name: str = Field(..., description="User's full name", min_length=1, max_length=100)
+ email: str = Field(..., description="User's email address", pattern=r'^[\w\.-]+@[\w\.-]+\.\w+$')
+ age: int = Field(..., description="User's age", ge=0, le=150)
+
+ @field_validator('email')
+ @classmethod
+ def validate_email(cls, v: str) -> str:
+ if not v.strip():
+ raise ValueError("Email cannot be empty")
+ return v.lower()
+```
+
+## Response Format Options
+
+Support multiple output formats for flexibility:
+
+```python
+from enum import Enum
+
+class ResponseFormat(str, Enum):
+ '''Output format for tool responses.'''
+ MARKDOWN = "markdown"
+ JSON = "json"
+
+class UserSearchInput(BaseModel):
+ query: str = Field(..., description="Search query")
+ response_format: ResponseFormat = Field(
+ default=ResponseFormat.MARKDOWN,
+ description="Output format: 'markdown' for human-readable or 'json' for machine-readable"
+ )
+```
+
+**Markdown format**:
+- Use headers, lists, and formatting for clarity
+- Convert timestamps to human-readable format (e.g., "2024-01-15 10:30:00 UTC" instead of epoch)
+- Show display names with IDs in parentheses (e.g., "@john.doe (U123456)")
+- Omit verbose metadata (e.g., show only one profile image URL, not all sizes)
+- Group related information logically
+
+**JSON format**:
+- Return complete, structured data suitable for programmatic processing
+- Include all available fields and metadata
+- Use consistent field names and types
+
+## Pagination Implementation
+
+For tools that list resources:
+
+```python
+class ListInput(BaseModel):
+ limit: Optional[int] = Field(default=20, description="Maximum results to return", ge=1, le=100)
+ offset: Optional[int] = Field(default=0, description="Number of results to skip for pagination", ge=0)
+
+async def list_items(params: ListInput) -> str:
+ # Make API request with pagination
+ data = await api_request(limit=params.limit, offset=params.offset)
+
+ # Return pagination info
+ response = {
+ "total": data["total"],
+ "count": len(data["items"]),
+ "offset": params.offset,
+ "items": data["items"],
+ "has_more": data["total"] > params.offset + len(data["items"]),
+ "next_offset": params.offset + len(data["items"]) if data["total"] > params.offset + len(data["items"]) else None
+ }
+ return json.dumps(response, indent=2)
+```
+
+## Error Handling
+
+Provide clear, actionable error messages:
+
+```python
+def _handle_api_error(e: Exception) -> str:
+ '''Consistent error formatting across all tools.'''
+ if isinstance(e, httpx.HTTPStatusError):
+ if e.response.status_code == 404:
+ return "Error: Resource not found. Please check the ID is correct."
+ elif e.response.status_code == 403:
+ return "Error: Permission denied. You don't have access to this resource."
+ elif e.response.status_code == 429:
+ return "Error: Rate limit exceeded. Please wait before making more requests."
+ return f"Error: API request failed with status {e.response.status_code}"
+ elif isinstance(e, httpx.TimeoutException):
+ return "Error: Request timed out. Please try again."
+ return f"Error: Unexpected error occurred: {type(e).__name__}"
+```
+
+## Shared Utilities
+
+Extract common functionality into reusable functions:
+
+```python
+# Shared API request function
+async def _make_api_request(endpoint: str, method: str = "GET", **kwargs) -> dict:
+ '''Reusable function for all API calls.'''
+ async with httpx.AsyncClient() as client:
+ response = await client.request(
+ method,
+ f"{API_BASE_URL}/{endpoint}",
+ timeout=30.0,
+ **kwargs
+ )
+ response.raise_for_status()
+ return response.json()
+```
+
+## Async/Await Best Practices
+
+Always use async/await for network requests and I/O operations:
+
+```python
+# Good: Async network request
+async def fetch_data(resource_id: str) -> dict:
+ async with httpx.AsyncClient() as client:
+ response = await client.get(f"{API_URL}/resource/{resource_id}")
+ response.raise_for_status()
+ return response.json()
+
+# Bad: Synchronous request
+def fetch_data(resource_id: str) -> dict:
+ response = requests.get(f"{API_URL}/resource/{resource_id}") # Blocks
+ return response.json()
+```
+
+## Type Hints
+
+Use type hints throughout:
+
+```python
+from typing import Optional, List, Dict, Any
+
+async def get_user(user_id: str) -> Dict[str, Any]:
+ data = await fetch_user(user_id)
+ return {"id": data["id"], "name": data["name"]}
+```
+
+## Tool Docstrings
+
+Every tool must have comprehensive docstrings with explicit type information:
+
+```python
+async def search_users(params: UserSearchInput) -> str:
+ '''
+ Search for users in the Example system by name, email, or team.
+
+ This tool searches across all user profiles in the Example platform,
+ supporting partial matches and various search filters. It does NOT
+ create or modify users, only searches existing ones.
+
+ Args:
+ params (UserSearchInput): Validated input parameters containing:
+ - query (str): Search string to match against names/emails (e.g., "john", "@example.com", "team:marketing")
+ - limit (Optional[int]): Maximum results to return, between 1-100 (default: 20)
+ - offset (Optional[int]): Number of results to skip for pagination (default: 0)
+
+ Returns:
+ str: JSON-formatted string containing search results with the following schema:
+
+ Success response:
+ {
+ "total": int, # Total number of matches found
+ "count": int, # Number of results in this response
+ "offset": int, # Current pagination offset
+ "users": [
+ {
+ "id": str, # User ID (e.g., "U123456789")
+ "name": str, # Full name (e.g., "John Doe")
+ "email": str, # Email address (e.g., "john@example.com")
+ "team": str # Team name (e.g., "Marketing") - optional
+ }
+ ]
+ }
+
+ Error response:
+ "Error: <error message>" or "No users found matching '<query>'"
+
+ Examples:
+ - Use when: "Find all marketing team members" -> params with query="team:marketing"
+ - Use when: "Search for John's account" -> params with query="john"
+ - Don't use when: You need to create a user (use example_create_user instead)
+ - Don't use when: You have a user ID and need full details (use example_get_user instead)
+
+ Error Handling:
+ - Input validation errors are handled by Pydantic model
+ - Returns "Error: Rate limit exceeded" if too many requests (429 status)
+ - Returns "Error: Invalid API authentication" if API key is invalid (401 status)
+ - Returns formatted list of results or "No users found matching 'query'"
+ '''
+```
+
+## Complete Example
+
+See below for a complete Python MCP server example:
+
+```python
+#!/usr/bin/env python3
+'''
+MCP Server for Example Service.
+
+This server provides tools to interact with Example API, including user search,
+project management, and data export capabilities.
+'''
+
+from typing import Optional, List, Dict, Any
+from enum import Enum
+import httpx
+from pydantic import BaseModel, Field, field_validator, ConfigDict
+from mcp.server.fastmcp import FastMCP
+
+# Initialize the MCP server
+mcp = FastMCP("example_mcp")
+
+# Constants
+API_BASE_URL = "https://api.example.com/v1"
+
+# Enums
+class ResponseFormat(str, Enum):
+ '''Output format for tool responses.'''
+ MARKDOWN = "markdown"
+ JSON = "json"
+
+# Pydantic Models for Input Validation
+class UserSearchInput(BaseModel):
+ '''Input model for user search operations.'''
+ model_config = ConfigDict(
+ str_strip_whitespace=True,
+ validate_assignment=True
+ )
+
+ query: str = Field(..., description="Search string to match against names/emails", min_length=2, max_length=200)
+ limit: Optional[int] = Field(default=20, description="Maximum results to return", ge=1, le=100)
+ offset: Optional[int] = Field(default=0, description="Number of results to skip for pagination", ge=0)
+ response_format: ResponseFormat = Field(default=ResponseFormat.MARKDOWN, description="Output format")
+
+ @field_validator('query')
+ @classmethod
+ def validate_query(cls, v: str) -> str:
+ if not v.strip():
+ raise ValueError("Query cannot be empty or whitespace only")
+ return v.strip()
+
+# Shared utility functions
+async def _make_api_request(endpoint: str, method: str = "GET", **kwargs) -> dict:
+ '''Reusable function for all API calls.'''
+ async with httpx.AsyncClient() as client:
+ response = await client.request(
+ method,
+ f"{API_BASE_URL}/{endpoint}",
+ timeout=30.0,
+ **kwargs
+ )
+ response.raise_for_status()
+ return response.json()
+
+def _handle_api_error(e: Exception) -> str:
+ '''Consistent error formatting across all tools.'''
+ if isinstance(e, httpx.HTTPStatusError):
+ if e.response.status_code == 404:
+ return "Error: Resource not found. Please check the ID is correct."
+ elif e.response.status_code == 403:
+ return "Error: Permission denied. You don't have access to this resource."
+ elif e.response.status_code == 429:
+ return "Error: Rate limit exceeded. Please wait before making more requests."
+ return f"Error: API request failed with status {e.response.status_code}"
+ elif isinstance(e, httpx.TimeoutException):
+ return "Error: Request timed out. Please try again."
+ return f"Error: Unexpected error occurred: {type(e).__name__}"
+
+# Tool definitions
+@mcp.tool(
+ name="example_search_users",
+ annotations={
+ "title": "Search Example Users",
+ "readOnlyHint": True,
+ "destructiveHint": False,
+ "idempotentHint": True,
+ "openWorldHint": True
+ }
+)
+async def example_search_users(params: UserSearchInput) -> str:
+ '''Search for users in the Example system by name, email, or team.
+
+ [Full docstring as shown above]
+ '''
+ try:
+ # Make API request using validated parameters
+ data = await _make_api_request(
+ "users/search",
+ params={
+ "q": params.query,
+ "limit": params.limit,
+ "offset": params.offset
+ }
+ )
+
+ users = data.get("users", [])
+ total = data.get("total", 0)
+
+ if not users:
+ return f"No users found matching '{params.query}'"
+
+ # Format response based on requested format
+ if params.response_format == ResponseFormat.MARKDOWN:
+ lines = [f"# User Search Results: '{params.query}'", ""]
+ lines.append(f"Found {total} users (showing {len(users)})")
+ lines.append("")
+
+ for user in users:
+ lines.append(f"## {user['name']} ({user['id']})")
+ lines.append(f"- **Email**: {user['email']}")
+ if user.get('team'):
+ lines.append(f"- **Team**: {user['team']}")
+ lines.append("")
+
+ return "\n".join(lines)
+
+ else:
+ # Machine-readable JSON format
+ import json
+ response = {
+ "total": total,
+ "count": len(users),
+ "offset": params.offset,
+ "users": users
+ }
+ return json.dumps(response, indent=2)
+
+ except Exception as e:
+ return _handle_api_error(e)
+
+if __name__ == "__main__":
+ mcp.run()
+```
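The markdown branch in the example above is a natural candidate for extraction into a shared helper, since other tools in the same server will likely render user lists identically. A minimal sketch (`format_users_markdown` is a hypothetical name, not part of the example server):

```python
def format_users_markdown(query: str, users: list[dict], total: int) -> str:
    """Render a user list as markdown; reusable by any user-returning tool."""
    lines = [f"# User Search Results: '{query}'", ""]
    lines.append(f"Found {total} users (showing {len(users)})")
    lines.append("")
    for user in users:
        lines.append(f"## {user['name']} ({user['id']})")
        lines.append(f"- **Email**: {user['email']}")
        if user.get("team"):
            lines.append(f"- **Team**: {user['team']}")
        lines.append("")
    return "\n".join(lines)

report = format_users_markdown(
    "john", [{"id": "U1", "name": "John Doe", "email": "john@example.com"}], 1
)
```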
+
+---
+
+## Advanced FastMCP Features
+
+### Context Parameter Injection
+
+FastMCP can automatically inject a `Context` parameter into tools for advanced capabilities like logging, progress reporting, resource reading, and user interaction:
+
+```python
+from mcp.server.fastmcp import FastMCP, Context
+
+mcp = FastMCP("example_mcp")
+
+@mcp.tool()
+async def advanced_search(query: str, ctx: Context) -> str:
+ '''Advanced tool with context access for logging and progress.'''
+
+ # Report progress for long operations (progress, total, optional message)
+ await ctx.report_progress(0.25, 1.0, "Starting search...")
+
+ # Log information for debugging
+ await ctx.info(f"Processing query: {query}")
+
+ # Perform search
+ results = await search_api(query)
+ await ctx.report_progress(0.75, 1.0, "Formatting results...")
+
+ # Access server configuration
+ server_name = ctx.fastmcp.name
+
+ return format_results(results)
+
+@mcp.tool()
+async def interactive_tool(resource_id: str, ctx: Context) -> str:
+ '''Tool that can request additional input from users.'''
+
+ # Request structured input when needed; elicit() takes a message plus a
+ # pydantic model (here a hypothetical WorkspaceInput) describing the fields
+ result = await ctx.elicit(
+ message="Which workspace should this resource be filed under?",
+ schema=WorkspaceInput,
+ )
+
+ # Use the provided value
+ return await api_call(resource_id, result.data.workspace)
+```
+
+**Context capabilities:**
+- `ctx.report_progress(progress, total, message)` - Report progress for long operations
+- `ctx.info(message)` / `ctx.error()` / `ctx.debug()` - Logging
+- `ctx.elicit(message, schema)` - Request structured input from users
+- `ctx.fastmcp.name` - Access server configuration
+- `ctx.read_resource(uri)` - Read MCP resources
+
+### Resource Registration
+
+Expose data as resources for efficient, template-based access:
+
+```python
+@mcp.resource("file://documents/{name}")
+async def get_document(name: str) -> str:
+ '''Expose documents as MCP resources.
+
+ Resources are useful for static or semi-static data that doesn't
+ require complex parameters. They use URI templates for flexible access.
+ '''
+ document_path = f"./docs/{name}"
+ with open(document_path, "r") as f:
+ return f.read()
+
+@mcp.resource("config://settings/{key}")
+async def get_setting(key: str, ctx: Context) -> str:
+ '''Expose configuration as resources with context.'''
+ settings = await load_settings()
+ return json.dumps(settings.get(key, {}))
+```
+
+**When to use Resources vs Tools:**
+- **Resources**: For data access with simple parameters (URI templates)
+- **Tools**: For complex operations with validation and business logic
+
+### Structured Output Types
+
+FastMCP supports multiple return types beyond strings:
+
+```python
+from typing import TypedDict, Dict, Any
+from datetime import datetime
+from pydantic import BaseModel
+
+# TypedDict for structured returns
+class UserData(TypedDict):
+ id: str
+ name: str
+ email: str
+
+@mcp.tool()
+async def get_user_typed(user_id: str) -> UserData:
+ '''Returns structured data - FastMCP handles serialization.'''
+ return {"id": user_id, "name": "John Doe", "email": "john@example.com"}
+
+# Pydantic models for complex validation
+class DetailedUser(BaseModel):
+ id: str
+ name: str
+ email: str
+ created_at: datetime
+ metadata: Dict[str, Any]
+
+@mcp.tool()
+async def get_user_detailed(user_id: str) -> DetailedUser:
+ '''Returns Pydantic model - automatically generates schema.'''
+ user = await fetch_user(user_id)
+ return DetailedUser(**user)
+```
+
+### Lifespan Management
+
+Initialize resources that persist across requests:
+
+```python
+from contextlib import asynccontextmanager
+
+@asynccontextmanager
+async def app_lifespan(server: FastMCP):
+ '''Manage resources that live for the server's lifetime.'''
+ # Initialize connections, load config, etc.
+ db = await connect_to_database()
+ config = load_configuration()
+
+ # Make available to all tools
+ yield {"db": db, "config": config}
+
+ # Cleanup on shutdown
+ await db.close()
+
+mcp = FastMCP("example_mcp", lifespan=app_lifespan)
+
+@mcp.tool()
+async def query_data(query: str, ctx: Context) -> str:
+ '''Access lifespan resources through context.'''
+ db = ctx.request_context.lifespan_context["db"]
+ results = await db.query(query)
+ return format_results(results)
+```
+
+### Transport Options
+
+FastMCP supports two main transport mechanisms:
+
+```python
+# stdio transport (for local tools) - default
+if __name__ == "__main__":
+ mcp.run()
+
+# Streamable HTTP transport (for remote servers); set the port on the
+# constructor, e.g. FastMCP("example_mcp", port=8000)
+if __name__ == "__main__":
+ mcp.run(transport="streamable-http")
+```
+
+**Transport selection:**
+- **stdio**: Command-line tools, local integrations, subprocess execution
+- **Streamable HTTP**: Web services, remote access, multiple clients
+
+---
+
+## Code Best Practices
+
+### Code Composability and Reusability
+
+Your implementation MUST prioritize composability and code reuse:
+
+1. **Extract Common Functionality**:
+ - Create reusable helper functions for operations used across multiple tools
+ - Build shared API clients for HTTP requests instead of duplicating code
+ - Centralize error handling logic in utility functions
+ - Extract business logic into dedicated functions that can be composed
+ - Extract shared markdown or JSON field selection & formatting functionality
+
+2. **Avoid Duplication**:
+ - NEVER copy-paste similar code between tools
+ - If you find yourself writing similar logic twice, extract it into a function
+ - Common operations like pagination, filtering, field selection, and formatting should be shared
+ - Authentication/authorization logic should be centralized
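As a concrete illustration of the pagination point, a single helper can own the limit/offset arithmetic so every list-returning tool reports consistent metadata (a sketch; the names are illustrative):

```python
from typing import Any

def paginate(items: list[Any], limit: int = 20, offset: int = 0) -> dict:
    """Slice a result list and attach the pagination metadata tools return."""
    page = items[offset:offset + limit]
    return {
        "total": len(items),   # total matches before pagination
        "count": len(page),    # results in this page
        "offset": offset,      # echo back for the next request
        "items": page,
    }

page = paginate(list(range(50)), limit=10, offset=45)
```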
+
+### Python-Specific Best Practices
+
+1. **Use Type Hints**: Always include type annotations for function parameters and return values
+2. **Pydantic Models**: Define clear Pydantic models for all input validation
+3. **Avoid Manual Validation**: Let Pydantic handle input validation with constraints
+4. **Proper Imports**: Group imports (standard library, third-party, local)
+5. **Error Handling**: Use specific exception types (httpx.HTTPStatusError, not generic Exception)
+6. **Async Context Managers**: Use `async with` for resources that need cleanup
+7. **Constants**: Define module-level constants in UPPER_CASE
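Point 5 deserves a quick example: catching specific exception types lets each failure mode produce its own actionable message, where a bare `except Exception` would flatten them all. A stdlib-only sketch (the error strings are illustrative):

```python
import json

def parse_user_record(raw: str) -> str:
    """Return the user's name, with a distinct message per failure mode."""
    try:
        record = json.loads(raw)
        return record["name"]
    except json.JSONDecodeError:
        # Malformed payload - not the same problem as a missing field
        return "Error: payload is not valid JSON"
    except KeyError as e:
        # Valid JSON, but a required field is absent
        return f"Error: missing required field {e.args[0]!r}"
```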
+
+## Quality Checklist
+
+Before finalizing your Python MCP server implementation, ensure:
+
+### Strategic Design
+- [ ] Tools enable complete workflows, not just API endpoint wrappers
+- [ ] Tool names reflect natural task subdivisions
+- [ ] Response formats optimize for agent context efficiency
+- [ ] Human-readable identifiers used where appropriate
+- [ ] Error messages guide agents toward correct usage
+
+### Implementation Quality
+- [ ] FOCUSED IMPLEMENTATION: Most important and valuable tools implemented
+- [ ] All tools have descriptive names and documentation
+- [ ] Return types are consistent across similar operations
+- [ ] Error handling is implemented for all external calls
+- [ ] Server name follows format: `{service}_mcp`
+- [ ] All network operations use async/await
+- [ ] Common functionality is extracted into reusable functions
+- [ ] Error messages are clear, actionable, and educational
+- [ ] Outputs are properly validated and formatted
+
+### Tool Configuration
+- [ ] All tools implement 'name' and 'annotations' in the decorator
+- [ ] Annotations correctly set (readOnlyHint, destructiveHint, idempotentHint, openWorldHint)
+- [ ] All tools use Pydantic BaseModel for input validation with Field() definitions
+- [ ] All Pydantic Fields have explicit types and descriptions with constraints
+- [ ] All tools have comprehensive docstrings with explicit input/output types
+- [ ] Docstrings include complete schema structure for dict/JSON returns
+- [ ] Pydantic models handle input validation (no manual validation needed)
+
+### Advanced Features (where applicable)
+- [ ] Context injection used for logging, progress, or elicitation
+- [ ] Resources registered for appropriate data endpoints
+- [ ] Lifespan management implemented for persistent connections
+- [ ] Structured output types used (TypedDict, Pydantic models)
+- [ ] Appropriate transport configured (stdio or streamable HTTP)
+
+### Code Quality
+- [ ] File includes proper imports including Pydantic imports
+- [ ] Pagination is properly implemented where applicable
+- [ ] Filtering options are provided for potentially large result sets
+- [ ] All async functions are properly defined with `async def`
+- [ ] HTTP client usage follows async patterns with proper context managers
+- [ ] Type hints are used throughout the code
+- [ ] Constants are defined at module level in UPPER_CASE
+
+### Testing
+- [ ] Server starts without errors: `python your_server.py` (stdio servers then wait on stdin)
+- [ ] All imports resolve correctly
+- [ ] Sample tool calls work as expected
+- [ ] Error scenarios handled gracefully
\ No newline at end of file
diff --git a/skills/mcp-builder/scripts/connections.py b/skills/mcp-builder/scripts/connections.py
new file mode 100644
index 0000000..ffcd0da
--- /dev/null
+++ b/skills/mcp-builder/scripts/connections.py
@@ -0,0 +1,151 @@
+"""Lightweight connection handling for MCP servers."""
+
+from abc import ABC, abstractmethod
+from contextlib import AsyncExitStack
+from typing import Any
+
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.sse import sse_client
+from mcp.client.stdio import stdio_client
+from mcp.client.streamable_http import streamablehttp_client
+
+
+class MCPConnection(ABC):
+ """Base class for MCP server connections."""
+
+ def __init__(self):
+ self.session = None
+ self._stack = None
+
+ @abstractmethod
+ def _create_context(self):
+ """Create the connection context based on connection type."""
+
+ async def __aenter__(self):
+ """Initialize MCP server connection."""
+ self._stack = AsyncExitStack()
+ await self._stack.__aenter__()
+
+ try:
+ ctx = self._create_context()
+ result = await self._stack.enter_async_context(ctx)
+
+ if len(result) == 2:
+ read, write = result
+ elif len(result) == 3:
+ read, write, _ = result
+ else:
+ raise ValueError(f"Unexpected context result: {result}")
+
+ session_ctx = ClientSession(read, write)
+ self.session = await self._stack.enter_async_context(session_ctx)
+ await self.session.initialize()
+ return self
+ except BaseException:
+ await self._stack.__aexit__(None, None, None)
+ raise
+
+ async def __aexit__(self, exc_type, exc_val, exc_tb):
+ """Clean up MCP server connection resources."""
+ if self._stack:
+ await self._stack.__aexit__(exc_type, exc_val, exc_tb)
+ self.session = None
+ self._stack = None
+
+ async def list_tools(self) -> list[dict[str, Any]]:
+ """Retrieve available tools from the MCP server."""
+ response = await self.session.list_tools()
+ return [
+ {
+ "name": tool.name,
+ "description": tool.description,
+ "input_schema": tool.inputSchema,
+ }
+ for tool in response.tools
+ ]
+
+ async def call_tool(self, tool_name: str, arguments: dict[str, Any]) -> Any:
+ """Call a tool on the MCP server with provided arguments."""
+ result = await self.session.call_tool(tool_name, arguments=arguments)
+ return result.content
+
+
+class MCPConnectionStdio(MCPConnection):
+ """MCP connection using standard input/output."""
+
+ def __init__(self, command: str, args: list[str] = None, env: dict[str, str] = None):
+ super().__init__()
+ self.command = command
+ self.args = args or []
+ self.env = env
+
+ def _create_context(self):
+ return stdio_client(
+ StdioServerParameters(command=self.command, args=self.args, env=self.env)
+ )
+
+
+class MCPConnectionSSE(MCPConnection):
+ """MCP connection using Server-Sent Events."""
+
+ def __init__(self, url: str, headers: dict[str, str] = None):
+ super().__init__()
+ self.url = url
+ self.headers = headers or {}
+
+ def _create_context(self):
+ return sse_client(url=self.url, headers=self.headers)
+
+
+class MCPConnectionHTTP(MCPConnection):
+ """MCP connection using Streamable HTTP."""
+
+ def __init__(self, url: str, headers: dict[str, str] = None):
+ super().__init__()
+ self.url = url
+ self.headers = headers or {}
+
+ def _create_context(self):
+ return streamablehttp_client(url=self.url, headers=self.headers)
+
+
+def create_connection(
+ transport: str,
+ command: str = None,
+ args: list[str] = None,
+ env: dict[str, str] = None,
+ url: str = None,
+ headers: dict[str, str] = None,
+) -> MCPConnection:
+ """Factory function to create the appropriate MCP connection.
+
+ Args:
+ transport: Connection type ("stdio", "sse", or "http")
+ command: Command to run (stdio only)
+ args: Command arguments (stdio only)
+ env: Environment variables (stdio only)
+ url: Server URL (sse and http only)
+ headers: HTTP headers (sse and http only)
+
+ Returns:
+ MCPConnection instance
+ """
+ transport = transport.lower()
+
+ if transport == "stdio":
+ if not command:
+ raise ValueError("Command is required for stdio transport")
+ return MCPConnectionStdio(command=command, args=args, env=env)
+
+ elif transport == "sse":
+ if not url:
+ raise ValueError("URL is required for sse transport")
+ return MCPConnectionSSE(url=url, headers=headers)
+
+ elif transport in ["http", "streamable_http", "streamable-http"]:
+ if not url:
+ raise ValueError("URL is required for http transport")
+ return MCPConnectionHTTP(url=url, headers=headers)
+
+ else:
+ raise ValueError(f"Unsupported transport type: {transport}. Use 'stdio', 'sse', or 'http'")
diff --git a/skills/mcp-builder/scripts/evaluation.py b/skills/mcp-builder/scripts/evaluation.py
new file mode 100644
index 0000000..4177856
--- /dev/null
+++ b/skills/mcp-builder/scripts/evaluation.py
@@ -0,0 +1,373 @@
+"""MCP Server Evaluation Harness
+
+This script evaluates MCP servers by running test questions against them using Claude.
+"""
+
+import argparse
+import asyncio
+import json
+import re
+import sys
+import time
+import traceback
+import xml.etree.ElementTree as ET
+from pathlib import Path
+from typing import Any
+
+from anthropic import Anthropic
+
+from connections import create_connection
+
+EVALUATION_PROMPT = """You are an AI assistant with access to tools.
+
+When given a task, you MUST:
+1. Use the available tools to complete the task
+2. Provide a summary of each step in your approach, wrapped in <summary></summary> tags
+3. Provide feedback on the tools provided, wrapped in <feedback></feedback> tags
+4. Provide your final response, wrapped in <response></response> tags
+
+Summary Requirements:
+- In your <summary> tags, you must explain:
+ - The steps you took to complete the task
+ - Which tools you used, in what order, and why
+ - The inputs you provided to each tool
+ - The outputs you received from each tool
+ - A summary for how you arrived at the response
+
+Feedback Requirements:
+- In your <feedback> tags, provide constructive feedback on the tools:
+ - Comment on tool names: Are they clear and descriptive?
+ - Comment on input parameters: Are they well-documented? Are required vs optional parameters clear?
+ - Comment on descriptions: Do they accurately describe what the tool does?
+ - Comment on any errors encountered during tool usage: Did the tool fail to execute? Did the tool return too many tokens?
+ - Identify specific areas for improvement and explain WHY they would help
+ - Be specific and actionable in your suggestions
+
+Response Requirements:
+- Your response should be concise and directly address what was asked
+- Always wrap your final response in <response></response> tags
+- If you cannot solve the task return NOT_FOUND
+- For numeric responses, provide just the number
+- For IDs, provide just the ID
+- For names or text, provide the exact text requested
+- Your response should go last"""
+
+
+def parse_evaluation_file(file_path: Path) -> list[dict[str, Any]]:
+ """Parse XML evaluation file with qa_pair elements."""
+ try:
+ tree = ET.parse(file_path)
+ root = tree.getroot()
+ evaluations = []
+
+ for qa_pair in root.findall(".//qa_pair"):
+ question_elem = qa_pair.find("question")
+ answer_elem = qa_pair.find("answer")
+
+ if question_elem is not None and answer_elem is not None:
+ evaluations.append({
+ "question": (question_elem.text or "").strip(),
+ "answer": (answer_elem.text or "").strip(),
+ })
+
+ return evaluations
+ except Exception as e:
+ print(f"Error parsing evaluation file {file_path}: {e}")
+ return []
+
+
+def extract_xml_content(text: str, tag: str) -> str | None:
+ """Extract content from XML tags."""
+ pattern = rf"<{tag}>(.*?){tag}>"
+ matches = re.findall(pattern, text, re.DOTALL)
+ return matches[-1].strip() if matches else None
+
+
+async def agent_loop(
+ client: Anthropic,
+ model: str,
+ question: str,
+ tools: list[dict[str, Any]],
+ connection: Any,
+) -> tuple[str, dict[str, Any]]:
+ """Run the agent loop with MCP tools."""
+ messages = [{"role": "user", "content": question}]
+
+ response = await asyncio.to_thread(
+ client.messages.create,
+ model=model,
+ max_tokens=4096,
+ system=EVALUATION_PROMPT,
+ messages=messages,
+ tools=tools,
+ )
+
+ messages.append({"role": "assistant", "content": response.content})
+
+ tool_metrics = {}
+
+ while response.stop_reason == "tool_use":
+ tool_use = next(block for block in response.content if block.type == "tool_use")
+ tool_name = tool_use.name
+ tool_input = tool_use.input
+
+ tool_start_ts = time.time()
+ try:
+ tool_result = await connection.call_tool(tool_name, tool_input)
+ tool_response = json.dumps(tool_result) if isinstance(tool_result, (dict, list)) else str(tool_result)
+ except Exception as e:
+ tool_response = f"Error executing tool {tool_name}: {str(e)}\n"
+ tool_response += traceback.format_exc()
+ tool_duration = time.time() - tool_start_ts
+
+ if tool_name not in tool_metrics:
+ tool_metrics[tool_name] = {"count": 0, "durations": []}
+ tool_metrics[tool_name]["count"] += 1
+ tool_metrics[tool_name]["durations"].append(tool_duration)
+
+ messages.append({
+ "role": "user",
+ "content": [{
+ "type": "tool_result",
+ "tool_use_id": tool_use.id,
+ "content": tool_response,
+ }]
+ })
+
+ response = await asyncio.to_thread(
+ client.messages.create,
+ model=model,
+ max_tokens=4096,
+ system=EVALUATION_PROMPT,
+ messages=messages,
+ tools=tools,
+ )
+ messages.append({"role": "assistant", "content": response.content})
+
+ response_text = next(
+ (block.text for block in response.content if hasattr(block, "text")),
+ None,
+ )
+ return response_text, tool_metrics
+
+
+async def evaluate_single_task(
+ client: Anthropic,
+ model: str,
+ qa_pair: dict[str, Any],
+ tools: list[dict[str, Any]],
+ connection: Any,
+ task_index: int,
+) -> dict[str, Any]:
+ """Evaluate a single QA pair with the given tools."""
+ start_time = time.time()
+
+ print(f"Task {task_index + 1}: Running task with question: {qa_pair['question']}")
+ response, tool_metrics = await agent_loop(client, model, qa_pair["question"], tools, connection)
+
+ response_value = extract_xml_content(response, "response")
+ summary = extract_xml_content(response, "summary")
+ feedback = extract_xml_content(response, "feedback")
+
+ duration_seconds = time.time() - start_time
+
+ return {
+ "question": qa_pair["question"],
+ "expected": qa_pair["answer"],
+ "actual": response_value,
+ "score": int(response_value == qa_pair["answer"]) if response_value else 0,
+ "total_duration": duration_seconds,
+ "tool_calls": tool_metrics,
+ "num_tool_calls": sum(len(metrics["durations"]) for metrics in tool_metrics.values()),
+ "summary": summary,
+ "feedback": feedback,
+ }
+
+
+REPORT_HEADER = """
+# Evaluation Report
+
+## Summary
+
+- **Accuracy**: {correct}/{total} ({accuracy:.1f}%)
+- **Average Task Duration**: {average_duration_s:.2f}s
+- **Average Tool Calls per Task**: {average_tool_calls:.2f}
+- **Total Tool Calls**: {total_tool_calls}
+
+---
+"""
+
+TASK_TEMPLATE = """
+### Task {task_num}
+
+**Question**: {question}
+**Ground Truth Answer**: `{expected_answer}`
+**Actual Answer**: `{actual_answer}`
+**Correct**: {correct_indicator}
+**Duration**: {total_duration:.2f}s
+**Tool Calls**: {tool_calls}
+
+**Summary**
+{summary}
+
+**Feedback**
+{feedback}
+
+---
+"""
+
+
+async def run_evaluation(
+ eval_path: Path,
+ connection: Any,
+ model: str = "claude-3-7-sonnet-20250219",
+) -> str:
+ """Run evaluation with MCP server tools."""
+ print("🚀 Starting Evaluation")
+
+ client = Anthropic()
+
+ tools = await connection.list_tools()
+ print(f"📋 Loaded {len(tools)} tools from MCP server")
+
+ qa_pairs = parse_evaluation_file(eval_path)
+ print(f"📋 Loaded {len(qa_pairs)} evaluation tasks")
+
+ results = []
+ for i, qa_pair in enumerate(qa_pairs):
+ print(f"Processing task {i + 1}/{len(qa_pairs)}")
+ result = await evaluate_single_task(client, model, qa_pair, tools, connection, i)
+ results.append(result)
+
+ correct = sum(r["score"] for r in results)
+ accuracy = (correct / len(results)) * 100 if results else 0
+ average_duration_s = sum(r["total_duration"] for r in results) / len(results) if results else 0
+ average_tool_calls = sum(r["num_tool_calls"] for r in results) / len(results) if results else 0
+ total_tool_calls = sum(r["num_tool_calls"] for r in results)
+
+ report = REPORT_HEADER.format(
+ correct=correct,
+ total=len(results),
+ accuracy=accuracy,
+ average_duration_s=average_duration_s,
+ average_tool_calls=average_tool_calls,
+ total_tool_calls=total_tool_calls,
+ )
+
+ report += "".join([
+ TASK_TEMPLATE.format(
+ task_num=i + 1,
+ question=qa_pair["question"],
+ expected_answer=qa_pair["answer"],
+ actual_answer=result["actual"] or "N/A",
+ correct_indicator="✅" if result["score"] else "❌",
+ total_duration=result["total_duration"],
+ tool_calls=json.dumps(result["tool_calls"], indent=2),
+ summary=result["summary"] or "N/A",
+ feedback=result["feedback"] or "N/A",
+ )
+ for i, (qa_pair, result) in enumerate(zip(qa_pairs, results))
+ ])
+
+ return report
+
+
+def parse_headers(header_list: list[str]) -> dict[str, str]:
+ """Parse header strings in format 'Key: Value' into a dictionary."""
+ headers = {}
+ if not header_list:
+ return headers
+
+ for header in header_list:
+ if ":" in header:
+ key, value = header.split(":", 1)
+ headers[key.strip()] = value.strip()
+ else:
+ print(f"Warning: Ignoring malformed header: {header}")
+ return headers
+
+
+def parse_env_vars(env_list: list[str]) -> dict[str, str]:
+ """Parse environment variable strings in format 'KEY=VALUE' into a dictionary."""
+ env = {}
+ if not env_list:
+ return env
+
+ for env_var in env_list:
+ if "=" in env_var:
+ key, value = env_var.split("=", 1)
+ env[key.strip()] = value.strip()
+ else:
+ print(f"Warning: Ignoring malformed environment variable: {env_var}")
+ return env
+
+
+async def main():
+ parser = argparse.ArgumentParser(
+ description="Evaluate MCP servers using test questions",
+ formatter_class=argparse.RawDescriptionHelpFormatter,
+ epilog="""
+Examples:
+ # Evaluate a local stdio MCP server
+ python evaluation.py -t stdio -c python -a my_server.py eval.xml
+
+ # Evaluate an SSE MCP server
+ python evaluation.py -t sse -u https://example.com/mcp -H "Authorization: Bearer token" eval.xml
+
+ # Evaluate an HTTP MCP server with custom model
+ python evaluation.py -t http -u https://example.com/mcp -m claude-3-5-sonnet-20241022 eval.xml
+ """,
+ )
+
+ parser.add_argument("eval_file", type=Path, help="Path to evaluation XML file")
+ parser.add_argument("-t", "--transport", choices=["stdio", "sse", "http"], default="stdio", help="Transport type (default: stdio)")
+ parser.add_argument("-m", "--model", default="claude-3-7-sonnet-20250219", help="Claude model to use (default: claude-3-7-sonnet-20250219)")
+
+ stdio_group = parser.add_argument_group("stdio options")
+ stdio_group.add_argument("-c", "--command", help="Command to run MCP server (stdio only)")
+ stdio_group.add_argument("-a", "--args", nargs="+", help="Arguments for the command (stdio only)")
+ stdio_group.add_argument("-e", "--env", nargs="+", help="Environment variables in KEY=VALUE format (stdio only)")
+
+ remote_group = parser.add_argument_group("sse/http options")
+ remote_group.add_argument("-u", "--url", help="MCP server URL (sse/http only)")
+ remote_group.add_argument("-H", "--header", nargs="+", dest="headers", help="HTTP headers in 'Key: Value' format (sse/http only)")
+
+ parser.add_argument("-o", "--output", type=Path, help="Output file for evaluation report (default: stdout)")
+
+ args = parser.parse_args()
+
+ if not args.eval_file.exists():
+ print(f"Error: Evaluation file not found: {args.eval_file}")
+ sys.exit(1)
+
+ headers = parse_headers(args.headers) if args.headers else None
+ env_vars = parse_env_vars(args.env) if args.env else None
+
+ try:
+ connection = create_connection(
+ transport=args.transport,
+ command=args.command,
+ args=args.args,
+ env=env_vars,
+ url=args.url,
+ headers=headers,
+ )
+ except ValueError as e:
+ print(f"Error: {e}")
+ sys.exit(1)
+
+ print(f"🔗 Connecting to MCP server via {args.transport}...")
+
+ async with connection:
+ print("✅ Connected successfully")
+ report = await run_evaluation(args.eval_file, connection, args.model)
+
+ if args.output:
+ args.output.write_text(report)
+ print(f"\n✅ Report saved to {args.output}")
+ else:
+ print("\n" + report)
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
diff --git a/skills/mcp-builder/scripts/example_evaluation.xml b/skills/mcp-builder/scripts/example_evaluation.xml
new file mode 100644
index 0000000..41e4459
--- /dev/null
+++ b/skills/mcp-builder/scripts/example_evaluation.xml
@@ -0,0 +1,22 @@
+<evaluation>
+  <qa_pair>
+    <question>Calculate the compound interest on $10,000 invested at 5% annual interest rate, compounded monthly for 3 years. What is the final amount in dollars (rounded to 2 decimal places)?</question>
+    <answer>11614.72</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>A projectile is launched at a 45-degree angle with an initial velocity of 50 m/s. Calculate the total distance (in meters) it has traveled from the launch point after 2 seconds, assuming g=9.8 m/s². Round to 2 decimal places.</question>
+    <answer>87.25</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>A sphere has a volume of 500 cubic meters. Calculate its surface area in square meters. Round to 2 decimal places.</question>
+    <answer>304.65</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Calculate the population standard deviation of this dataset: [12, 15, 18, 22, 25, 30, 35]. Round to 2 decimal places.</question>
+    <answer>7.61</answer>
+  </qa_pair>
+  <qa_pair>
+    <question>Calculate the pH of a solution with a hydrogen ion concentration of 3.5 × 10^-5 M. Round to 2 decimal places.</question>
+    <answer>4.46</answer>
+  </qa_pair>
+</evaluation>
diff --git a/skills/mcp-builder/scripts/requirements.txt b/skills/mcp-builder/scripts/requirements.txt
new file mode 100644
index 0000000..e73e5d1
--- /dev/null
+++ b/skills/mcp-builder/scripts/requirements.txt
@@ -0,0 +1,2 @@
+anthropic>=0.39.0
+mcp>=1.1.0
diff --git a/skills/nestjs-best-practices/nestjs-best-practices b/skills/nestjs-best-practices/nestjs-best-practices
new file mode 120000
index 0000000..918a878
--- /dev/null
+++ b/skills/nestjs-best-practices/nestjs-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/nestjs-best-practices/
\ No newline at end of file
diff --git a/skills/next-best-practices/next-best-practices b/skills/next-best-practices/next-best-practices
new file mode 120000
index 0000000..c2f3748
--- /dev/null
+++ b/skills/next-best-practices/next-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/next-best-practices/
\ No newline at end of file
diff --git a/skills/nuxt/GENERATION.md b/skills/nuxt/GENERATION.md
new file mode 100644
index 0000000..6299090
--- /dev/null
+++ b/skills/nuxt/GENERATION.md
@@ -0,0 +1,5 @@
+# Generation Info
+
+- **Source:** `sources/nuxt`
+- **Git SHA:** `c9fed804b9bef362276033b03ca43730c6efa7dc`
+- **Generated:** 2026-01-28
diff --git a/skills/nuxt/SKILL.md b/skills/nuxt/SKILL.md
new file mode 100644
index 0000000..62e497e
--- /dev/null
+++ b/skills/nuxt/SKILL.md
@@ -0,0 +1,55 @@
+---
+name: nuxt
+description: Nuxt full-stack Vue framework with SSR, auto-imports, and file-based routing. Use when working with Nuxt apps, server routes, useFetch, middleware, or hybrid rendering.
+metadata:
+ author: Anthony Fu
+ version: "2026.1.28"
+ source: Generated from https://github.com/nuxt/nuxt, scripts located at https://github.com/antfu/skills
+---
+
+Nuxt is a full-stack Vue framework that provides server-side rendering, file-based routing, auto-imports, and a powerful module system. It uses Nitro as its server engine for universal deployment across Node.js, serverless, and edge platforms.
+
+> This skill is based on Nuxt 3.x, generated on 2026-01-28.
+
+## Core
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Directory Structure | Project folder structure, conventions, file organization | [core-directory-structure](references/core-directory-structure.md) |
+| Configuration | nuxt.config.ts, app.config.ts, runtime config, environment variables | [core-config](references/core-config.md) |
+| CLI Commands | Dev server, build, generate, preview, and utility commands | [core-cli](references/core-cli.md) |
+| Routing | File-based routing, dynamic routes, navigation, middleware, layouts | [core-routing](references/core-routing.md) |
+| Data Fetching | useFetch, useAsyncData, $fetch, caching, refresh | [core-data-fetching](references/core-data-fetching.md) |
+| Modules | Creating and using Nuxt modules, Nuxt Kit utilities | [core-modules](references/core-modules.md) |
+| Deployment | Platform-agnostic deployment with Nitro, Vercel, Netlify, Cloudflare | [core-deployment](references/core-deployment.md) |
+
+## Features
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Composables Auto-imports | Vue APIs, Nuxt composables, custom composables, utilities | [features-composables](references/features-composables.md) |
+| Components Auto-imports | Component naming, lazy loading, hydration strategies | [features-components-autoimport](references/features-components-autoimport.md) |
+| Built-in Components | NuxtLink, NuxtPage, NuxtLayout, ClientOnly, and more | [features-components](references/features-components.md) |
+| State Management | useState composable, SSR-friendly state, Pinia integration | [features-state](references/features-state.md) |
+| Server Routes | API routes, server middleware, Nitro server engine | [features-server](references/features-server.md) |
+
+## Rendering
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Rendering Modes | Universal (SSR), client-side (SPA), hybrid rendering, route rules | [rendering-modes](references/rendering-modes.md) |
+
+## Best Practices
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Data Fetching Patterns | Efficient fetching, caching, parallel requests, error handling | [best-practices-data-fetching](references/best-practices-data-fetching.md) |
+| SSR & Hydration | Avoiding context leaks, hydration mismatches, composable patterns | [best-practices-ssr](references/best-practices-ssr.md) |
+
+## Advanced
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Layers | Extending applications with reusable layers | [advanced-layers](references/advanced-layers.md) |
+| Lifecycle Hooks | Build-time, runtime, and server hooks | [advanced-hooks](references/advanced-hooks.md) |
+| Module Authoring | Creating publishable Nuxt modules with Nuxt Kit | [advanced-module-authoring](references/advanced-module-authoring.md) |
diff --git a/skills/nuxt/references/advanced-hooks.md b/skills/nuxt/references/advanced-hooks.md
new file mode 100644
index 0000000..b61d1ed
--- /dev/null
+++ b/skills/nuxt/references/advanced-hooks.md
@@ -0,0 +1,289 @@
+---
+name: lifecycle-hooks
+description: Nuxt and Nitro hooks for extending build-time and runtime behavior
+---
+
+# Lifecycle Hooks
+
+Nuxt provides hooks to tap into the build process, application lifecycle, and server runtime.
+
+## Build-time Hooks (Nuxt)
+
+Used in `nuxt.config.ts` or modules:
+
+### In nuxt.config.ts
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ hooks: {
+ 'build:before': () => {
+ console.log('Build starting...')
+ },
+ 'pages:extend': (pages) => {
+ // Add custom pages
+ pages.push({
+ name: 'custom',
+ path: '/custom',
+ file: '~/pages/custom.vue',
+ })
+ },
+ 'components:dirs': (dirs) => {
+ // Add component directories
+ dirs.push({ path: '~/extra-components' })
+ },
+ },
+})
+```
+
+### In Modules
+
+```ts
+// modules/my-module.ts
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ nuxt.hook('ready', async (nuxt) => {
+ console.log('Nuxt is ready')
+ })
+
+ nuxt.hook('close', async (nuxt) => {
+ console.log('Nuxt is closing')
+ })
+
+ nuxt.hook('modules:done', () => {
+ console.log('All modules loaded')
+ })
+ },
+})
+```
+
+### Common Build Hooks
+
+| Hook | When |
+|------|------|
+| `ready` | Nuxt initialization complete |
+| `close` | Nuxt is closing |
+| `modules:done` | All modules installed |
+| `build:before` | Before build starts |
+| `build:done` | Build complete |
+| `pages:extend` | Pages routes resolved |
+| `components:dirs` | Component dirs being resolved |
+| `imports:extend` | Auto-imports being resolved |
+| `nitro:config` | Before Nitro config finalized |
+| `vite:extend` | Vite context created |
+| `vite:extendConfig` | Before Vite config finalized |
+
+## App Hooks (Runtime)
+
+Used in plugins and composables:
+
+### In Plugins
+
+```ts
+// plugins/lifecycle.ts
+export default defineNuxtPlugin((nuxtApp) => {
+ nuxtApp.hook('app:created', (vueApp) => {
+ console.log('Vue app created')
+ })
+
+ nuxtApp.hook('app:mounted', (vueApp) => {
+ console.log('App mounted')
+ })
+
+ nuxtApp.hook('page:start', () => {
+ console.log('Page navigation starting')
+ })
+
+ nuxtApp.hook('page:finish', () => {
+ console.log('Page navigation finished')
+ })
+
+ nuxtApp.hook('page:loading:start', () => {
+ console.log('Page loading started')
+ })
+
+ nuxtApp.hook('page:loading:end', () => {
+ console.log('Page loading ended')
+ })
+})
+```
+
+### Common App Hooks
+
+| Hook | When |
+|------|------|
+| `app:created` | Vue app created |
+| `app:mounted` | Vue app mounted (client only) |
+| `app:error` | Fatal error occurred |
+| `page:start` | Page navigation starting |
+| `page:finish` | Page navigation finished |
+| `page:loading:start` | Loading indicator should show |
+| `page:loading:end` | Loading indicator should hide |
+| `link:prefetch` | Link is being prefetched |
+
+### Using Runtime Hooks
+
+```ts
+// composables/usePageTracking.ts
+export function usePageTracking() {
+ const nuxtApp = useNuxtApp()
+
+ nuxtApp.hook('page:finish', () => {
+ trackPageView(useRoute().path)
+ })
+}
+```
+
+## Server Hooks (Nitro)
+
+Used in server plugins:
+
+```ts
+// server/plugins/hooks.ts
+export default defineNitroPlugin((nitroApp) => {
+ // Modify HTML before sending
+ nitroApp.hooks.hook('render:html', (html, { event }) => {
+    html.head.push('<meta name="injected" content="true">')
+    html.bodyAppend.push('<div id="injected-root"></div>')
+ })
+
+ // Modify response
+ nitroApp.hooks.hook('render:response', (response, { event }) => {
+ console.log('Sending response:', response.statusCode)
+ })
+
+ // Before request
+ nitroApp.hooks.hook('request', (event) => {
+ console.log('Request:', event.path)
+ })
+
+ // After response
+ nitroApp.hooks.hook('afterResponse', (event) => {
+ console.log('Response sent')
+ })
+})
+```
+
+### Common Nitro Hooks
+
+| Hook | When |
+|------|------|
+| `request` | Request received |
+| `beforeResponse` | Before sending response |
+| `afterResponse` | After response sent |
+| `render:html` | Before HTML is sent |
+| `render:response` | Before response is finalized |
+| `error` | Error occurred |
+
+## Custom Hooks
+
+### Define Custom Hook Types
+
+```ts
+// types/hooks.d.ts
+import type { HookResult } from '@nuxt/schema'
+
+declare module '#app' {
+ interface RuntimeNuxtHooks {
+ 'my-app:event': (data: MyEventData) => HookResult
+ }
+}
+
+declare module '@nuxt/schema' {
+ interface NuxtHooks {
+ 'my-module:init': () => HookResult
+ }
+}
+
+declare module 'nitropack/types' {
+ interface NitroRuntimeHooks {
+ 'my-server:event': (data: any) => void
+ }
+}
+```
+
+### Call Custom Hooks
+
+```ts
+// In a plugin
+export default defineNuxtPlugin((nuxtApp) => {
+ // Call custom hook
+ nuxtApp.callHook('my-app:event', { type: 'custom' })
+})
+
+// In a module
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ nuxt.callHook('my-module:init')
+ },
+})
+```
+
+## useRuntimeHook
+
+Call hooks at runtime from components:
+
+```vue
+<script setup lang="ts">
+// Registered for the component's lifetime, removed on unmount
+useRuntimeHook('link:prefetch', (link) => {
+  console.log('Prefetching', link)
+})
+</script>
+```
+
+## Hook Examples
+
+### Page View Tracking
+
+```ts
+// plugins/analytics.client.ts
+export default defineNuxtPlugin((nuxtApp) => {
+ nuxtApp.hook('page:finish', () => {
+ const route = useRoute()
+ analytics.track('pageview', {
+ path: route.path,
+ title: document.title,
+ })
+ })
+})
+```
+
+### Performance Monitoring
+
+```ts
+// plugins/performance.client.ts
+export default defineNuxtPlugin((nuxtApp) => {
+ let navigationStart: number
+
+ nuxtApp.hook('page:start', () => {
+ navigationStart = performance.now()
+ })
+
+ nuxtApp.hook('page:finish', () => {
+ const duration = performance.now() - navigationStart
+ console.log(`Navigation took ${duration}ms`)
+ })
+})
+```
+
+### Inject HTML
+
+```ts
+// server/plugins/inject.ts
+export default defineNitroPlugin((nitroApp) => {
+ nitroApp.hooks.hook('render:html', (html) => {
+    html.head.push(`
+      <link rel="preconnect" href="https://fonts.example.com">
+    `)
+ })
+})
+```
+
+
diff --git a/skills/nuxt/references/advanced-layers.md b/skills/nuxt/references/advanced-layers.md
new file mode 100644
index 0000000..94b4ae0
--- /dev/null
+++ b/skills/nuxt/references/advanced-layers.md
@@ -0,0 +1,299 @@
+---
+name: nuxt-layers
+description: Extending Nuxt applications with layers for code sharing and reusability
+---
+
+# Nuxt Layers
+
+Layers allow sharing and reusing partial Nuxt applications across projects. They can include components, composables, pages, layouts, and configuration.
+
+## Using Layers
+
+### From npm Package
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ extends: [
+ '@my-org/base-layer',
+ '@nuxtjs/ui-layer',
+ ],
+})
+```
+
+### From Git Repository
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ extends: [
+ 'github:username/repo',
+ 'github:username/repo/base', // Subdirectory
+ 'github:username/repo#v1.0', // Specific tag
+ 'github:username/repo#dev', // Branch
+ 'gitlab:username/repo',
+ 'bitbucket:username/repo',
+ ],
+})
+```
+
+### From Local Directory
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ extends: [
+ '../base-layer',
+ './layers/shared',
+ ],
+})
+```
+
+### Auto-scanned Layers
+
+Place in `layers/` directory for automatic discovery:
+
+```
+my-app/
+├── layers/
+│ ├── base/
+│ │ └── nuxt.config.ts
+│ └── ui/
+│ └── nuxt.config.ts
+└── nuxt.config.ts
+```
+
+## Creating a Layer
+
+Minimal layer structure:
+
+```
+my-layer/
+├── nuxt.config.ts # Required
+├── app/
+│ ├── components/ # Auto-merged
+│ ├── composables/ # Auto-merged
+│ ├── layouts/ # Auto-merged
+│ ├── middleware/ # Auto-merged
+│ ├── pages/ # Auto-merged
+│ ├── plugins/ # Auto-merged
+│ └── app.config.ts # Merged
+├── server/ # Auto-merged
+└── package.json
+```
+
+### Layer nuxt.config.ts
+
+```ts
+// my-layer/nuxt.config.ts
+export default defineNuxtConfig({
+ // Layer configuration
+ app: {
+ head: {
+ title: 'My Layer App',
+ },
+ },
+ // Shared modules
+ modules: ['@nuxt/ui'],
+})
+```
+
+### Layer Components
+
+```vue
+<!-- my-layer/app/components/BaseButton.vue -->
+<template>
+  <button class="base-button">
+    <slot />
+  </button>
+</template>
+```
+
+Use in consuming project:
+
+```vue
+<template>
+  <BaseButton>Click me</BaseButton>
+</template>
+```
+
+### Layer Composables
+
+```ts
+// my-layer/app/composables/useTheme.ts
+export function useTheme() {
+ const isDark = useState('theme-dark', () => false)
+ const toggle = () => isDark.value = !isDark.value
+ return { isDark, toggle }
+}
+```
+
+## Layer Priority
+
+Override order (highest to lowest):
+1. Your project files
+2. Auto-scanned layers (alphabetically, Z > A)
+3. `extends` array (first > last)
+
+Control order with prefixes:
+
+```
+layers/
+├── 1.base/ # Lower priority
+└── 2.theme/ # Higher priority
+```
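Conceptually, the resolved config behaves like a lowest-to-highest-priority defaults merge — a minimal standalone sketch of the idea (a shallow merge for illustration only; Nuxt's real layer merging is deep and handles arrays, hooks, and more):

```typescript
// Sketch: later (higher-priority) configs override earlier ones,
// mirroring extends < auto-scanned layers < project files.
type Config = Record<string, unknown>

function mergeByPriority(...lowestToHighest: Config[]): Config {
  // Object.assign applies left→right, so later sources win
  return Object.assign({}, ...lowestToHighest)
}

const extendsLayer = { title: 'Base', theme: 'light' }
const scannedLayer = { theme: 'dark' }
const project = { title: 'My App' }

const resolved = mergeByPriority(extendsLayer, scannedLayer, project)
console.log(resolved) // → { title: 'My App', theme: 'dark' }
```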
+
+## Layer Aliases
+
+Access layer files:
+
+```ts
+// Auto-scanned layers get aliases
+import Component from '#layers/base/components/Component.vue'
+```
+
+Named aliases:
+
+```ts
+// my-layer/nuxt.config.ts
+export default defineNuxtConfig({
+ $meta: {
+ name: 'my-layer',
+ },
+})
+```
+
+```ts
+// In consuming project
+import { something } from '#layers/my-layer/utils'
+```
+
+## Publishing Layers
+
+### As npm Package
+
+```json
+{
+ "name": "my-nuxt-layer",
+ "version": "1.0.0",
+ "type": "module",
+ "main": "./nuxt.config.ts",
+ "dependencies": {
+ "@nuxt/ui": "^2.0.0"
+ },
+ "devDependencies": {
+ "nuxt": "^3.0.0"
+ }
+}
+```
+
+### Private Layers
+
+For private git repos:
+
+```bash
+export GIGET_AUTH=
+```
+
+## Layer Best Practices
+
+### Use Resolved Paths
+
+```ts
+// my-layer/nuxt.config.ts
+import { fileURLToPath } from 'node:url'
+import { dirname, join } from 'node:path'
+
+const currentDir = dirname(fileURLToPath(import.meta.url))
+
+export default defineNuxtConfig({
+ css: [
+ join(currentDir, './assets/main.css'),
+ ],
+})
+```
+
+### Install Dependencies
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ extends: [
+ ['github:user/layer', { install: true }],
+ ],
+})
+```
+
+### Disable Layer Modules
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ extends: ['./base-layer'],
+ // Disable modules from layer
+ image: false, // Disables @nuxt/image
+ pinia: false, // Disables @pinia/nuxt
+})
+```
+
+## Starter Template
+
+Create a new layer:
+
+```bash
+npx nuxi init --template layer my-layer
+```
+
+## Example: Theme Layer
+
+```
+theme-layer/
+├── nuxt.config.ts
+├── app/
+│ ├── app.config.ts
+│ ├── components/
+│ │ ├── ThemeButton.vue
+│ │ └── ThemeCard.vue
+│ ├── composables/
+│ │ └── useTheme.ts
+│ └── assets/
+│ └── theme.css
+└── package.json
+```
+
+```ts
+// theme-layer/nuxt.config.ts
+export default defineNuxtConfig({
+ css: ['~/assets/theme.css'],
+})
+```
+
+```ts
+// theme-layer/app/app.config.ts
+export default defineAppConfig({
+ theme: {
+ primaryColor: '#00dc82',
+ darkMode: false,
+ },
+})
+```
+
+```ts
+// consuming-app/nuxt.config.ts
+export default defineNuxtConfig({
+ extends: ['theme-layer'],
+})
+
+// consuming-app/app/app.config.ts
+export default defineAppConfig({
+ theme: {
+ primaryColor: '#ff0000', // Override
+ },
+})
+```
+
+
diff --git a/skills/nuxt/references/advanced-module-authoring.md b/skills/nuxt/references/advanced-module-authoring.md
new file mode 100644
index 0000000..e08525d
--- /dev/null
+++ b/skills/nuxt/references/advanced-module-authoring.md
@@ -0,0 +1,554 @@
+---
+name: module-authoring
+description: Complete guide to creating publishable Nuxt modules with best practices
+---
+
+# Module Authoring
+
+This guide covers creating publishable Nuxt modules with proper structure, type safety, and best practices.
+
+## Module Structure
+
+Recommended structure for a publishable module:
+
+```
+my-nuxt-module/
+├── src/
+│ ├── module.ts # Module entry
+│ └── runtime/
+│ ├── components/ # Vue components
+│ ├── composables/ # Composables
+│ ├── plugins/ # Nuxt plugins
+│ └── server/ # Server handlers
+├── playground/ # Development app
+├── package.json
+└── tsconfig.json
+```
+
+## Module Definition
+
+### Basic Module with Type-safe Options
+
+```ts
+// src/module.ts
+import { defineNuxtModule, createResolver, addPlugin, addComponent, addImports } from '@nuxt/kit'
+
+export interface ModuleOptions {
+ prefix?: string
+ apiKey: string
+ enabled?: boolean
+}
+
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ configKey: 'myModule',
+ compatibility: {
+ nuxt: '>=3.0.0',
+ },
+ },
+ defaults: {
+ prefix: 'My',
+ enabled: true,
+ },
+ setup(options, nuxt) {
+ if (!options.enabled) return
+
+ const { resolve } = createResolver(import.meta.url)
+
+ // Module setup logic here
+ },
+})
+```
+
+### Using `.with()` for Strict Type Inference
+
+When you need TypeScript to infer that default values are always present:
+
+```ts
+import { defineNuxtModule } from '@nuxt/kit'
+
+interface ModuleOptions {
+ apiKey: string
+ baseURL: string
+ timeout?: number
+}
+
+export default defineNuxtModule().with({
+ meta: {
+ name: '@nuxtjs/my-api',
+ configKey: 'myApi',
+ },
+ defaults: {
+ baseURL: 'https://api.example.com',
+ timeout: 5000,
+ },
+ setup(resolvedOptions, nuxt) {
+ // resolvedOptions.baseURL is guaranteed to be string (not undefined)
+ // resolvedOptions.timeout is guaranteed to be number (not undefined)
+ },
+})
+```
+
+## Adding Runtime Assets
+
+### Components
+
+```ts
+import { addComponent, addComponentsDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Single component
+ addComponent({
+ name: 'MyButton',
+ filePath: resolve('./runtime/components/MyButton.vue'),
+ })
+
+ // Component directory with prefix
+ addComponentsDir({
+ path: resolve('./runtime/components'),
+ prefix: 'My',
+ pathPrefix: false,
+ })
+ },
+})
+```
+
+### Composables and Auto-imports
+
+```ts
+import { addImports, addImportsDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Single import
+ addImports({
+ name: 'useMyUtil',
+ from: resolve('./runtime/composables/useMyUtil'),
+ })
+
+ // Directory of composables
+ addImportsDir(resolve('./runtime/composables'))
+ },
+})
+```
+
+### Plugins
+
+```ts
+import { addPlugin, addPluginTemplate, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup(options) {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Static plugin file
+ addPlugin({
+ src: resolve('./runtime/plugins/myPlugin'),
+ mode: 'client', // 'client', 'server', or 'all'
+ })
+
+ // Dynamic plugin with generated code
+ addPluginTemplate({
+ filename: 'my-module-plugin.mjs',
+ getContents: () => `
+import { defineNuxtPlugin } from '#app/nuxt'
+
+export default defineNuxtPlugin({
+ name: 'my-module',
+ setup() {
+ const config = ${JSON.stringify(options)}
+ // Plugin logic
+ }
+})`,
+ })
+ },
+})
+```
+
+## Server Extensions
+
+### Server Handlers
+
+```ts
+import { addServerHandler, addServerScanDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Single handler
+ addServerHandler({
+ route: '/api/my-endpoint',
+ handler: resolve('./runtime/server/api/my-endpoint'),
+ })
+
+ // Scan entire server directory (api/, routes/, middleware/, utils/)
+ addServerScanDir(resolve('./runtime/server'))
+ },
+})
+```
+
+### Server Composables
+
+```ts
+import { addServerImports, addServerImportsDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Single server import
+ addServerImports({
+ name: 'useServerUtil',
+ from: resolve('./runtime/server/utils/useServerUtil'),
+ })
+
+ // Server composables directory
+ addServerImportsDir(resolve('./runtime/server/composables'))
+ },
+})
+```
+
+### Nitro Plugin
+
+```ts
+import { addServerPlugin, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+ addServerPlugin(resolve('./runtime/server/plugin'))
+ },
+})
+```
+
+```ts
+// runtime/server/plugin.ts
+import { defineNitroPlugin } from 'nitropack/runtime'
+
+export default defineNitroPlugin((nitroApp) => {
+ nitroApp.hooks.hook('request', (event) => {
+ console.log('Request:', event.path)
+ })
+})
+```
+
+## Templates and Virtual Files
+
+### Generate Virtual Files
+
+```ts
+import { addTemplate, addTypeTemplate, addServerTemplate, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Client/build virtual file (accessible via #build/my-config.mjs)
+ addTemplate({
+ filename: 'my-config.mjs',
+ getContents: () => `export default ${JSON.stringify(options)}`,
+ })
+
+ // Type declarations
+ addTypeTemplate({
+ filename: 'types/my-module.d.ts',
+ getContents: () => `
+declare module '#my-module' {
+ export interface Config {
+ apiKey: string
+ }
+}`,
+ })
+
+ // Nitro virtual file (accessible in server routes)
+ addServerTemplate({
+ filename: '#my-module/config.mjs',
+ getContents: () => `export const config = ${JSON.stringify(options)}`,
+ })
+ },
+})
+```
+
+### Access Virtual Files
+
+```ts
+// In runtime plugin
+// @ts-expect-error - virtual file
+import config from '#build/my-config.mjs'
+
+// In server routes
+import { config } from '#my-module/config.mjs'
+```
+
+## Extending Pages and Routes
+
+```ts
+import { extendPages, extendRouteRules, addRouteMiddleware, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Add pages
+ extendPages((pages) => {
+ pages.push({
+ name: 'my-page',
+ path: '/my-route',
+ file: resolve('./runtime/pages/MyPage.vue'),
+ })
+ })
+
+ // Add route rules (caching, redirects, etc.)
+ extendRouteRules('/api/**', {
+ cache: { maxAge: 60 },
+ })
+
+ // Add middleware
+ addRouteMiddleware({
+ name: 'my-middleware',
+ path: resolve('./runtime/middleware/myMiddleware'),
+ global: true,
+ })
+ },
+})
+```
+
+## Module Dependencies
+
+Declare dependencies on other modules with version constraints:
+
+```ts
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ },
+ moduleDependencies: {
+ '@nuxtjs/tailwindcss': {
+ version: '>=6.0.0',
+ // Set defaults (user can override)
+ defaults: {
+ exposeConfig: true,
+ },
+ // Force specific options
+ overrides: {
+ viewer: false,
+ },
+ },
+ '@nuxtjs/i18n': {
+ optional: true, // Won't fail if not installed
+ defaults: {
+ defaultLocale: 'en',
+ },
+ },
+ },
+ setup() {
+ // Dependencies are guaranteed to be set up before this runs
+ },
+})
+```
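The relationship between `defaults`, the user's own config, and `overrides` can be sketched as a simple precedence chain (an illustration of the documented semantics, not Nuxt Kit's internals):

```typescript
// Precedence: defaults < user config < overrides
type Options = Record<string, unknown>

function resolveDependencyOptions(defaults: Options, user: Options, overrides: Options): Options {
  // Later spreads win: users override defaults, overrides are forced last
  return { ...defaults, ...user, ...overrides }
}

const resolved = resolveDependencyOptions(
  { exposeConfig: true, viewer: true }, // defaults from the declaring module
  { exposeConfig: false },              // user's nuxt.config values
  { viewer: false },                    // forced by the declaring module
)
console.log(resolved) // → { exposeConfig: false, viewer: false }
```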
+
+### Dynamic Dependencies
+
+```ts
+moduleDependencies(nuxt) {
+  const deps: Record<string, { version?: string; optional?: boolean }> = {
+ '@nuxtjs/tailwindcss': { version: '>=6.0.0' },
+ }
+
+ if (nuxt.options.ssr) {
+ deps['@nuxtjs/html-validator'] = { optional: true }
+ }
+
+ return deps
+}
+```
+
+## Lifecycle Hooks
+
+Requires `meta.name` and `meta.version`:
+
+```ts
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ version: '1.2.0',
+ },
+ onInstall(nuxt) {
+ // First-time setup
+ console.log('Module installed for the first time')
+ },
+ onUpgrade(nuxt, options, previousVersion) {
+ // Version upgrade migrations
+ console.log(`Upgrading from ${previousVersion}`)
+ },
+ setup(options, nuxt) {
+ // Regular setup runs every build
+ },
+})
+```
+
+## Extending Configuration
+
+```ts
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ // Add CSS
+ nuxt.options.css.push('my-module/styles.css')
+
+ // Add runtime config
+ nuxt.options.runtimeConfig.public.myModule = {
+ apiUrl: options.apiUrl,
+ }
+
+ // Extend Vite config
+ nuxt.options.vite.optimizeDeps ||= {}
+ nuxt.options.vite.optimizeDeps.include ||= []
+ nuxt.options.vite.optimizeDeps.include.push('some-package')
+
+ // Add build transpile
+ nuxt.options.build.transpile.push('my-package')
+ },
+})
+```
+
+## Using Hooks
+
+```ts
+export default defineNuxtModule({
+ // Declarative hooks
+ hooks: {
+ 'components:dirs': (dirs) => {
+ dirs.push({ path: '~/extra' })
+ },
+ },
+
+ setup(options, nuxt) {
+ // Programmatic hooks
+ nuxt.hook('pages:extend', (pages) => {
+ // Modify pages
+ })
+
+ nuxt.hook('imports:extend', (imports) => {
+ imports.push({ name: 'myHelper', from: 'my-package' })
+ })
+
+ nuxt.hook('nitro:config', (config) => {
+ // Modify Nitro config
+ })
+
+ nuxt.hook('vite:extendConfig', (config) => {
+ // Modify Vite config
+ })
+ },
+})
+```
+
+## Path Resolution
+
+```ts
+import { createResolver, resolvePath, findPath } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ async setup(options, nuxt) {
+ // Resolver relative to module
+ const { resolve } = createResolver(import.meta.url)
+
+ const pluginPath = resolve('./runtime/plugin')
+
+ // Resolve with extensions and aliases
+ const entrypoint = await resolvePath('@some/package')
+
+ // Find first existing file
+ const configPath = await findPath([
+ resolve('./config.ts'),
+ resolve('./config.js'),
+ ])
+ },
+})
+```
+
+## Module Package.json
+
+```json
+{
+ "name": "my-nuxt-module",
+ "version": "1.0.0",
+ "type": "module",
+ "exports": {
+ ".": {
+ "import": "./dist/module.mjs",
+ "require": "./dist/module.cjs"
+ }
+ },
+ "main": "./dist/module.cjs",
+ "module": "./dist/module.mjs",
+ "types": "./dist/types.d.ts",
+ "files": ["dist"],
+ "scripts": {
+ "dev": "nuxi dev playground",
+ "build": "nuxt-module-build build",
+ "prepare": "nuxt-module-build build --stub"
+ },
+ "dependencies": {
+ "@nuxt/kit": "^3.0.0"
+ },
+ "devDependencies": {
+ "@nuxt/module-builder": "latest",
+ "nuxt": "^3.0.0"
+ }
+}
+```
+
+## Disabling Modules
+
+Users can disable a module via config key:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ // Disable entirely
+ myModule: false,
+
+ // Or with options
+ myModule: {
+ enabled: false,
+ },
+})
+```
+
+## Development Workflow
+
+1. **Create module**: `npx nuxi init -t module my-module`
+2. **Develop**: `npm run dev` (runs playground)
+3. **Build**: `npm run build`
+4. **Test**: `npm run test`
+
+## Best Practices
+
+- Use `createResolver(import.meta.url)` for all path resolution
+- Prefix components to avoid naming conflicts
+- Make options type-safe with `ModuleOptions` interface
+- Use `moduleDependencies` instead of `installModule`
+- Provide sensible defaults for all options
+- Add compatibility requirements in `meta.compatibility`
+- Use virtual files for dynamic configuration
+- Separate client/server plugins appropriately
+
+
diff --git a/skills/nuxt/references/best-practices-data-fetching.md b/skills/nuxt/references/best-practices-data-fetching.md
new file mode 100644
index 0000000..ded5d2e
--- /dev/null
+++ b/skills/nuxt/references/best-practices-data-fetching.md
@@ -0,0 +1,357 @@
+---
+name: data-fetching-best-practices
+description: Patterns and best practices for efficient data fetching in Nuxt
+---
+
+# Data Fetching Best Practices
+
+Effective data fetching patterns for SSR-friendly, performant Nuxt applications.
+
+## Choose the Right Tool
+
+| Scenario | Use |
+|----------|-----|
+| Component initial data | `useFetch` or `useAsyncData` |
+| User interactions (clicks, forms) | `$fetch` |
+| Third-party SDK/API | `useAsyncData` with custom function |
+| Multiple parallel requests | `useAsyncData` with `Promise.all` |
+
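The reason `useFetch`/`useAsyncData` are preferred for initial data is payload reuse: the server's result is serialized into the payload and the client consumes it instead of requesting again. A minimal standalone sketch of that idea (illustrative only, not Nuxt's implementation):

```typescript
// Minimal payload-reuse cache: run once, replay from the payload afterwards.
const payload = new Map<string, unknown>()

function cachedFetch<T>(key: string, fetcher: () => T): T {
  if (payload.has(key)) return payload.get(key) as T // hydration: reuse, no refetch
  const value = fetcher()
  payload.set(key, value) // SSR: serialize the result into the payload
  return value
}

let calls = 0
const loadUsers = () => { calls++; return [{ id: 1, name: 'Ada' }] }

cachedFetch('users', loadUsers) // "server" render: runs the fetcher
cachedFetch('users', loadUsers) // "client" hydration: replays the payload
console.log(calls) // → 1
```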
+## Await vs Non-Await Usage
+
+The `await` keyword controls whether data fetching **blocks navigation**:
+
+### With `await` - Blocking Navigation
+
+```vue
+<script setup lang="ts">
+// Blocks navigation until the data has resolved
+const { data } = await useFetch('/api/users')
+</script>
+```
+
+- **Server**: Fetches data and includes it in the payload
+- **Client hydration**: Uses payload data, no re-fetch
+- **Client navigation**: Blocks until data is ready
+
+### Without `await` - Non-Blocking (Lazy)
+
+```vue
+<script setup lang="ts">
+// Navigation is not blocked; data starts as null and fills in
+const { data, status } = useFetch('/api/users')
+</script>
+
+<template>
+  <div v-if="status === 'pending'">Loading...</div>
+  <div v-else>{{ data }}</div>
+</template>
+```
+
+Equivalent to using `useLazyFetch`:
+
+```vue
+<script setup lang="ts">
+const { data, status } = await useLazyFetch('/api/users')
+</script>
+```
+
+### When to Use Each
+
+| Pattern | Use Case |
+|---------|----------|
+| `await useFetch()` | Critical data needed for SEO/initial render |
+| `useFetch({ lazy: true })` | Non-critical data, better perceived performance |
+| `await useLazyFetch()` | Same as lazy, await only ensures initialization |
+
+## Avoid Double Fetching
+
+### ❌ Wrong: Using $fetch Alone in Setup
+
+```vue
+<script setup lang="ts">
+// Executes on the server AND again on the client — data is fetched twice
+const users = await $fetch('/api/users')
+</script>
+```
+
+### ✅ Correct: Use useFetch
+
+```vue
+<script setup lang="ts">
+// Fetches once on the server; the payload is reused during hydration
+const { data: users } = await useFetch('/api/users')
+</script>
+```
+
+## Use Explicit Cache Keys
+
+### ❌ Avoid: Auto-generated Keys
+
+```vue
+<script setup lang="ts">
+// Key is derived from file and line number — fragile across refactors
+const { data } = await useAsyncData(() => $fetch(`/api/users/${id}`))
+</script>
+```
+
+### ✅ Better: Explicit Keys
+
+```vue
+<script setup lang="ts">
+const { data } = await useAsyncData(`user-${id}`, () => $fetch(`/api/users/${id}`))
+</script>
+```
+
+## Handle Loading States Properly
+
+```vue
+<script setup lang="ts">
+const { data, status, error } = useFetch('/api/posts', { lazy: true })
+</script>
+
+<template>
+  <div v-if="status === 'pending'">Loading...</div>
+  <div v-else-if="error">Something went wrong</div>
+  <ul v-else>
+    <li v-for="post in data" :key="post.id">{{ post.title }}</li>
+  </ul>
+</template>
+```
+
+## Use Lazy Fetching for Non-critical Data
+
+```vue
+<script setup lang="ts">
+// Does not block navigation; the template guards against null with `?.`
+const { data: post } = useFetch('/api/posts/1', { lazy: true })
+</script>
+
+<template>
+  <article>
+    <h1>{{ post?.title }}</h1>
+    <p>{{ post?.content }}</p>
+  </article>
+</template>
+```
+
+## Minimize Payload Size
+
+### Use `pick` for Simple Filtering
+
+```vue
+<script setup lang="ts">
+// Only these top-level keys are kept in the payload
+const { data: post } = await useFetch('/api/posts/1', {
+  pick: ['id', 'title'],
+})
+</script>
+```
+
+### Use `transform` for Complex Transformations
+
+```vue
+<script setup lang="ts">
+// Runs before the result is serialized into the payload
+const { data: posts } = await useFetch('/api/posts', {
+  transform: (posts) => posts.map(({ id, title }) => ({ id, title })),
+})
+</script>
+```
+
+## Parallel Fetching
+
+### Fetch Independent Data with useAsyncData
+
+```vue
+<script setup lang="ts">
+const { data } = await useAsyncData('dashboard', async () => {
+  // Run both requests in parallel inside one cached entry
+  const [users, posts] = await Promise.all([
+    $fetch('/api/users'),
+    $fetch('/api/posts'),
+  ])
+  return { users, posts }
+})
+</script>
+```
+
+### Multiple useFetch Calls
+
+```vue
+<script setup lang="ts">
+// Both requests start immediately; await them together
+const [{ data: users }, { data: posts }] = await Promise.all([
+  useFetch('/api/users'),
+  useFetch('/api/posts'),
+])
+</script>
+```
+
+## Efficient Refresh Patterns
+
+### Watch Reactive Dependencies
+
+```vue
+<script setup lang="ts">
+const page = ref(1)
+
+// Re-fetches automatically whenever `page` changes
+const { data } = await useFetch('/api/posts', {
+  query: { page },
+})
+</script>
+```
+
+### Manual Refresh
+
+```vue
+<script setup lang="ts">
+const { data, refresh } = await useFetch('/api/posts')
+</script>
+
+<template>
+  <button @click="refresh()">Reload</button>
+</template>
+```
+
+### Conditional Fetching
+
+```vue
+<script setup lang="ts">
+// Skip the automatic request; trigger it later with execute()
+const { data, execute } = await useFetch('/api/stats', {
+  immediate: false,
+})
+</script>
+```
+
+## Server-only Fetching
+
+```vue
+<script setup lang="ts">
+// Runs during SSR by default; the result ships in the payload,
+// so the client never repeats the request on hydration
+const { data } = await useFetch('/api/config', { server: true })
+</script>
+```
+
+## Error Handling
+
+```vue
+<script setup lang="ts">
+const { data, error, refresh } = await useFetch('/api/posts')
+</script>
+
+<template>
+  <div v-if="error">
+    Failed to load: {{ error.message }}
+    <button @click="refresh()">Retry</button>
+  </div>
+  <div v-else>{{ data }}</div>
+</template>
+```
+
+## Shared Data Across Components
+
+```vue
+<script setup lang="ts">
+// Any component using the same key shares one cached entry —
+// the request runs once, later callers reuse the result
+const { data: posts } = await useAsyncData('posts', () => $fetch('/api/posts'))
+
+// Elsewhere, read the cached value without re-fetching
+const { data: cached } = useNuxtData('posts')
+</script>
+```
+
+## Avoid useAsyncData for Side Effects
+
+### ❌ Wrong: Side Effects in useAsyncData
+
+```vue
+<script setup lang="ts">
+// useAsyncData is for fetching state, not one-off side effects —
+// the return value gets serialized into the payload for nothing
+await useAsyncData('init', async () => {
+  await initializeAnalytics() // hypothetical setup call
+  return null
+})
+</script>
+```
+
+### ✅ Correct: Use callOnce for Side Effects
+
+```vue
+<script setup lang="ts">
+// Runs once per app load, never duplicated across server and client
+await callOnce(async () => {
+  await initializeAnalytics() // hypothetical setup call
+})
+</script>
+```
+
+
diff --git a/skills/nuxt/references/best-practices-ssr.md b/skills/nuxt/references/best-practices-ssr.md
new file mode 100644
index 0000000..2befef0
--- /dev/null
+++ b/skills/nuxt/references/best-practices-ssr.md
@@ -0,0 +1,355 @@
+---
+name: ssr-best-practices
+description: Avoiding SSR context leaks, hydration mismatches, and proper composable usage
+---
+
+# SSR Best Practices
+
+Patterns for avoiding common SSR pitfalls: context leaks, hydration mismatches, and composable errors.
+
+## The "Nuxt Instance Unavailable" Error
+
+This error occurs when calling Nuxt composables outside the proper context.
+
+### ❌ Wrong: Composable Outside Setup
+
+```ts
+// composables/bad.ts
+// Called at module level - no Nuxt context!
+const config = useRuntimeConfig()
+
+export function useMyComposable() {
+ return config.public.apiBase
+}
+```
+
+### ✅ Correct: Composable Inside Function
+
+```ts
+// composables/good.ts
+export function useMyComposable() {
+ // Called inside the composable - has context
+ const config = useRuntimeConfig()
+ return config.public.apiBase
+}
+```
+
+### Valid Contexts for Composables
+
+Nuxt composables work in:
+- `<script setup>` blocks and component `setup()` functions
+- Nuxt plugins (`defineNuxtPlugin`)
+- Route middleware (`defineNuxtRouteMiddleware`)
+- Other composables called synchronously from the above
+
+## Hydration Mismatches
+
+### ❌ Wrong: Reading Browser APIs During Render
+
+```vue
+<script setup lang="ts">
+// window does not exist on the server — this throws during SSR
+const width = window.innerWidth
+</script>
+```
+
+### ✅ Correct: Use SSR-safe Alternatives
+
+```vue
+<script setup lang="ts">
+// Read browser values only after mount, with an SSR-safe default
+const width = ref(0)
+onMounted(() => {
+  width.value = window.innerWidth
+})
+</script>
+```
+
+### ❌ Wrong: Random/Time-based Values
+
+```vue
+<template>
+  <!-- Server and client produce different values — hydration mismatch -->
+  <div>{{ Math.random() }}</div>
+  <div>{{ new Date().toLocaleTimeString() }}</div>
+</template>
+```
+
+### ✅ Correct: Use useState for Consistency
+
+```vue
+<script setup lang="ts">
+// Generated once on the server and transferred to the client
+const randomValue = useState('random', () => Math.random())
+</script>
+
+<template>
+  <div>{{ randomValue }}</div>
+</template>
+```
+
+### ❌ Wrong: Conditional Rendering on Client State
+
+```vue
+<script setup lang="ts">
+// Evaluates differently on the server (no window) and the client
+const isDesktop = import.meta.client && window.innerWidth > 768
+</script>
+
+<template>
+  <div v-if="isDesktop">Desktop</div>
+</template>
+```
+
+### ✅ Correct: Use CSS or ClientOnly
+
+```vue
+<template>
+  <!-- CSS media queries: both branches render on the server -->
+  <div class="show-desktop">Desktop</div>
+  <div class="show-mobile">Mobile</div>
+
+  <!-- Or defer the decision to the client with a fallback -->
+  <ClientOnly>
+    <ViewportAwareWidget />
+    <template #fallback>Loading...</template>
+  </ClientOnly>
+</template>
+```
+
+## Browser-only Code
+
+### Use `import.meta.client`
+
+```vue
+<script setup lang="ts">
+if (import.meta.client) {
+  // Tree-shaken out of the server bundle
+  localStorage.setItem('visited', 'true')
+}
+</script>
+```
+
+### Use `onMounted` for DOM Access
+
+```vue
+<script setup lang="ts">
+onMounted(() => {
+  // The DOM exists only after client-side mount
+  document.body.classList.add('hydrated')
+})
+</script>
+```
+
+### Dynamic Imports for Browser Libraries
+
+```vue
+<script setup lang="ts">
+onMounted(async () => {
+  // Loaded only in the browser, never evaluated during SSR
+  const { default: confetti } = await import('canvas-confetti')
+  confetti()
+})
+</script>
+```
+
+## Server-only Code
+
+### Use `import.meta.server`
+
+```vue
+<script setup lang="ts">
+if (import.meta.server) {
+  // Runs only during server-side rendering
+  console.log('Rendering on the server')
+}
+</script>
+```
+
+### Server Components
+
+```vue
+<!-- components/HeavyChart.server.vue — rendered on the server only -->
+<script setup lang="ts">
+const { data } = await useFetch('/api/expensive-report')
+</script>
+
+<template>
+  <div>{{ data }}</div>
+</template>
+```
+
+## Async Composable Patterns
+
+### ❌ Wrong: Await Before Composable
+
+```vue
+<script setup lang="ts">
+// After the first await, the Nuxt context can be lost
+await fetchSomething()
+const config = useRuntimeConfig() // may throw "Nuxt instance unavailable"
+</script>
+```
+
+### ✅ Correct: Get Context First
+
+```vue
+<script setup lang="ts">
+// Call composables synchronously, before any await
+const config = useRuntimeConfig()
+await fetchSomething()
+</script>
+```
+
+## Plugin Best Practices
+
+### Client-only Plugins
+
+```ts
+// plugins/analytics.client.ts
+export default defineNuxtPlugin(() => {
+ // Only runs on client
+ initAnalytics()
+})
+```
+
+### Server-only Plugins
+
+```ts
+// plugins/server-init.server.ts
+export default defineNuxtPlugin(() => {
+ // Only runs on server
+ initServerConnections()
+})
+```
+
+### Provide/Inject Pattern
+
+```ts
+// plugins/api.ts
+export default defineNuxtPlugin(() => {
+ const api = createApiClient()
+
+ return {
+ provide: {
+ api,
+ },
+ }
+})
+```
+
+```vue
+<script setup lang="ts">
+// Injected values are exposed with a $ prefix
+const { $api } = useNuxtApp()
+const users = await $api('/users') // assumes createApiClient returns a callable
+</script>
+```
+
+## Third-party Library Integration
+
+### ❌ Wrong: Import at Top Level
+
+```vue
+<script setup lang="ts">
+// Evaluated during SSR — crashes if the library touches window
+import browserChart from 'some-browser-only-chart-lib'
+</script>
+```
+
+### ✅ Correct: Dynamic Import
+
+```vue
+<script setup lang="ts">
+onMounted(async () => {
+  // Imported only in the browser
+  const { default: browserChart } = await import('some-browser-only-chart-lib')
+})
+</script>
+```
+
+### Use ClientOnly Component
+
+```vue
+<template>
+  <ClientOnly>
+    <BrowserOnlyWidget />
+    <template #fallback>
+      Loading...
+    </template>
+  </ClientOnly>
+</template>
+```
+
+## Debugging SSR Issues
+
+### Check Rendering Context
+
+```vue
+<script setup lang="ts">
+console.log('server:', import.meta.server, 'client:', import.meta.client)
+</script>
+```
+
+### Use Nuxt DevTools
+
+DevTools shows payload data and hydration state.
+
+### Common Error Messages
+
+| Error | Cause |
+|-------|-------|
+| "Nuxt instance unavailable" | Composable called outside setup context |
+| "Hydration mismatch" | Server/client HTML differs |
+| "window is not defined" | Browser API used during SSR |
+| "document is not defined" | DOM access during SSR |
+
+
diff --git a/skills/nuxt/references/core-cli.md b/skills/nuxt/references/core-cli.md
new file mode 100644
index 0000000..1487836
--- /dev/null
+++ b/skills/nuxt/references/core-cli.md
@@ -0,0 +1,263 @@
+---
+name: cli-commands
+description: Nuxt CLI commands for development, building, and project management
+---
+
+# CLI Commands
+
+Nuxt provides CLI commands via `nuxi` (or `npx nuxt`) for development, building, and project management.
+
+## Project Initialization
+
+### Create New Project
+
+```bash
+# Interactive project creation
+npx nuxi@latest init my-app
+
+# With specific package manager
+npx nuxi@latest init my-app --packageManager pnpm
+
+# With modules
+npx nuxi@latest init my-app --modules "@nuxt/ui,@nuxt/image"
+
+# From template
+npx nuxi@latest init my-app --template v3
+
+# Skip module selection prompt
+npx nuxi@latest init my-app --no-modules
+```
+
+**Options:**
+| Option | Description |
+|--------|-------------|
+| `-t, --template` | Template name |
+| `--packageManager` | npm, pnpm, yarn, or bun |
+| `-M, --modules` | Modules to install (comma-separated) |
+| `--gitInit` | Initialize git repository |
+| `--no-install` | Skip installing dependencies |
+
+## Development
+
+### Start Dev Server
+
+```bash
+# Start development server (default: http://localhost:3000)
+npx nuxt dev
+
+# Custom port
+npx nuxt dev --port 4000
+
+# Open in browser
+npx nuxt dev --open
+
+# Listen on all interfaces (for mobile testing)
+npx nuxt dev --host 0.0.0.0
+
+# With HTTPS
+npx nuxt dev --https
+
+# Clear console on restart
+npx nuxt dev --clear
+
+# Create public tunnel
+npx nuxt dev --tunnel
+```
+
+**Options:**
+| Option | Description |
+|--------|-------------|
+| `-p, --port` | Port to listen on |
+| `-h, --host` | Host to listen on |
+| `-o, --open` | Open in browser |
+| `--https` | Enable HTTPS |
+| `--tunnel` | Create public tunnel (via untun) |
+| `--qr` | Show QR code for mobile |
+| `--clear` | Clear console on restart |
+
+**Environment Variables:**
+- `NUXT_PORT` or `PORT` - Default port
+- `NUXT_HOST` or `HOST` - Default host
+
+## Building
+
+### Production Build
+
+```bash
+# Build for production
+npx nuxt build
+
+# Build with prerendering
+npx nuxt build --prerender
+
+# Build with specific preset
+npx nuxt build --preset node-server
+npx nuxt build --preset cloudflare-pages
+npx nuxt build --preset vercel
+
+# Build with environment
+npx nuxt build --envName staging
+```
+
+Output is created in `.output/` directory.
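+
+The standard Nitro output layout looks like this:
+
+```
+.output/
+├── public/          # client assets, served statically
+├── server/
+│   └── index.mjs    # production server entry
+└── nitro.json       # build metadata
+```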
+
+### Static Generation
+
+```bash
+# Generate static site (prerenders all routes)
+npx nuxt generate
+```
+
+Equivalent to `nuxt build --prerender`. Creates static HTML files for deployment to static hosting.
+
+### Preview Production Build
+
+```bash
+# Preview after build
+npx nuxt preview
+
+# Custom port
+npx nuxt preview --port 4000
+```
+
+## Utilities
+
+### Prepare (Type Generation)
+
+```bash
+# Generate TypeScript types and .nuxt directory
+npx nuxt prepare
+```
+
+Run after cloning or when types are missing.
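+
+The default Nuxt starter wires this up as a `postinstall` script, a convention worth keeping:
+
+```json
+{
+  "scripts": {
+    "postinstall": "nuxt prepare"
+  }
+}
+```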
+
+### Type Check
+
+```bash
+# Run TypeScript type checking
+npx nuxt typecheck
+```
+
+### Analyze Bundle
+
+```bash
+# Analyze production bundle
+npx nuxt analyze
+```
+
+Opens visual bundle analyzer.
+
+### Cleanup
+
+```bash
+# Remove generated files (.nuxt, .output, node_modules/.cache)
+npx nuxt cleanup
+```
+
+### Info
+
+```bash
+# Show environment info (useful for bug reports)
+npx nuxt info
+```
+
+### Upgrade
+
+```bash
+# Upgrade Nuxt to latest version
+npx nuxt upgrade
+
+# Upgrade to nightly release
+npx nuxt upgrade --nightly
+```
+
+## Module Commands
+
+### Add Module
+
+```bash
+# Add a Nuxt module
+npx nuxt module add @nuxt/ui
+npx nuxt module add @nuxt/image
+```
+
+Installs and adds to `nuxt.config.ts`.
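+
+For example, after `npx nuxt module add @nuxt/ui`, the config ends up roughly as:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+  modules: ['@nuxt/ui'],
+})
+```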
+
+### Build Module (for module authors)
+
+```bash
+# Build a Nuxt module
+npx nuxt build-module
+```
+
+## DevTools
+
+```bash
+# Enable DevTools globally
+npx nuxt devtools enable
+
+# Disable DevTools
+npx nuxt devtools disable
+```
+
+## Common Workflows
+
+### Development
+
+```bash
+# Install dependencies and start dev
+pnpm install
+pnpm dev # or npx nuxt dev
+```
+
+### Production Deployment
+
+```bash
+# Build and preview locally
+pnpm build
+pnpm preview
+
+# Or for static hosting
+pnpm generate
+```
+
+### After Cloning
+
+```bash
+# Install deps and prepare types
+pnpm install
+npx nuxt prepare
+```
+
+## Environment-specific Builds
+
+```bash
+# Development build
+npx nuxt build --envName development
+
+# Staging build
+npx nuxt build --envName staging
+
+# Production build (default)
+npx nuxt build --envName production
+```
+
+Corresponds to `$development`, `$env.staging`, `$production` in `nuxt.config.ts`.
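+
+A sketch of the matching config blocks:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+  $development: {
+    // applied with --envName development (and `nuxt dev`)
+  },
+  $env: {
+    staging: {
+      // applied with --envName staging
+    },
+  },
+  $production: {
+    // applied by default for `nuxt build`
+  },
+})
+```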
+
+## Layer Extension
+
+```bash
+# Dev with additional layer
+npx nuxt dev --extends ./base-layer
+
+# Build with layer
+npx nuxt build --extends ./base-layer
+```
+
+
diff --git a/skills/nuxt/references/core-config.md b/skills/nuxt/references/core-config.md
new file mode 100644
index 0000000..ebce5d5
--- /dev/null
+++ b/skills/nuxt/references/core-config.md
@@ -0,0 +1,162 @@
+---
+name: configuration
+description: Nuxt configuration files including nuxt.config.ts, app.config.ts, and runtime configuration
+---
+
+# Nuxt Configuration
+
+Nuxt uses configuration files to customize application behavior. The two main files are `nuxt.config.ts` for build-time settings and `app.config.ts` for runtime settings.
+
+## nuxt.config.ts
+
+The main configuration file at the root of your project:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ // Configuration options
+ devtools: { enabled: true },
+ modules: ['@nuxt/ui'],
+})
+```
+
+### Environment Overrides
+
+Configure environment-specific settings:
+
+```ts
+export default defineNuxtConfig({
+ $production: {
+ routeRules: {
+ '/**': { isr: true },
+ },
+ },
+ $development: {
+ // Development-specific config
+ },
+ $env: {
+ staging: {
+ // Staging environment config
+ },
+ },
+})
+```
+
+Use `--envName` flag to select environment: `nuxt build --envName staging`
+
+## Runtime Config
+
+For values that need to be overridden via environment variables:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ runtimeConfig: {
+ // Server-only keys
+ apiSecret: '123',
+ // Keys within public are exposed to client
+ public: {
+ apiBase: '/api',
+ },
+ },
+})
+```
+
+Override with environment variables:
+
+```ini
+# .env
+NUXT_API_SECRET=api_secret_token
+NUXT_PUBLIC_API_BASE=https://api.example.com
+```
+
+Access in components/composables:
+
+```vue
+<script setup>
+const config = useRuntimeConfig()
+
+// config.apiSecret: available on the server only
+// config.public.apiBase: available on server and client
+</script>
+```
+
+## App Config
+
+For public tokens determined at build time (not overridable via env vars):
+
+```ts
+// app/app.config.ts
+export default defineAppConfig({
+ title: 'Hello Nuxt',
+ theme: {
+ dark: true,
+ colors: {
+ primary: '#ff0000',
+ },
+ },
+})
+```
+
+Access in components:
+
+```vue
+<script setup>
+const appConfig = useAppConfig()
+
+console.log(appConfig.title) // 'Hello Nuxt'
+</script>
+```
+
+## runtimeConfig vs app.config
+
+| Feature | runtimeConfig | app.config |
+|---------|--------------|------------|
+| Client-side | Hydrated | Bundled |
+| Environment variables | Yes | No |
+| Reactive | Yes | Yes |
+| Hot module replacement | No | Yes |
+| Non-primitive JS types | No | Yes |
+
+**Use runtimeConfig** for secrets and values that change per environment.
+**Use app.config** for public tokens, theme settings, and non-sensitive config.
+
+## External Tool Configuration
+
+Nuxt uses `nuxt.config.ts` as the single source of truth. Configure external tools within it:
+
+```ts
+export default defineNuxtConfig({
+ // Nitro configuration
+ nitro: {
+ // nitro options
+ },
+ // Vite configuration
+ vite: {
+ // vite options
+ vue: {
+ // @vitejs/plugin-vue options
+ },
+ },
+ // PostCSS configuration
+ postcss: {
+ // postcss options
+ },
+})
+```
+
+## Vue Configuration
+
+Enable Vue experimental features:
+
+```ts
+export default defineNuxtConfig({
+ vue: {
+ propsDestructure: true,
+ },
+})
+```
+
+
diff --git a/skills/nuxt/references/core-data-fetching.md b/skills/nuxt/references/core-data-fetching.md
new file mode 100644
index 0000000..d662efb
--- /dev/null
+++ b/skills/nuxt/references/core-data-fetching.md
@@ -0,0 +1,236 @@
+---
+name: data-fetching
+description: useFetch, useAsyncData, and $fetch for SSR-friendly data fetching
+---
+
+# Data Fetching
+
+Nuxt provides composables for SSR-friendly data fetching that prevent double-fetching and handle hydration.
+
+## Overview
+
+- `$fetch` - Basic fetch utility (use for client-side events)
+- `useFetch` - SSR-safe wrapper around $fetch (use for component data)
+- `useAsyncData` - SSR-safe wrapper for any async function
+
+## useFetch
+
+Primary composable for fetching data in components:
+
+```vue
+<script setup>
+const { data: posts, status, error } = await useFetch('/api/posts')
+</script>
+
+<template>
+  <div v-if="status === 'pending'">Loading...</div>
+  <div v-else-if="error">Error: {{ error.message }}</div>
+  <div v-else>{{ posts }}</div>
+</template>
+```
+
+### With Options
+
+```ts
+const { data } = await useFetch('/api/posts', {
+ // Query parameters
+ query: { page: 1, limit: 10 },
+ // Request body (for POST/PUT)
+ body: { title: 'New Post' },
+ // HTTP method
+ method: 'POST',
+ // Only pick specific fields
+ pick: ['id', 'title'],
+ // Transform response
+ transform: (posts) => posts.map(p => ({ ...p, slug: slugify(p.title) })),
+ // Custom key for caching
+ key: 'posts-list',
+ // Don't fetch on server
+ server: false,
+ // Don't block navigation
+ lazy: true,
+ // Don't fetch immediately
+ immediate: false,
+ // Default value
+ default: () => [],
+})
+```
+
+### Reactive Parameters
+
+```vue
+<script setup>
+const page = ref(1)
+
+// Refetches automatically whenever `page` changes
+const { data } = await useFetch('/api/posts', {
+  query: { page },
+})
+</script>
+```
+
+### Computed URL
+
+```vue
+<script setup>
+const id = ref(1)
+
+// The URL function is re-evaluated reactively
+const { data } = await useFetch(() => `/api/posts/${id.value}`)
+</script>
+```
+
+## useAsyncData
+
+For wrapping any async function:
+
+```vue
+<script setup>
+// Wrap any async operation, e.g. a CMS or database SDK
+// (`cms.getPosts()` is a placeholder for your own client)
+const { data } = await useAsyncData('posts', () => cms.getPosts())
+</script>
+```
+
+### Multiple Requests
+
+```vue
+<script setup>
+const { data } = await useAsyncData('dashboard', async () => {
+  const [posts, comments] = await Promise.all([
+    $fetch('/api/posts'),
+    $fetch('/api/comments'),
+  ])
+  return { posts, comments }
+})
+</script>
+```
+
+## $fetch
+
+For client-side events (form submissions, button clicks):
+
+```vue
+<script setup>
+async function createPost() {
+  await $fetch('/api/posts', {
+    method: 'POST',
+    body: { title: 'New Post' },
+  })
+}
+</script>
+```
+
+**Important**: Don't use `$fetch` alone in setup for initial data - it will fetch twice (server + client). Use `useFetch` or `useAsyncData` instead.
+
+## Return Values
+
+All composables return:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `data` | `Ref` | Fetched data |
+| `error` | `Ref` | Error if request failed |
+| `status` | `Ref<'idle' \| 'pending' \| 'success' \| 'error'>` | Request status |
+| `refresh` | `() => Promise` | Refetch data |
+| `execute` | `() => Promise` | Alias for refresh |
+| `clear` | `() => void` | Reset data and error |
+
+## Lazy Fetching
+
+Don't block navigation:
+
+```vue
+<script setup>
+// Resolves immediately; handle the pending state in the template
+const { data, status } = await useLazyFetch('/api/posts')
+</script>
+```
+
+## Refresh & Watch
+
+```vue
+<script setup>
+const page = ref(1)
+
+const { data, refresh } = await useFetch('/api/posts', {
+  // Refetch when these reactive sources change
+  watch: [page],
+})
+
+// Or refetch manually
+await refresh()
+</script>
+```
+
+## Caching
+
+Data is cached by key. Share data across components:
+
+```vue
+<script setup>
+// Every component using the key 'current-user' shares this data
+const { data: user } = await useFetch('/api/user', { key: 'current-user' })
+</script>
+```
+
+Refresh cached data globally:
+
+```ts
+// Refresh specific key
+await refreshNuxtData('current-user')
+
+// Refresh all data
+await refreshNuxtData()
+
+// Clear cached data
+clearNuxtData('current-user')
+```
+
+## Interceptors
+
+```ts
+const { data } = await useFetch('/api/auth', {
+ onRequest({ options }) {
+ options.headers.set('Authorization', `Bearer ${token}`)
+ },
+ onRequestError({ error }) {
+ console.error('Request failed:', error)
+ },
+ onResponse({ response }) {
+ // Process response
+ },
+ onResponseError({ response }) {
+ if (response.status === 401) {
+ navigateTo('/login')
+ }
+ },
+})
+```
+
+## Passing Headers (SSR)
+
+`useFetch` automatically proxies cookies/headers from client to server. For `$fetch`:
+
+```vue
+<script setup>
+// Forward the incoming request's cookie header during SSR
+const headers = useRequestHeaders(['cookie'])
+
+const { data } = await useAsyncData('me', () => $fetch('/api/me', { headers }))
+</script>
+```
+
+
diff --git a/skills/nuxt/references/core-deployment.md b/skills/nuxt/references/core-deployment.md
new file mode 100644
index 0000000..6fca597
--- /dev/null
+++ b/skills/nuxt/references/core-deployment.md
@@ -0,0 +1,224 @@
+---
+name: deployment
+description: Deploying Nuxt applications to various hosting platforms
+---
+
+# Deployment
+
+Nuxt is platform-agnostic thanks to [Nitro](https://nitro.build), its server engine. You can deploy to almost any platform with minimal configuration—Node.js servers, static hosting, serverless functions, or edge networks.
+
+> **Full list of supported platforms:** https://nitro.build/deploy
+
+## Deployment Modes
+
+### Node.js Server
+
+```bash
+# Build for Node.js
+nuxt build
+
+# Run production server
+node .output/server/index.mjs
+```
+
+Environment variables:
+- `PORT` or `NITRO_PORT` (default: 3000)
+- `HOST` or `NITRO_HOST` (default: 0.0.0.0)
+
+### Static Generation
+
+```bash
+# Generate static site
+nuxt generate
+```
+
+Output in `.output/public/` - deploy to any static host.
+
+### Preset Configuration
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ nitro: {
+ preset: 'vercel', // or 'netlify', 'cloudflare-pages', etc.
+ },
+})
+```
+
+Or via environment variable:
+
+```bash
+NITRO_PRESET=vercel nuxt build
+```
+
+---
+
+## Recommended Platforms
+
+When helping users choose a deployment platform, consider their needs:
+
+### Vercel
+
+**Best for:** Projects wanting zero-config deployment with excellent DX
+
+```bash
+# Install Vercel CLI
+npm i -g vercel
+
+# Deploy
+vercel
+```
+
+**Pros:**
+- Zero configuration for Nuxt (auto-detects)
+- Excellent preview deployments for PRs
+- Built-in analytics and speed insights
+- Edge Functions support
+- Great free tier for personal projects
+
+**Cons:**
+- Can get expensive at scale (bandwidth costs)
+- Vendor lock-in concerns
+- Limited build minutes on free tier
+
+**Recommended when:** User wants fastest setup, values DX, building SaaS or marketing sites.
+
+---
+
+### Netlify
+
+**Best for:** JAMstack sites, static-heavy apps, teams needing forms/identity
+
+```bash
+# Install Netlify CLI
+npm i -g netlify-cli
+
+# Deploy
+netlify deploy --prod
+```
+
+**Pros:**
+- Great free tier with generous bandwidth
+- Built-in forms, identity, and functions
+- Excellent for static sites with some dynamic features
+- Good preview deployments
+- Split testing built-in
+
+**Cons:**
+- SSR/serverless functions can be slower than Vercel
+- Less optimized for full SSR apps
+- Build minutes can run out on free tier
+
+**Recommended when:** User has static-heavy site, needs built-in forms/auth, or prefers Netlify ecosystem.
+
+---
+
+### Cloudflare Pages
+
+**Best for:** Global performance, edge computing, cost-conscious projects
+
+```bash
+# Build with Cloudflare preset
+NITRO_PRESET=cloudflare-pages nuxt build
+```
+
+**Pros:**
+- Unlimited bandwidth on free tier
+- Excellent global edge network (fastest TTFB)
+- Workers for edge computing
+- Very cost-effective at scale
+- D1, KV, R2 for data storage
+
+**Cons:**
+- Workers have execution limits (CPU time)
+- Some Node.js APIs not available in Workers
+- Less mature than Vercel/Netlify for frameworks
+
+**Recommended when:** User prioritizes performance, global reach, or cost at scale.
+
+---
+
+### GitHub Actions + Self-hosted/VPS
+
+**Best for:** Full control, existing infrastructure, CI/CD customization
+
+```yaml
+# .github/workflows/deploy.yml
+name: Deploy
+on:
+ push:
+ branches: [main]
+
+jobs:
+ deploy:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-node@v4
+ with:
+ node-version: 20
+
+ - run: npm ci
+ - run: npm run build
+
+ # Deploy to your server (example: rsync to VPS)
+ - name: Deploy to server
+ run: rsync -avz .output/ user@server:/app/
+```
+
+**Pros:**
+- Full control over build and deployment
+- No vendor lock-in
+- Can deploy anywhere (VPS, Docker, Kubernetes)
+- Free CI/CD minutes for public repos
+- Customizable workflows
+
+**Cons:**
+- Requires more setup and maintenance
+- Need to manage your own infrastructure
+- No built-in preview deployments
+- SSL, scaling, monitoring are your responsibility
+
+**Recommended when:** User has existing infrastructure, needs full control, or deploying to private/enterprise environments.
+
+---
+
+## Quick Decision Guide
+
+| Need | Recommendation |
+|------|----------------|
+| Fastest setup, small team | **Vercel** |
+| Static site with forms | **Netlify** |
+| Cost-sensitive at scale | **Cloudflare Pages** |
+| Full control / enterprise | **GitHub Actions + VPS** |
+| Docker/Kubernetes | **GitHub Actions + Container Registry** |
+| Serverless APIs | **Vercel** or **AWS Lambda** |
+
+## Docker Deployment
+
+```dockerfile
+FROM node:20-alpine AS builder
+WORKDIR /app
+COPY package*.json ./
+RUN npm ci
+COPY . .
+RUN npm run build
+
+FROM node:20-alpine
+WORKDIR /app
+COPY --from=builder /app/.output .output
+ENV PORT=3000
+EXPOSE 3000
+CMD ["node", ".output/server/index.mjs"]
+```
+
+```bash
+docker build -t my-nuxt-app .
+docker run -p 3000:3000 my-nuxt-app
+```
+
+
diff --git a/skills/nuxt/references/core-directory-structure.md b/skills/nuxt/references/core-directory-structure.md
new file mode 100644
index 0000000..415e112
--- /dev/null
+++ b/skills/nuxt/references/core-directory-structure.md
@@ -0,0 +1,269 @@
+---
+name: directory-structure
+description: Nuxt project folder structure, conventions, and file organization
+---
+
+# Directory Structure
+
+Nuxt uses a convention-based directory structure. Understanding it is key to effective development.
+
+## Standard Project Structure
+
+```
+my-nuxt-app/
+├── app/ # Application source (can be at root level)
+│ ├── app.vue # Root component
+│ ├── app.config.ts # App configuration (runtime)
+│ ├── error.vue # Error page
+│ ├── components/ # Auto-imported Vue components
+│ ├── composables/ # Auto-imported composables
+│ ├── layouts/ # Layout components
+│ ├── middleware/ # Route middleware
+│ ├── pages/ # File-based routing
+│ ├── plugins/ # Vue plugins
+│ └── utils/ # Auto-imported utilities
+├── assets/ # Build-processed assets (CSS, images)
+├── public/ # Static assets (served as-is)
+├── server/ # Server-side code
+│ ├── api/ # API routes (/api/*)
+│ ├── routes/ # Server routes
+│ ├── middleware/ # Server middleware
+│ ├── plugins/ # Nitro plugins
+│ └── utils/ # Server utilities (auto-imported)
+├── content/ # Content files (@nuxt/content)
+├── layers/ # Local layers (auto-scanned)
+├── modules/ # Local modules
+├── nuxt.config.ts # Nuxt configuration
+├── package.json
+└── tsconfig.json
+```
+
+## Key Directories
+
+### `app/` Directory
+
+Contains all application code. Can also be at root level (without `app/` folder).
+
+```ts
+// nuxt.config.ts - customize source directory
+export default defineNuxtConfig({
+ srcDir: 'src/', // Change from 'app/' to 'src/'
+})
+```
+
+### `app/components/`
+
+Vue components auto-imported by name:
+
+```
+components/
+├── Button.vue → <Button />
+├── Card.vue → <Card />
+├── base/
+│   └── Button.vue → <BaseButton />
+├── ui/
+│   ├── Input.vue → <UiInput />
+│   └── Modal.vue → <UiModal />
+└── TheHeader.vue → <TheHeader />
+```
+
+**Lazy loading**: Prefix with `Lazy` for dynamic import:
+
+```vue
+<script setup>
+const show = ref(false)
+</script>
+
+<template>
+  <!-- Code-split and loaded only when rendered -->
+  <LazyHeavyComponent v-if="show" />
+</template>
+
+
+
+```
+
+**Client/Server only**:
+
+```
+components/
+├── Comments.client.vue → Only rendered on client
+└── ServerData.server.vue → Only rendered on server
+```
+
+### `app/composables/`
+
+Vue composables auto-imported (top-level files only):
+
+```
+composables/
+├── useAuth.ts → useAuth()
+├── useFoo.ts → useFoo()
+└── nested/
+ └── utils.ts → NOT auto-imported
+```
+
+Re-export nested composables:
+
+```ts
+// composables/index.ts
+export { useHelper } from './nested/utils'
+```
+
+### `app/pages/`
+
+File-based routing:
+
+```
+pages/
+├── index.vue → /
+├── about.vue → /about
+├── blog/
+│ ├── index.vue → /blog
+│ └── [slug].vue → /blog/:slug
+├── users/
+│ └── [id]/
+│ └── profile.vue → /users/:id/profile
+├── [...slug].vue → /* (catch-all)
+├── [[optional]].vue → /:optional? (optional param)
+└── (marketing)/ → Route group (not in URL)
+ └── pricing.vue → /pricing
+```
+
+**Pages are optional**: Without `pages/`, no vue-router is included.
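+
+For a single-screen app, a bare `app.vue` (a minimal sketch) can replace the whole directory:
+
+```vue
+<!-- app.vue -->
+<template>
+  <div>Hello Nuxt</div>
+</template>
+```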
+
+### `app/layouts/`
+
+Layout components wrapping pages:
+
+```
+layouts/
+├── default.vue → Default layout
+├── admin.vue → Admin layout
+└── blank.vue → No layout
+```
+
+```vue
+<!-- layouts/default.vue -->
+<template>
+  <div>
+    <AppHeader />
+    <slot />
+    <AppFooter />
+  </div>
+</template>
+
+
+
+
+
+
+
+
+```
+
+Use in pages:
+
+```vue
+<script setup>
+definePageMeta({
+  layout: 'admin',
+})
+</script>
+```
+
+### `app/middleware/`
+
+Route middleware:
+
+```
+middleware/
+├── auth.ts → Named middleware
+├── admin.ts → Named middleware
+└── logger.global.ts → Global middleware (runs on every route)
+```
+
+### `app/plugins/`
+
+Nuxt plugins (auto-registered):
+
+```
+plugins/
+├── 01.analytics.ts → Order with number prefix
+├── 02.auth.ts
+├── vue-query.client.ts → Client-only plugin
+└── server-init.server.ts → Server-only plugin
+```
+
+### `server/` Directory
+
+Nitro server code:
+
+```
+server/
+├── api/
+│ ├── users.ts → GET /api/users
+│ ├── users.post.ts → POST /api/users
+│ └── users/[id].ts → /api/users/:id
+├── routes/
+│ └── sitemap.xml.ts → /sitemap.xml
+├── middleware/
+│ └── auth.ts → Runs on every request
+├── plugins/
+│ └── db.ts → Server startup plugins
+└── utils/
+ └── db.ts → Auto-imported server utilities
+```
+
+### `public/` Directory
+
+Static assets served at root URL:
+
+```
+public/
+├── favicon.ico → /favicon.ico
+├── robots.txt → /robots.txt
+└── images/
+ └── logo.png → /images/logo.png
+```
+
+### `assets/` Directory
+
+Build-processed assets:
+
+```
+assets/
+├── css/
+│ └── main.css
+├── images/
+│ └── hero.png
+└── fonts/
+ └── custom.woff2
+```
+
+Reference in components:
+
+```vue
+<template>
+  <img src="~/assets/images/hero.png" alt="Hero">
+</template>
+
+<style>
+@import '~/assets/css/main.css';
+</style>
+
+
+
+
+
+```
+
+## Special Files
+
+| File | Purpose |
+|------|---------|
+| `app.vue` | Root component (optional with pages/) |
+| `app.config.ts` | Runtime app configuration |
+| `error.vue` | Custom error page |
+| `nuxt.config.ts` | Build-time configuration |
+| `.nuxtignore` | Ignore files from Nuxt |
+| `.env` | Environment variables |
+
+## File Naming Conventions
+
+| Pattern | Meaning |
+|---------|---------|
+| `[param]` | Dynamic route parameter |
+| `[[param]]` | Optional parameter |
+| `[...slug]` | Catch-all route |
+| `(group)` | Route group (not in URL) |
+| `.client.vue` | Client-only component |
+| `.server.vue` | Server-only component |
+| `.global.ts` | Global middleware |
+
+
diff --git a/skills/nuxt/references/core-modules.md b/skills/nuxt/references/core-modules.md
new file mode 100644
index 0000000..dab3f95
--- /dev/null
+++ b/skills/nuxt/references/core-modules.md
@@ -0,0 +1,292 @@
+---
+name: nuxt-modules
+description: Creating and using Nuxt modules to extend framework functionality
+---
+
+# Nuxt Modules
+
+Modules extend Nuxt's core functionality. They run at build time and can add components, composables, plugins, and configuration.
+
+## Using Modules
+
+Install and add to `nuxt.config.ts`:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: [
+ // npm package
+ '@nuxt/ui',
+ // Local module
+ './modules/my-module',
+ // Inline module
+ (options, nuxt) => {
+ console.log('Inline module')
+ },
+ // With options
+ ['@nuxt/image', { provider: 'cloudinary' }],
+ ],
+})
+```
+
+## Creating Modules
+
+### Basic Module
+
+```ts
+// modules/my-module.ts
+import { defineNuxtModule } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ configKey: 'myModule',
+ },
+ defaults: {
+ enabled: true,
+ },
+ setup(options, nuxt) {
+ if (!options.enabled) return
+
+ console.log('My module is running!')
+ },
+})
+```
+
+### Adding Components
+
+```ts
+// modules/ui/index.ts
+import { addComponent, addComponentsDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Add single component
+ addComponent({
+ name: 'MyButton',
+ filePath: resolve('./runtime/components/MyButton.vue'),
+ })
+
+ // Add components directory
+ addComponentsDir({
+ path: resolve('./runtime/components'),
+ prefix: 'My',
+ })
+ },
+})
+```
+
+### Adding Composables
+
+```ts
+// modules/utils/index.ts
+import { addImports, addImportsDir, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ // Add auto-imported composable
+ addImports({
+ name: 'useMyUtil',
+ from: resolve('./runtime/composables/useMyUtil'),
+ })
+
+ // Add directory for auto-imports
+ addImportsDir(resolve('./runtime/composables'))
+ },
+})
+```
+
+### Adding Plugins
+
+```ts
+// modules/analytics/index.ts
+import { addPlugin, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ addPlugin({
+ src: resolve('./runtime/plugin'),
+ mode: 'client', // 'client', 'server', or 'all'
+ })
+ },
+})
+```
+
+Plugin file:
+
+```ts
+// modules/analytics/runtime/plugin.ts
+export default defineNuxtPlugin((nuxtApp) => {
+ nuxtApp.hook('page:finish', () => {
+ console.log('Page loaded')
+ })
+})
+```
+
+### Adding Server Routes
+
+```ts
+// modules/api/index.ts
+import { addServerHandler, createResolver } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const { resolve } = createResolver(import.meta.url)
+
+ addServerHandler({
+ route: '/api/my-endpoint',
+ handler: resolve('./runtime/server/api/my-endpoint'),
+ })
+ },
+})
+```
+
+### Extending Config
+
+```ts
+// modules/config/index.ts
+export default defineNuxtModule({
+ setup(options, nuxt) {
+ // Add CSS
+ nuxt.options.css.push('my-module/styles.css')
+
+ // Add runtime config
+ nuxt.options.runtimeConfig.public.myModule = {
+ apiUrl: options.apiUrl,
+ }
+
+ // Extend Vite config
+ nuxt.options.vite.optimizeDeps ||= {}
+ nuxt.options.vite.optimizeDeps.include ||= []
+ nuxt.options.vite.optimizeDeps.include.push('some-package')
+ },
+})
+```
+
+## Module Hooks
+
+```ts
+import { createResolver, defineNuxtModule } from '@nuxt/kit'
+
+export default defineNuxtModule({
+  setup(options, nuxt) {
+    const { resolve } = createResolver(import.meta.url)
+
+ // Build-time hooks
+ nuxt.hook('modules:done', () => {
+ console.log('All modules loaded')
+ })
+
+ nuxt.hook('components:dirs', (dirs) => {
+ dirs.push({ path: '~/extra-components' })
+ })
+
+ nuxt.hook('pages:extend', (pages) => {
+ pages.push({
+ name: 'custom-page',
+ path: '/custom',
+ file: resolve('./runtime/pages/custom.vue'),
+ })
+ })
+
+ nuxt.hook('imports:extend', (imports) => {
+ imports.push({ name: 'myHelper', from: 'my-package' })
+ })
+ },
+})
+```
+
+## Module Options
+
+Type-safe options with defaults:
+
+```ts
+export interface ModuleOptions {
+ apiKey: string
+ enabled?: boolean
+ prefix?: string
+}
+
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ configKey: 'myModule',
+ },
+ defaults: {
+ enabled: true,
+ prefix: 'My',
+ },
+ setup(options, nuxt) {
+ // options is typed as ModuleOptions
+ if (!options.apiKey) {
+ console.warn('API key not provided')
+ }
+ },
+})
+```
+
+Usage:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: ['my-module'],
+ myModule: {
+ apiKey: 'xxx',
+ prefix: 'Custom',
+ },
+})
+```
+
+## Local Modules
+
+Place in `modules/` directory:
+
+```
+modules/
+├── my-module/
+│ ├── index.ts
+│ └── runtime/
+│ ├── components/
+│ ├── composables/
+│ └── plugin.ts
+```
+
+Auto-registered or manually added:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: [
+ '~/modules/my-module', // Explicit
+ ],
+})
+```
+
+## Module Dependencies
+
+```ts
+export default defineNuxtModule({
+ meta: {
+ name: 'my-module',
+ },
+ moduleDependencies: {
+ '@nuxt/image': {
+ version: '>=1.0.0',
+ defaults: {
+ provider: 'ipx',
+ },
+ },
+ },
+ setup() {
+ // @nuxt/image is guaranteed to be installed
+ },
+})
+```
+
+
diff --git a/skills/nuxt/references/core-routing.md b/skills/nuxt/references/core-routing.md
new file mode 100644
index 0000000..10696f3
--- /dev/null
+++ b/skills/nuxt/references/core-routing.md
@@ -0,0 +1,226 @@
+---
+name: routing
+description: File-based routing, dynamic routes, navigation, and middleware in Nuxt
+---
+
+# Routing
+
+Nuxt uses file-system routing based on vue-router. Files in `app/pages/` automatically create routes.
+
+## Basic Routing
+
+```
+pages/
+├── index.vue → /
+├── about.vue → /about
+└── posts/
+ ├── index.vue → /posts
+ └── [id].vue → /posts/:id
+```
+
+## Dynamic Routes
+
+Use brackets for dynamic segments:
+
+```
+pages/
+├── users/
+│ └── [id].vue → /users/:id
+├── posts/
+│ └── [...slug].vue → /posts/* (catch-all)
+└── [[optional]].vue → /:optional? (optional param)
+```
+
+Access route parameters:
+
+```vue
+<script setup>
+const route = useRoute()
+
+// For /users/:id, this is the value of the :id segment
+console.log(route.params.id)
+</script>
+```
+
+## Navigation
+
+### NuxtLink Component
+
+```vue
+<template>
+  <nav>
+    <NuxtLink to="/">Home</NuxtLink>
+    <NuxtLink to="/about">About</NuxtLink>
+    <NuxtLink :to="{ name: 'posts-id', params: { id: 1 } }">Post 1</NuxtLink>
+  </nav>
+</template>
+```
+
+NuxtLink automatically prefetches linked pages when they enter the viewport.
+
+### Programmatic Navigation
+
+```vue
+<script setup>
+async function login() {
+  // Works on both server and client
+  await navigateTo('/dashboard')
+}
+</script>
+```
+
+## Route Middleware
+
+### Named Middleware
+
+```ts
+// middleware/auth.ts
+export default defineNuxtRouteMiddleware((to, from) => {
+ const isAuthenticated = false // Your auth logic
+
+ if (!isAuthenticated) {
+ return navigateTo('/login')
+ }
+})
+```
+
+Apply to pages:
+
+```vue
+<script setup>
+definePageMeta({
+  middleware: 'auth',
+})
+</script>
+```
+
+### Global Middleware
+
+Name files with `.global` suffix:
+
+```ts
+// middleware/logging.global.ts
+export default defineNuxtRouteMiddleware((to, from) => {
+ console.log('Navigating to:', to.path)
+})
+```
+
+### Inline Middleware
+
+```vue
+<script setup>
+definePageMeta({
+  middleware: [
+    function (to, from) {
+      // Runs before entering this page
+    },
+  ],
+})
+</script>
+```
+
+## Page Meta
+
+Configure page-level options:
+
+```vue
+<script setup>
+definePageMeta({
+  layout: 'admin',
+  middleware: 'auth',
+  alias: ['/dashboard'],
+  keepalive: true,
+})
+</script>
+```
+
+## Route Validation
+
+```vue
+<script setup>
+definePageMeta({
+  // Returning false responds with a 404 for invalid params
+  validate: (route) => /^\d+$/.test(String(route.params.id)),
+})
+</script>
+```
+
+## Layouts
+
+Define layouts in `app/layouts/`:
+
+```vue
+<!-- layouts/default.vue -->
+<template>
+  <div>
+    <AppHeader />
+    <slot />
+    <AppFooter />
+  </div>
+</template>
+
+
+
+
+
+
+
+
+```
+
+```vue
+<!-- app.vue -->
+<template>
+  <NuxtLayout>
+    <NuxtPage />
+  </NuxtLayout>
+</template>
+
+
+
+
+```
+
+Use in pages:
+
+```vue
+<script setup>
+definePageMeta({
+  layout: 'custom',
+})
+</script>
+```
+
+Dynamic layout:
+
+```vue
+<script setup>
+// Switch layouts at runtime
+function enableCustomLayout() {
+  setPageLayout('custom')
+}
+</script>
+```
+
+## Navigation Hooks
+
+```vue
+<script setup>
+onBeforeRouteLeave((to, from) => {
+  // Runs when navigating away from this page
+})
+
+onBeforeRouteUpdate((to, from) => {
+  // Runs when the route changes but the component is reused
+})
+</script>
+```
+
+
diff --git a/skills/nuxt/references/features-components-autoimport.md b/skills/nuxt/references/features-components-autoimport.md
new file mode 100644
index 0000000..cd815c8
--- /dev/null
+++ b/skills/nuxt/references/features-components-autoimport.md
@@ -0,0 +1,328 @@
+---
+name: components-auto-imports
+description: Auto-imported components, lazy loading, and hydration strategies
+---
+
+# Components Auto-imports
+
+Nuxt automatically imports Vue components from `app/components/` directory.
+
+## Basic Auto-imports
+
+```
+components/
+├── Button.vue → <Button />
+├── Card.vue → <Card />
+└── AppHeader.vue → <AppHeader />
+```
+
+```vue
+<template>
+  <div>
+    <AppHeader />
+    <Card>
+      <Button>Click me</Button>
+    </Card>
+  </div>
+</template>
+```
+
+## Naming Conventions
+
+### Nested Directory Names
+
+Component names include directory path:
+
+```
+components/
+├── base/
+│   └── Button.vue → <BaseButton />
+├── form/
+│   ├── Input.vue → <FormInput />
+│   └── Select.vue → <FormSelect />
+└── ui/
+    └── modal/
+        └── Dialog.vue → <UiModalDialog />
+```
+
+### Disable Path Prefix
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ components: [
+ {
+ path: '~/components',
+ pathPrefix: false, // Use filename only
+ },
+ ],
+})
+```
+
+With `pathPrefix: false`:
+```
+components/base/Button.vue → <Button />
+```
+
+## Lazy Loading
+
+Prefix with `Lazy` for dynamic imports:
+
+```vue
+<script setup>
+const showChart = ref(false)
+</script>
+
+<template>
+  <!-- Chart.vue is fetched only when first rendered -->
+  <LazyChart v-if="showChart" />
+  <button @click="showChart = true">Show Chart</button>
+</template>
+```
+
+Benefits:
+- Reduces initial bundle size
+- Code-splits component into separate chunk
+- Loads on-demand
+
+## Lazy Hydration Strategies
+
+Control when lazy components become interactive:
+
+### `hydrate-on-visible`
+
+Hydrate when component enters viewport:
+
+```vue
+<template>
+  <!-- LazyHeavyChart stands for any Lazy-prefixed component -->
+  <LazyHeavyChart hydrate-on-visible />
+</template>
+
+
+
+```
+
+### `hydrate-on-idle`
+
+Hydrate when browser is idle:
+
+```vue
+<template>
+  <LazyAnalyticsPanel hydrate-on-idle />
+</template>
+
+
+
+```
+
+### `hydrate-on-interaction`
+
+Hydrate on user interaction:
+
+```vue
+<template>
+  <!-- Hydrate on first interaction -->
+  <LazyCommentForm hydrate-on-interaction />
+
+  <!-- Or on a specific event -->
+  <LazyCommentForm hydrate-on-interaction="mouseover" />
+</template>
+
+
+
+
+
+
+
+```
+
+### `hydrate-on-media-query`
+
+Hydrate when media query matches:
+
+```vue
+<template>
+  <LazyDesktopSidebar hydrate-on-media-query="(min-width: 1024px)" />
+</template>
+
+
+
+```
+
+### `hydrate-after`
+
+Hydrate after delay (milliseconds):
+
+```vue
+<template>
+  <LazyFooterWidget :hydrate-after="2000" />
+</template>
+
+
+
+```
+
+### `hydrate-when`
+
+Hydrate on condition:
+
+```vue
+<script setup>
+const ready = ref(false)
+</script>
+
+<template>
+  <LazyChartPanel :hydrate-when="ready" />
+</template>
+
+
+
+
+
+```
+
+### `hydrate-never`
+
+Never hydrate (static only):
+
+```vue
+<template>
+  <LazyStaticFooter hydrate-never />
+</template>
+
+
+
+```
+
+### Hydration Event
+
+```vue
+<script setup>
+function onHydrated() {
+  console.log('Component is now interactive')
+}
+</script>
+
+<template>
+  <LazyChartPanel hydrate-on-visible @hydrated="onHydrated" />
+</template>
+
+
+
+
+
+```
+
+## Client/Server Components
+
+### Client-only (`.client.vue`)
+
+```
+components/
+└── BrowserChart.client.vue
+```
+
+```vue
+<template>
+  <!-- Rendered only in the browser; empty during SSR -->
+  <BrowserChart />
+</template>
+
+
+
+
+```
+
+### Server-only (`.server.vue`)
+
+```
+components/
+└── ServerMarkdown.server.vue
+```
+
+```vue
+<template>
+  <!-- Rendered on the server; ships no client JS -->
+  <ServerMarkdown :content="content" />
+</template>
+
+
+
+
+```
+
+Requires experimental flag:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ experimental: {
+ componentIslands: true,
+ },
+})
+```
+
+### Paired Components
+
+```
+components/
+├── Comments.client.vue # Browser version
+└── Comments.server.vue # SSR version
+```
+
+Server version renders during SSR, client version takes over after hydration.
+
+## Dynamic Components
+
+```vue
+<script setup>
+// resolveComponent works with auto-imported components
+const MyButton = resolveComponent('MyButton')
+</script>
+
+<template>
+  <component :is="MyButton" />
+</template>
+
+
+
+
+
+
+```
+
+## Direct Imports
+
+Bypass auto-imports when needed:
+
+```vue
+<script setup>
+import Button from '~/components/Button.vue'
+</script>
+
+```
+
+## Custom Directories
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ components: [
+ { path: '~/components/ui', prefix: 'Ui' },
+ { path: '~/components/forms', prefix: 'Form' },
+ '~/components', // Default, should come last
+ ],
+})
+```
+
+## Global Components
+
+Register globally (creates async chunks):
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ components: {
+ global: true,
+ dirs: ['~/components'],
+ },
+})
+```
+
+Or use `.global.vue` suffix:
+
+```
+components/
+└── Icon.global.vue → Available globally
+```
+
+## Disabling Component Auto-imports
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ components: {
+ dirs: [], // Disable auto-imports
+ },
+})
+```
+
+## Library Authors
+
+Register components from npm package:
+
+```ts
+// my-ui-lib/nuxt.ts
+import { addComponentsDir, createResolver, defineNuxtModule } from '@nuxt/kit'
+
+export default defineNuxtModule({
+ setup() {
+ const resolver = createResolver(import.meta.url)
+
+ addComponentsDir({
+ path: resolver.resolve('./components'),
+ prefix: 'MyUi',
+ })
+ },
+})
+```
+
+
diff --git a/skills/nuxt/references/features-components.md b/skills/nuxt/references/features-components.md
new file mode 100644
index 0000000..5cdb7b9
--- /dev/null
+++ b/skills/nuxt/references/features-components.md
@@ -0,0 +1,264 @@
+---
+name: built-in-components
+description: NuxtLink, NuxtPage, NuxtLayout, and other built-in Nuxt components
+---
+
+# Built-in Components
+
+Nuxt provides several built-in components for common functionality.
+
+## NuxtLink
+
+Optimized link component with prefetching:
+
+```vue
+<template>
+  <!-- Internal link (prefetched when visible) -->
+  <NuxtLink to="/about">About</NuxtLink>
+
+  <!-- Dynamic route -->
+  <NuxtLink :to="`/posts/${post.id}`">Post 1</NuxtLink>
+
+  <!-- External link -->
+  <NuxtLink to="https://nuxt.com" target="_blank">Nuxt</NuxtLink>
+
+  <!-- Disable prefetching -->
+  <NuxtLink to="/heavy" :prefetch="false">Heavy Page</NuxtLink>
+
+  <!-- Replace the history entry instead of pushing -->
+  <NuxtLink to="/settings" replace>Replace</NuxtLink>
+
+  <!-- Custom active class -->
+  <NuxtLink to="/dashboard" active-class="link-active">Dashboard</NuxtLink>
+</template>
+```
+
+## NuxtPage
+
+Renders the current page component (used in layouts):
+
+```vue
+<!-- layouts/default.vue -->
+<template>
+  <div>
+    <AppHeader />
+    <NuxtPage />
+    <AppFooter />
+  </div>
+</template>
+```
+
+With page transitions:
+
+```vue
+<template>
+  <NuxtPage :transition="{ name: 'page', mode: 'out-in' }" />
+</template>
+```
+
+Pass props to page:
+
+```vue
+<template>
+  <NuxtPage :foobar="123" />
+</template>
+```
+
+## NuxtLayout
+
+Controls layout rendering:
+
+```vue
+<!-- app.vue -->
+<template>
+  <NuxtLayout>
+    <NuxtPage />
+  </NuxtLayout>
+</template>
+```
+
+Dynamic layout:
+
+```vue
+<script setup lang="ts">
+const layout = ref('default')
+</script>
+
+<template>
+  <NuxtLayout :name="layout">
+    <NuxtPage />
+  </NuxtLayout>
+</template>
+```
+
+Layout with transitions:
+
+```vue
+<script setup lang="ts">
+definePageMeta({
+  layoutTransition: { name: 'slide', mode: 'out-in' },
+})
+</script>
+```
+
+## NuxtLoadingIndicator
+
+Progress bar for page navigation:
+
+```vue
+<!-- app.vue -->
+<template>
+  <NuxtLoadingIndicator color="#00dc82" :height="3" />
+  <NuxtLayout>
+    <NuxtPage />
+  </NuxtLayout>
+</template>
+```
+
+## NuxtErrorBoundary
+
+Catch and handle errors in child components:
+
+```vue
+<template>
+  <NuxtErrorBoundary>
+    <RiskyComponent />
+
+    <template #error="{ error, clearError }">
+      <p>Something went wrong: {{ error.message }}</p>
+      <button @click="clearError">Try again</button>
+    </template>
+  </NuxtErrorBoundary>
+</template>
+```
+
+## ClientOnly
+
+Render content only on client-side:
+
+```vue
+<template>
+  <ClientOnly>
+    <BrowserChart />
+
+    <template #fallback>
+      <p>Loading chart...</p>
+    </template>
+  </ClientOnly>
+</template>
+```
+
+## DevOnly
+
+Render content only in development:
+
+```vue
+<template>
+  <DevOnly>
+    <DebugPanel />
+  </DevOnly>
+</template>
+```
+
+## NuxtIsland
+
+Server components (experimental):
+
+```vue
+<template>
+  <NuxtIsland name="ServerMarkdown" :props="{ content }" />
+</template>
+```
+
+## NuxtImg and NuxtPicture
+
+Optimized images (requires `@nuxt/image` module):
+
+```vue
+<template>
+  <!-- Optimized single image -->
+  <NuxtImg src="/hero.png" width="600" height="400" alt="Hero" />
+
+  <!-- Responsive <picture> with modern formats -->
+  <NuxtPicture src="/hero.png" format="avif,webp" alt="Hero" />
+</template>
+```
+
+## Teleport
+
+Render content outside component tree:
+
+```vue
+<script setup lang="ts">
+const open = ref(false)
+</script>
+
+<template>
+  <button @click="open = true">Open Modal</button>
+
+  <Teleport to="body">
+    <div v-if="open" class="modal">
+      <p>Modal content</p>
+      <button @click="open = false">Close</button>
+    </div>
+  </Teleport>
+</template>
+```
+
+For SSR, use `<ClientOnly>` with Teleport:
+
+```vue
+<template>
+  <ClientOnly>
+    <Teleport to="#modals">
+      <p>Modal content</p>
+    </Teleport>
+  </ClientOnly>
+</template>
+```
+
+## NuxtRouteAnnouncer
+
+Accessibility: announces page changes to screen readers:
+
+```vue
+<!-- app.vue -->
+<template>
+  <NuxtRouteAnnouncer />
+  <NuxtLayout>
+    <NuxtPage />
+  </NuxtLayout>
+</template>
+```
+
+
diff --git a/skills/nuxt/references/features-composables.md b/skills/nuxt/references/features-composables.md
new file mode 100644
index 0000000..2d4d5f7
--- /dev/null
+++ b/skills/nuxt/references/features-composables.md
@@ -0,0 +1,276 @@
+---
+name: composables-auto-imports
+description: Auto-imported Vue APIs, Nuxt composables, and custom utilities
+---
+
+# Composables Auto-imports
+
+Nuxt automatically imports Vue APIs, Nuxt composables, and your custom composables/utilities.
+
+## Built-in Auto-imports
+
+### Vue APIs
+
+```vue
+<script setup lang="ts">
+// ref, computed, watch, onMounted, etc. need no imports
+const count = ref(0)
+const double = computed(() => count.value * 2)
+</script>
+```
+
+### Nuxt Composables
+
+```vue
+<script setup lang="ts">
+const route = useRoute()
+const config = useRuntimeConfig()
+const { data } = await useFetch('/api/items')
+</script>
+```
+
+## Custom Composables (`app/composables/`)
+
+### Creating Composables
+
+```ts
+// composables/useCounter.ts
+export function useCounter(initial = 0) {
+ const count = ref(initial)
+ const increment = () => count.value++
+ const decrement = () => count.value--
+ return { count, increment, decrement }
+}
+```
+
+```ts
+// composables/useAuth.ts
+export function useAuth() {
+ const user = useState('user', () => null)
+ const isLoggedIn = computed(() => !!user.value)
+
+ async function login(credentials: Credentials) {
+ user.value = await $fetch('/api/auth/login', {
+ method: 'POST',
+ body: credentials,
+ })
+ }
+
+ async function logout() {
+ await $fetch('/api/auth/logout', { method: 'POST' })
+ user.value = null
+ }
+
+ return { user, isLoggedIn, login, logout }
+}
+```
+
+### Using Composables
+
+```vue
+<script setup lang="ts">
+const { count, increment } = useCounter(10)
+</script>
+
+<template>
+  <button @click="increment">Count: {{ count }}</button>
+</template>
+```
+
+### File Scanning Rules
+
+Only top-level files are scanned:
+
+```
+composables/
+├── useAuth.ts → useAuth() ✓
+├── useCounter.ts → useCounter() ✓
+├── index.ts → exports ✓
+└── nested/
+ └── helper.ts → NOT auto-imported ✗
+```
+
+Re-export nested composables:
+
+```ts
+// composables/index.ts
+export { useHelper } from './nested/helper'
+```
+
+Or configure scanning:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ imports: {
+ dirs: [
+ 'composables',
+ 'composables/**', // Scan all nested
+ ],
+ },
+})
+```
+
+## Utilities (`app/utils/`)
+
+```ts
+// utils/format.ts
+export function formatDate(date: Date) {
+ return date.toLocaleDateString()
+}
+
+export function formatCurrency(amount: number) {
+ return new Intl.NumberFormat('en-US', {
+ style: 'currency',
+ currency: 'USD',
+ }).format(amount)
+}
+```
+
+```vue
+<template>
+  <p>{{ formatDate(new Date()) }}</p>
+  <p>{{ formatCurrency(99.99) }}</p>
+</template>
+```
+
+## Server Utils (`server/utils/`)
+
+```ts
+// server/utils/db.ts
+export function useDb() {
+ return createDbConnection()
+}
+
+// server/utils/auth.ts
+export function verifyToken(token: string) {
+ return jwt.verify(token, process.env.JWT_SECRET)
+}
+```
+
+```ts
+// server/api/users.ts
+export default defineEventHandler(() => {
+ const db = useDb() // Auto-imported
+ return db.query('SELECT * FROM users')
+})
+```
+
+## Third-party Package Imports
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ imports: {
+ presets: [
+ {
+ from: 'vue-i18n',
+ imports: ['useI18n'],
+ },
+ {
+ from: 'date-fns',
+ imports: ['format', 'parseISO', 'differenceInDays'],
+ },
+ {
+ from: '@vueuse/core',
+ imports: ['useMouse', 'useWindowSize'],
+ },
+ ],
+ },
+})
+```
+
+## Explicit Imports
+
+Use `#imports` alias when needed:
+
+```vue
+<script setup lang="ts">
+import { ref, computed } from '#imports'
+</script>
+```
+
+## Composable Context Rules
+
+Nuxt composables must be called in valid context:
+
+```ts
+// ❌ Wrong - module level
+const config = useRuntimeConfig()
+
+export function useMyComposable() {}
+```
+
+```ts
+// ✅ Correct - inside function
+export function useMyComposable() {
+ const config = useRuntimeConfig()
+ return { apiBase: config.public.apiBase }
+}
+```
+
+**Valid contexts:**
+- `<script setup>` blocks and component `setup()` functions
+- `defineNuxtPlugin` functions
+- `defineNuxtRouteMiddleware` functions
+
+## Shared State with `useState`
+
+`useState` creates SSR-safe state shared by key:
+
+```vue
+<script setup lang="ts">
+const counter = useState('counter', () => 0)
+</script>
+
+<template>
+  <p>Counter: {{ counter }}</p>
+  <button @click="counter++">+</button>
+  <button @click="counter--">-</button>
+</template>
+```
+
+## Creating Shared State
+
+Define reusable state composables:
+
+```ts
+// composables/useUser.ts
+export function useUser() {
+ return useState('user', () => null)
+}
+
+export function useLocale() {
+ return useState('locale', () => 'en')
+}
+```
+
+```vue
+<script setup lang="ts">
+const user = useUser()
+const locale = useLocale()
+</script>
+```
+
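Conceptually, keyed shared state behaves like a map from key to a single container: every caller with the same key gets the same object. This sketch is not Nuxt's implementation (it ignores reactivity and per-request isolation), it only illustrates the sharing semantics:

```typescript
const stateMap = new Map<string, { value: unknown }>()

// Minimal useState-like lookup: create the container on first call, reuse afterwards.
function useStateSketch<T>(key: string, init: () => T): { value: T } {
  if (!stateMap.has(key)) {
    stateMap.set(key, { value: init() })
  }
  return stateMap.get(key) as { value: T }
}

const a = useStateSketch<string | null>('user', () => null)
const b = useStateSketch<string | null>('user', () => null)
a.value = 'Jane'
// b.value is now 'Jane' too: both calls share one container
```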
+## Initializing State
+
+Use `callOnce` to initialize state with async data:
+
+```vue
+<script setup lang="ts">
+const config = useState('config', () => null)
+
+await callOnce(async () => {
+  config.value = await $fetch('/api/config')
+})
+</script>
+```
+
+## Best Practices
+
+### ❌ Don't Define State Outside Setup
+
+```ts
+// ❌ Wrong - causes memory leaks and shared state across requests
+export const globalState = ref({ user: null })
+```
+
+### ✅ Use useState Instead
+
+```ts
+// ✅ Correct - SSR-safe
+export const useGlobalState = () => useState('global', () => ({ user: null }))
+```
+
+## Clearing State
+
+```ts
+// Clear specific state
+clearNuxtState('counter')
+
+// Clear multiple states
+clearNuxtState(['counter', 'user'])
+
+// Clear all state (use with caution)
+clearNuxtState()
+```
+
+## With Pinia
+
+For complex state management, use Pinia:
+
+```bash
+npx nuxi module add pinia
+```
+
+```ts
+// stores/counter.ts
+export const useCounterStore = defineStore('counter', {
+ state: () => ({
+ count: 0,
+ }),
+ actions: {
+ increment() {
+ this.count++
+ },
+ },
+})
+```
+
+```ts
+// stores/user.ts (Composition API style)
+export const useUserStore = defineStore('user', () => {
+ const user = ref(null)
+ const isLoggedIn = computed(() => !!user.value)
+
+ async function login(credentials: Credentials) {
+ user.value = await $fetch('/api/login', {
+ method: 'POST',
+ body: credentials,
+ })
+ }
+
+ return { user, isLoggedIn, login }
+})
+```
+
+```vue
+<script setup lang="ts">
+const counter = useCounterStore()
+</script>
+
+<template>
+  <button @click="counter.increment()">{{ counter.count }}</button>
+</template>
+```
+
+## Advanced: Locale Example
+
+```ts
+// composables/useLocale.ts
+export function useLocale() {
+ return useState('locale', () => useDefaultLocale().value)
+}
+
+export function useDefaultLocale(fallback = 'en-US') {
+ const locale = ref(fallback)
+
+ if (import.meta.server) {
+ const reqLocale = useRequestHeaders()['accept-language']?.split(',')[0]
+ if (reqLocale) locale.value = reqLocale
+ }
+ else if (import.meta.client) {
+ const navLang = navigator.language
+ if (navLang) locale.value = navLang
+ }
+
+ return locale
+}
+```
+
+## State Serialization
+
+`useState` values are serialized to JSON. Avoid:
+
+- Functions
+- Classes
+- Symbols
+- Circular references
+
+```ts
+// ❌ Won't work
+useState('fn', () => () => console.log('hi'))
+useState('instance', () => new MyClass())
+
+// ✅ Works
+useState('data', () => ({ name: 'John', age: 30 }))
+useState('items', () => ['a', 'b', 'c'])
+```
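The constraint can be seen with a plain JSON round-trip, which approximates how state payloads travel from server to client:

```typescript
// Approximation of the SSR payload transfer: serialize, then parse.
function jsonRoundTrip<T>(value: T): T {
  return JSON.parse(JSON.stringify(value))
}

const plain = jsonRoundTrip({ name: 'John', age: 30 })
// plain survives intact: { name: 'John', age: 30 }

const withFn = jsonRoundTrip<Record<string, unknown>>({ name: 'John', greet: () => 'hi' })
// withFn.greet is undefined: functions are silently dropped by serialization
```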
+
+
diff --git a/skills/nuxt/references/rendering-modes.md b/skills/nuxt/references/rendering-modes.md
new file mode 100644
index 0000000..8760aed
--- /dev/null
+++ b/skills/nuxt/references/rendering-modes.md
@@ -0,0 +1,237 @@
+---
+name: rendering-modes
+description: Universal rendering, client-side rendering, and hybrid rendering in Nuxt
+---
+
+# Rendering Modes
+
+Nuxt supports multiple rendering modes: universal (SSR), client-side (CSR), and hybrid rendering.
+
+## Universal Rendering (Default)
+
+Server renders HTML, then hydrates on client:
+
+```ts
+// nuxt.config.ts - this is the default
+export default defineNuxtConfig({
+ ssr: true,
+})
+```
+
+**Benefits:**
+- Fast initial page load (HTML is ready)
+- SEO-friendly (content is in HTML)
+- Works without JavaScript initially
+
+**How it works:**
+1. Server executes Vue code, generates HTML
+2. Browser displays HTML immediately
+3. JavaScript loads and hydrates the page
+4. Page becomes fully interactive
+
+## Client-Side Rendering
+
+Render entirely in the browser:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ ssr: false,
+})
+```
+
+**Benefits:**
+- Simpler development (no SSR constraints)
+- Cheaper hosting (static files only)
+- Works offline
+
+**Use cases:**
+- Admin dashboards
+- SaaS applications
+- Apps behind authentication
+
+### SPA Loading Template
+
+Provide loading UI while app hydrates:
+
+```html
+<!-- app/spa-loading-template.html -->
+<div class="loader">Loading...</div>
+```
+
+## Hybrid Rendering
+
+Mix rendering modes per route using route rules:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ routeRules: {
+ // Static pages - prerendered at build
+ '/': { prerender: true },
+ '/about': { prerender: true },
+
+ // ISR - regenerate in background
+ '/blog/**': { isr: 3600 }, // Cache for 1 hour
+ '/products/**': { swr: true }, // Stale-while-revalidate
+
+ // Client-only rendering
+ '/admin/**': { ssr: false },
+ '/dashboard/**': { ssr: false },
+
+ // Server-rendered (default)
+ '/api/**': { cors: true },
+ },
+})
+```
+
+### Route Rules Reference
+
+| Rule | Description |
+|------|-------------|
+| `prerender: true` | Pre-render at build time |
+| `ssr: false` | Client-side only |
+| `swr: number \| true` | Stale-while-revalidate caching |
+| `isr: number \| true` | Incremental static regeneration |
+| `cache: { maxAge: number }` | Cache with TTL |
+| `redirect: string` | Redirect to another path |
+| `cors: true` | Add CORS headers |
+| `headers: object` | Custom response headers |
+
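As a simplified sketch of the `swr` decision (this glosses over Nitro's actual cache machinery; `isr` additionally persists the regenerated page at the CDN):

```typescript
type CacheDecision = 'serve-fresh' | 'serve-stale-and-revalidate'

// Within maxAge the cached response is fresh; past it, the stale copy is
// served immediately while a background revalidation refreshes the cache.
function swrDecision(ageSeconds: number, maxAgeSeconds: number): CacheDecision {
  return ageSeconds <= maxAgeSeconds ? 'serve-fresh' : 'serve-stale-and-revalidate'
}
```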
+### Inline Route Rules
+
+Define per-page:
+
+```vue
+<script setup lang="ts">
+defineRouteRules({
+  prerender: true,
+})
+</script>
+```
+
+## Prerendering
+
+Generate static HTML at build time:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ // Prerender specific routes
+ routeRules: {
+ '/': { prerender: true },
+ '/about': { prerender: true },
+ '/posts/*': { prerender: true },
+ },
+})
+```
+
+Or use `nuxt generate`:
+
+```bash
+nuxt generate
+```
+
+### Programmatic Prerendering
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ hooks: {
+ async 'prerender:routes'({ routes }) {
+ // Add dynamic routes
+ const posts = await fetchPostSlugs()
+ for (const slug of posts) {
+ routes.add(`/posts/${slug}`)
+ }
+ },
+ },
+})
+```
+
+Or in pages:
+
+```ts
+// server/api/posts.ts or a plugin
+prerenderRoutes(['/posts/1', '/posts/2', '/posts/3'])
+```
+
+## Edge-Side Rendering
+
+Render at CDN edge servers:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ nitro: {
+ preset: 'cloudflare-pages', // or 'vercel-edge', 'netlify-edge'
+ },
+})
+```
+
+Supported platforms:
+- Cloudflare Pages/Workers
+- Vercel Edge Functions
+- Netlify Edge Functions
+
+## Conditional Rendering
+
+Use `import.meta.server` and `import.meta.client`:
+
+```vue
+<script setup lang="ts">
+if (import.meta.server) {
+  // runs only during server-side rendering
+}
+if (import.meta.client) {
+  // runs only in the browser
+}
+</script>
+```
+
+For components:
+
+```vue
+<template>
+  <ClientOnly>
+    <InteractiveWidget />
+
+    <template #fallback>
+      <p>Loading...</p>
+    </template>
+  </ClientOnly>
+</template>
+```
+
+
diff --git a/skills/onboarding-cro/SKILL.md b/skills/onboarding-cro/SKILL.md
new file mode 100644
index 0000000..dadc7ea
--- /dev/null
+++ b/skills/onboarding-cro/SKILL.md
@@ -0,0 +1,219 @@
+---
+name: onboarding-cro
+version: 1.0.0
+description: When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also use when the user mentions "onboarding flow," "activation rate," "user activation," "first-run experience," "empty states," "onboarding checklist," "aha moment," or "new user experience." For signup/registration optimization, see signup-flow-cro. For ongoing email sequences, see email-sequence.
+---
+
+# Onboarding CRO
+
+You are an expert in user onboarding and activation. Your goal is to help users reach their "aha moment" as quickly as possible and establish habits that lead to long-term retention.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before providing recommendations, understand:
+
+1. **Product Context** - What type of product? B2B or B2C? Core value proposition?
+2. **Activation Definition** - What's the "aha moment"? What action indicates a user "gets it"?
+3. **Current State** - What happens after signup? Where do users drop off?
+
+---
+
+## Core Principles
+
+### 1. Time-to-Value Is Everything
+Remove every step between signup and experiencing core value.
+
+### 2. One Goal Per Session
+Focus first session on one successful outcome. Save advanced features for later.
+
+### 3. Do, Don't Show
+Interactive > Tutorial. Doing the thing > Learning about the thing.
+
+### 4. Progress Creates Motivation
+Show advancement. Celebrate completions. Make the path visible.
+
+---
+
+## Defining Activation
+
+### Find Your Aha Moment
+
+The action that correlates most strongly with retention:
+- What do retained users do that churned users don't?
+- What's the earliest indicator of future engagement?
+
+**Examples by product type:**
+- Project management: Create first project + add team member
+- Analytics: Install tracking + see first report
+- Design tool: Create first design + export/share
+- Marketplace: Complete first transaction
+
+### Activation Metrics
+- % of signups who reach activation
+- Time to activation
+- Steps to activation
+- Activation by cohort/source
+
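A minimal sketch of how the first two metrics fall out of raw event data (field names here are assumptions, not a prescribed schema):

```typescript
type UserEvents = { signedUpAt: number; activatedAt?: number } // epoch milliseconds

// % of signups who reached the activation event
function activationRate(users: UserEvents[]): number {
  if (users.length === 0) return 0
  return users.filter(u => u.activatedAt !== undefined).length / users.length
}

// Median hours from signup to activation, over activated users only
function medianHoursToActivation(users: UserEvents[]): number {
  const hours = users
    .filter(u => u.activatedAt !== undefined)
    .map(u => ((u.activatedAt as number) - u.signedUpAt) / 3_600_000)
    .sort((a, b) => a - b)
  const mid = Math.floor(hours.length / 2)
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2
}
```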
+---
+
+## Onboarding Flow Design
+
+### Immediate Post-Signup (First 30 Seconds)
+
+| Approach | Best For | Risk |
+|----------|----------|------|
+| Product-first | Simple products, B2C, mobile | Blank slate overwhelm |
+| Guided setup | Products needing personalization | Adds friction before value |
+| Value-first | Products with demo data | May not feel "real" |
+
+**Whatever you choose:**
+- Clear single next action
+- No dead ends
+- Progress indication if multi-step
+
+### Onboarding Checklist Pattern
+
+**When to use:**
+- Multiple setup steps required
+- Product has several features to discover
+- Self-serve B2B products
+
+**Best practices:**
+- 3-7 items (not overwhelming)
+- Order by value (most impactful first)
+- Start with quick wins
+- Progress bar/completion %
+- Celebration on completion
+- Dismiss option (don't trap users)
+
+### Empty States
+
+Empty states are onboarding opportunities, not dead ends.
+
+**Good empty state:**
+- Explains what this area is for
+- Shows what it looks like with data
+- Clear primary action to add first item
+- Optional: Pre-populate with example data
+
+### Tooltips and Guided Tours
+
+**When to use:** Complex UI, features that aren't self-evident, power features users might miss
+
+**Best practices:**
+- Max 3-5 steps per tour
+- Dismissable at any time
+- Don't repeat for returning users
+
+---
+
+## Multi-Channel Onboarding
+
+### Email + In-App Coordination
+
+**Trigger-based emails:**
+- Welcome email (immediate)
+- Incomplete onboarding (24h, 72h)
+- Activation achieved (celebration + next step)
+- Feature discovery (days 3, 7, 14)
+
+**Email should:**
+- Reinforce in-app actions, not duplicate them
+- Drive back to product with specific CTA
+- Be personalized based on actions taken
+
+---
+
+## Handling Stalled Users
+
+### Detection
+Define "stalled" criteria (X days inactive, incomplete setup)
+
+### Re-engagement Tactics
+
+1. **Email sequence** - Reminder of value, address blockers, offer help
+2. **In-app recovery** - Welcome back, pick up where left off
+3. **Human touch** - For high-value accounts, personal outreach
+
+---
+
+## Measurement
+
+### Key Metrics
+
+| Metric | Description |
+|--------|-------------|
+| Activation rate | % reaching activation event |
+| Time to activation | How long to first value |
+| Onboarding completion | % completing setup |
+| Day 1/7/30 retention | Return rate by timeframe |
+
+### Funnel Analysis
+
+Track drop-off at each step:
+```
+Signup → Step 1 → Step 2 → Activation → Retention
+100% 80% 60% 40% 25%
+```
+
+Identify biggest drops and focus there.
+
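As a rough illustration (the counts and step names are hypothetical), the biggest relative drop can be computed mechanically from per-step user counts:

```typescript
type FunnelStep = { name: string; users: number }

// Find the step transition with the largest relative drop-off.
function biggestDrop(steps: FunnelStep[]): { from: string; to: string; drop: number } {
  let worst = { from: '', to: '', drop: 0 }
  for (let i = 1; i < steps.length; i++) {
    const drop = (steps[i - 1].users - steps[i].users) / steps[i - 1].users
    if (drop > worst.drop) {
      worst = { from: steps[i - 1].name, to: steps[i].name, drop }
    }
  }
  return worst
}

const funnel: FunnelStep[] = [
  { name: 'Signup', users: 1000 },
  { name: 'Step 1', users: 800 },
  { name: 'Step 2', users: 600 },
  { name: 'Activation', users: 400 },
  { name: 'Retention', users: 250 },
]
// Here the largest relative drop is Activation → Retention (37.5%)
```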
+---
+
+## Output Format
+
+### Onboarding Audit
+For each issue: Finding → Impact → Recommendation → Priority
+
+### Onboarding Flow Design
+- Activation goal
+- Step-by-step flow
+- Checklist items (if applicable)
+- Empty state copy
+- Email sequence triggers
+- Metrics plan
+
+---
+
+## Common Patterns by Product Type
+
+| Product Type | Key Steps |
+|--------------|-----------|
+| B2B SaaS | Setup wizard → First value action → Team invite → Deep setup |
+| Marketplace | Complete profile → Browse → First transaction → Repeat loop |
+| Mobile App | Permissions → Quick win → Push setup → Habit loop |
+| Content Platform | Follow/customize → Consume → Create → Engage |
+
+---
+
+## Experiment Ideas
+
+When recommending experiments, consider tests for:
+- Flow simplification (step count, ordering)
+- Progress and motivation mechanics
+- Personalization by role or goal
+- Support and help availability
+
+**For comprehensive experiment ideas**: See [references/experiments.md](references/experiments.md)
+
+---
+
+## Task-Specific Questions
+
+1. What action most correlates with retention?
+2. What happens immediately after signup?
+3. Where do users currently drop off?
+4. What's your activation rate target?
+5. Do you have cohort analysis on successful vs. churned users?
+
+---
+
+## Related Skills
+
+- **signup-flow-cro**: For optimizing the signup before onboarding
+- **email-sequence**: For onboarding email series
+- **paywall-upgrade-cro**: For converting to paid during/after onboarding
+- **ab-test-setup**: For testing onboarding changes
diff --git a/skills/onboarding-cro/onboarding-cro b/skills/onboarding-cro/onboarding-cro
new file mode 120000
index 0000000..e876b87
--- /dev/null
+++ b/skills/onboarding-cro/onboarding-cro
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/onboarding-cro/
\ No newline at end of file
diff --git a/skills/onboarding-cro/references/experiments.md b/skills/onboarding-cro/references/experiments.md
new file mode 100644
index 0000000..2258410
--- /dev/null
+++ b/skills/onboarding-cro/references/experiments.md
@@ -0,0 +1,248 @@
+# Onboarding Experiment Ideas
+
+Comprehensive list of A/B tests and experiments for user onboarding and activation.
+
+## Flow Simplification Experiments
+
+### Reduce Friction
+
+| Test | Hypothesis |
+|------|------------|
+| Email verification timing | During vs. after onboarding |
+| Empty states vs. dummy data | Pre-populated examples |
+| Pre-filled templates | Accelerate setup with templates |
+| OAuth options | Faster account linking |
+| Required step count | Fewer required steps |
+| Optional vs. required fields | Minimize requirements |
+| Skip options | Allow bypassing non-critical steps |
+
+### Step Sequencing
+
+| Test | Hypothesis |
+|------|------------|
+| Step ordering | Test different sequences |
+| Value-first ordering | Highest-value features first |
+| Friction placement | Move hard steps later |
+| Required vs. optional balance | Ratio of required steps |
+| Single vs. branching paths | One path vs. personalized |
+| Quick start vs. full setup | Minimal path to value |
+
+### Progress & Motivation
+
+| Test | Hypothesis |
+|------|------------|
+| Progress bars | Show completion percentage |
+| Checklist length | 3-5 items vs. 5-7 items |
+| Gamification | Badges, rewards, achievements |
+| Completion messaging | "X% complete" visibility |
+| Starting point | Begin at 20% vs. 0% |
+| Celebration moments | Acknowledge completions |
+
+---
+
+## Guided Experience Experiments
+
+### Product Tours
+
+| Test | Hypothesis |
+|------|------------|
+| Interactive tours | Tools like Navattic, Storylane |
+| Tooltip vs. modal guidance | Subtle vs. attention-grabbing |
+| Video tutorials | For complex workflows |
+| Self-paced vs. guided | User control vs. structured |
+| Tour length | Shorter vs. comprehensive |
+| Tour triggering | Automatic vs. user-initiated |
+
+### CTA Optimization
+
+| Test | Hypothesis |
+|------|------------|
+| CTA text variations | Action-oriented copy testing |
+| CTA placement | Position within screens |
+| In-app tooltips | Feature discovery prompts |
+| Sticky CTAs | Persist during onboarding |
+| CTA contrast | Visual prominence |
+| Secondary CTAs | "Learn more" vs. primary only |
+
+### UI Guidance
+
+| Test | Hypothesis |
+|------|------------|
+| Hotspot highlights | Draw attention to key features |
+| Coachmarks | Contextual tips |
+| Feature announcements | New feature discovery |
+| Contextual help | Help where users need it |
+| Search vs. guided | Self-service vs. directed |
+
+---
+
+## Personalization Experiments
+
+### User Segmentation
+
+| Test | Hypothesis |
+|------|------------|
+| Role-based onboarding | Different paths by role |
+| Goal-based paths | Customize by stated goal |
+| Role-specific dashboards | Relevant default views |
+| Use-case question | Personalize based on answer |
+| Industry-specific paths | Vertical customization |
+| Experience-based | Beginner vs. expert paths |
+
+### Dynamic Content
+
+| Test | Hypothesis |
+|------|------------|
+| Personalized welcome | Name, company, role |
+| Industry examples | Relevant use cases |
+| Dynamic recommendations | Based on user answers |
+| Template suggestions | Pre-filled for segment |
+| Feature highlighting | Relevant to stated goals |
+| Benchmark data | Industry-specific metrics |
+
+---
+
+## Quick Wins & Engagement Experiments
+
+### Time-to-Value
+
+| Test | Hypothesis |
+|------|------------|
+| First quick win | "Complete your first X" |
+| Success messages | After key actions |
+| Progress celebrations | Milestone moments |
+| Next step suggestions | After each completion |
+| Value demonstration | Show what they achieved |
+| Outcome preview | What success looks like |
+
+### Motivation Mechanics
+
+| Test | Hypothesis |
+|------|------------|
+| Achievement badges | Gamification elements |
+| Streaks | Consecutive day engagement |
+| Leaderboards | Social comparison (if appropriate) |
+| Rewards | Incentives for completion |
+| Unlock mechanics | Features revealed progressively |
+
+### Support & Help
+
+| Test | Hypothesis |
+|------|------------|
+| Free onboarding calls | For complex products |
+| Contextual help | Throughout onboarding |
+| Chat support | Availability during onboarding |
+| Proactive outreach | For stuck users |
+| Self-service resources | Help docs, videos |
+| Community access | Peer support early |
+
+---
+
+## Email & Multi-Channel Experiments
+
+### Onboarding Emails
+
+| Test | Hypothesis |
+|------|------------|
+| Founder welcome email | Personal vs. generic |
+| Behavior-based triggers | Action/inaction based |
+| Email timing | Immediate vs. delayed |
+| Email frequency | More vs. fewer touches |
+| Quick tips format | Short actionable content |
+| Video in email | More engaging format |
+
+### Email Content
+
+| Test | Hypothesis |
+|------|------------|
+| Subject lines | Open rate optimization |
+| Personalization depth | Name vs. behavior-based |
+| CTA prominence | Single clear action |
+| Social proof inclusion | Testimonials in email |
+| Urgency messaging | Trial reminders |
+| Plain text vs. designed | Format testing |
+
+### Feedback Loops
+
+| Test | Hypothesis |
+|------|------------|
+| NPS during onboarding | When to ask |
+| Blocking question | "What's stopping you?" |
+| NPS follow-up | Actions based on score |
+| In-app feedback | Thumbs up/down on features |
+| Survey timing | When to request feedback |
+| Feedback incentives | Reward for completing |
+
+---
+
+## Re-engagement Experiments
+
+### Stalled User Recovery
+
+| Test | Hypothesis |
+|------|------------|
+| Re-engagement email timing | When to send |
+| Personal outreach | Human vs. automated |
+| Simplified path | Reduced steps for returners |
+| Incentive offers | Discount or extended trial |
+| Problem identification | Ask what's blocking |
+| Demo offer | Live walkthrough |
+
+### Return Experience
+
+| Test | Hypothesis |
+|------|------------|
+| Welcome back message | Acknowledge return |
+| Progress resume | Pick up where left off |
+| Changed state | What happened while away |
+| Re-onboarding | Fresh start option |
+| Urgency messaging | Trial time remaining |
+
+---
+
+## Technical & UX Experiments
+
+### Performance
+
+| Test | Hypothesis |
+|------|------------|
+| Load time optimization | Faster = higher completion |
+| Progressive loading | Perceived performance |
+| Offline capability | Mobile experience |
+| Error handling | Graceful failure recovery |
+
+### Mobile Onboarding
+
+| Test | Hypothesis |
+|------|------------|
+| Touch targets | Size and spacing |
+| Swipe navigation | Mobile-native patterns |
+| Screen count | Fewer screens needed |
+| Input optimization | Mobile-friendly forms |
+| Permission timing | When to ask |
+
+### Accessibility
+
+| Test | Hypothesis |
+|------|------------|
+| Screen reader support | Accessibility impact |
+| Keyboard navigation | Non-mouse users |
+| Color contrast | Visibility |
+| Font sizing | Readability |
+
+---
+
+## Metrics to Track
+
+For all experiments, measure:
+
+| Metric | Description |
+|--------|-------------|
+| Activation rate | % reaching activation event |
+| Time to activation | Hours/days to first value |
+| Step completion rate | % completing each step |
+| Drop-off points | Where users abandon |
+| Return rate | Users who come back |
+| Day 1/7/30 retention | Engagement over time |
+| Feature adoption | Which features get used |
+| Support requests | Volume during onboarding |
diff --git a/skills/organization-best-practices/SKILL.md b/skills/organization-best-practices/SKILL.md
new file mode 100644
index 0000000..c032018
--- /dev/null
+++ b/skills/organization-best-practices/SKILL.md
@@ -0,0 +1,586 @@
+---
+name: organization-best-practices
+description: This skill provides guidance and enforcement rules for implementing multi-tenant organizations, teams, and role-based access control using Better Auth's organization plugin.
+---
+
+## Setting Up Organizations
+
+When adding organizations to your application, configure the `organization` plugin with appropriate limits and permissions.
+
+```ts
+import { betterAuth } from "better-auth";
+import { organization } from "better-auth/plugins";
+
+export const auth = betterAuth({
+ plugins: [
+ organization({
+ allowUserToCreateOrganization: true,
+ organizationLimit: 5, // Max orgs per user
+ membershipLimit: 100, // Max members per org
+ }),
+ ],
+});
+```
+
+**Note**: After adding the plugin, run `npx @better-auth/cli migrate` to add the required database tables.
+
+### Client-Side Setup
+
+Add the client plugin to access organization methods:
+
+```ts
+import { createAuthClient } from "better-auth/client";
+import { organizationClient } from "better-auth/client/plugins";
+
+export const authClient = createAuthClient({
+ plugins: [organizationClient()],
+});
+```
+
+## Creating Organizations
+
+Organizations are the top-level entity for grouping users. When created, the creator is automatically assigned the `owner` role.
+
+```ts
+const createOrg = async () => {
+ const { data, error } = await authClient.organization.create({
+ name: "My Company",
+ slug: "my-company",
+ logo: "https://example.com/logo.png",
+ metadata: { plan: "pro" },
+ });
+};
+```
+
+### Controlling Organization Creation
+
+Restrict who can create organizations based on user attributes:
+
+```ts
+organization({
+ allowUserToCreateOrganization: async (user) => {
+ return user.emailVerified === true;
+ },
+ organizationLimit: async (user) => {
+ // Premium users get more organizations
+ return user.plan === "premium" ? 20 : 3;
+ },
+});
+```
+
+### Creating Organizations on Behalf of Users
+
+Administrators can create organizations for other users (server-side only):
+
+```ts
+await auth.api.createOrganization({
+ body: {
+ name: "Client Organization",
+ slug: "client-org",
+ userId: "user-id-who-will-be-owner", // `userId` is required
+ },
+});
+```
+
+**Note**: The `userId` parameter cannot be used alongside session headers.
+
+
+## Active Organizations
+
+The active organization is stored in the session and scopes subsequent API calls. Always set an active organization after the user selects one.
+
+```ts
+const setActive = async (organizationId: string) => {
+ const { data, error } = await authClient.organization.setActive({
+ organizationId,
+ });
+};
+```
+
+Many endpoints use the active organization when `organizationId` is not provided:
+
+```ts
+// These use the active organization automatically
+await authClient.organization.listMembers();
+await authClient.organization.listInvitations();
+await authClient.organization.inviteMember({ email: "user@example.com", role: "member" });
+```
+
+### Getting Full Organization Data
+
+Retrieve the active organization with all its members, invitations, and teams:
+
+```ts
+const { data } = await authClient.organization.getFullOrganization();
+// data.organization, data.members, data.invitations, data.teams
+```
+
+## Members
+
+Members are users who belong to an organization. Each member has a role that determines their permissions.
+
+### Adding Members (Server-Side)
+
+Add members directly without invitations (useful for admin operations):
+
+```ts
+await auth.api.addMember({
+ body: {
+ userId: "user-id",
+ role: "member",
+ organizationId: "org-id",
+ },
+});
+```
+
+**Note**: For client-side member additions, use the invitation system instead.
+
+### Assigning Multiple Roles
+
+Members can have multiple roles for fine-grained permissions:
+
+```ts
+await auth.api.addMember({
+ body: {
+ userId: "user-id",
+ role: ["admin", "moderator"],
+ organizationId: "org-id",
+ },
+});
+```
+
+### Removing Members
+
+Remove members by ID or email:
+
+```ts
+await authClient.organization.removeMember({
+ memberIdOrEmail: "user@example.com",
+});
+```
+
+**Important**: The last owner cannot be removed. Assign the owner role to another member first.
+
+### Updating Member Roles
+
+```ts
+await authClient.organization.updateMemberRole({
+ memberId: "member-id",
+ role: "admin",
+});
+```
+
+### Membership Limits
+
+Control the maximum number of members per organization:
+
+```ts
+organization({
+  membershipLimit: async (user, organization) => {
+    if (organization.metadata?.plan === "enterprise") {
+      return 1000;
+    }
+    return 50;
+  },
+});
+```
+
+## Invitations
+
+The invitation system allows admins to invite users via email. Configure email sending to enable invitations.
+
+### Setting Up Invitation Emails
+
+```ts
+import { betterAuth } from "better-auth";
+import { organization } from "better-auth/plugins";
+import { sendEmail } from "./email";
+
+export const auth = betterAuth({
+  plugins: [
+    organization({
+      sendInvitationEmail: async (data) => {
+        const { email, organization, inviter, invitation } = data;
+
+        await sendEmail({
+          to: email,
+          subject: `Join ${organization.name}`,
+          // Adjust the accept URL below to your app's invitation route
+          html: `
+            <p>${inviter.user.name} invited you to join ${organization.name}</p>
+            <a href="https://yourapp.com/accept-invitation/${invitation.id}">Accept Invitation</a>
+          `,
+        });
+      },
+    }),
+  ],
+});
+```
+
+### Sending Invitations
+
+```ts
+await authClient.organization.inviteMember({
+ email: "newuser@example.com",
+ role: "member",
+});
+```
+
+### Creating Shareable Invitation URLs
+
+For sharing via Slack, SMS, or in-app notifications:
+
+```ts
+const { data } = await authClient.organization.getInvitationURL({
+ email: "newuser@example.com",
+ role: "member",
+ callbackURL: "https://yourapp.com/dashboard",
+});
+
+// Share data.url via any channel
+```
+
+**Note**: This endpoint does not call `sendInvitationEmail`. Handle delivery yourself.
+
+### Accepting Invitations
+
+```ts
+await authClient.organization.acceptInvitation({
+ invitationId: "invitation-id",
+});
+```
+
+### Invitation Configuration
+
+```ts
+organization({
+  invitationExpiresIn: 60 * 60 * 24 * 7, // 7 days (default: 48 hours)
+  invitationLimit: 100, // Max pending invitations per org
+  cancelPendingInvitationsOnReInvite: true, // Cancel old invites when re-inviting
+});
+```
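+
+Since `invitationExpiresIn` is expressed in seconds, the 7-day window above works out as follows. This is a self-contained sketch of the expiry arithmetic, not the plugin's internal code (the helper name is ours):
+
+```ts
+// Hypothetical helper mirroring how an expiresIn window plays out.
+const isInvitationExpired = (
+  createdAt: Date,
+  expiresInSeconds: number,
+  now: Date = new Date(),
+): boolean => now.getTime() > createdAt.getTime() + expiresInSeconds * 1000;
+```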
+
+## Roles & Permissions
+
+The plugin provides role-based access control (RBAC) with three default roles:
+
+| Role | Description |
+|------|-------------|
+| `owner` | Full access, can delete organization |
+| `admin` | Can manage members, invitations, settings |
+| `member` | Basic access to organization resources |
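+
+As a mental model only (the plugin resolves permissions internally), the defaults behave like a role-to-permission map checked with `"resource:action"` strings:
+
+```ts
+// Hypothetical map mirroring the table above; the exact permission
+// strings are illustrative, not the plugin's canonical list.
+const rolePermissions: Record<string, string[]> = {
+  owner: ["member:read", "member:write", "invitation:write", "organization:delete"],
+  admin: ["member:read", "member:write", "invitation:write"],
+  member: ["member:read"],
+};
+
+const roleHasPermission = (role: string, permission: string): boolean =>
+  rolePermissions[role]?.includes(permission) ?? false;
+```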
+
+
+### Checking Permissions
+
+```ts
+const { data } = await authClient.organization.hasPermission({
+ permission: "member:write",
+});
+
+if (data?.hasPermission) {
+ // User can manage members
+}
+```
+
+### Client-Side Permission Checks
+
+For UI rendering without API calls:
+
+```ts
+const canManageMembers = authClient.organization.checkRolePermission({
+ role: "admin",
+ permissions: ["member:write"],
+});
+```
+
+**Note**: Client-side role permission checks (`checkRolePermission`) cannot evaluate dynamically created roles. With dynamic access control, use the `hasPermission` endpoint instead.
+
+## Teams
+
+Teams allow grouping members within an organization.
+
+### Enabling Teams
+
+```ts
+import { organization } from "better-auth/plugins";
+
+export const auth = betterAuth({
+  plugins: [
+    organization({
+      teams: {
+        enabled: true,
+      },
+    }),
+  ],
+});
+```
+
+### Creating Teams
+
+```ts
+const { data } = await authClient.organization.createTeam({
+ name: "Engineering",
+});
+```
+
+### Managing Team Members
+
+```ts
+// Add a member to a team (must be org member first)
+await authClient.organization.addTeamMember({
+ teamId: "team-id",
+ userId: "user-id",
+});
+
+// Remove from team (stays in org)
+await authClient.organization.removeTeamMember({
+ teamId: "team-id",
+ userId: "user-id",
+});
+```
+
+### Active Teams
+
+Similar to active organizations, set an active team for the session:
+
+```ts
+await authClient.organization.setActiveTeam({
+ teamId: "team-id",
+});
+```
+
+### Team Limits
+
+```ts
+organization({
+  teams: {
+    maximumTeams: 20, // Max teams per org
+    maximumMembersPerTeam: 50, // Max members per team
+    allowRemovingAllTeams: false, // Prevent removing last team
+  },
+});
+```
+
+## Dynamic Access Control
+
+For applications needing custom roles per organization at runtime, enable dynamic access control.
+
+### Enabling Dynamic Access Control
+
+```ts
+import { organization } from "better-auth/plugins";
+
+export const auth = betterAuth({
+  plugins: [
+    organization({
+      dynamicAccessControl: {
+        enabled: true,
+      },
+    }),
+  ],
+});
+```
+
+### Creating Custom Roles
+
+```ts
+await authClient.organization.createRole({
+  role: "moderator",
+  permission: {
+    member: ["read"],
+    invitation: ["read"],
+  },
+});
+```
+
+### Updating and Deleting Roles
+
+```ts
+// Update role permissions
+await authClient.organization.updateRole({
+  roleId: "role-id",
+  permission: {
+    member: ["read", "write"],
+  },
+});
+
+// Delete a custom role
+await authClient.organization.deleteRole({
+  roleId: "role-id",
+});
+```
+
+**Note**: Pre-defined roles (owner, admin, member) cannot be deleted. Roles assigned to members cannot be deleted until members are reassigned.
+
+## Lifecycle Hooks
+
+Execute custom logic at various points in the organization lifecycle:
+
+```ts
+organization({
+  hooks: {
+    organization: {
+      beforeCreate: async ({ data, user }) => {
+        // Validate or modify data before creation
+        return {
+          data: {
+            ...data,
+            metadata: { ...data.metadata, createdBy: user.id },
+          },
+        };
+      },
+      afterCreate: async ({ organization, member }) => {
+        // Post-creation logic (e.g., send welcome email, create default resources)
+        await createDefaultResources(organization.id);
+      },
+      beforeDelete: async ({ organization }) => {
+        // Cleanup before deletion
+        await archiveOrganizationData(organization.id);
+      },
+    },
+    member: {
+      afterCreate: async ({ member, organization }) => {
+        await notifyAdmins(organization.id, `New member joined`);
+      },
+    },
+    invitation: {
+      afterCreate: async ({ invitation, organization, inviter }) => {
+        await logInvitation(invitation);
+      },
+    },
+  },
+});
+```
+
+## Schema Customization
+
+Customize table names and field names, or add extra fields:
+
+```ts
+organization({
+  schema: {
+    organization: {
+      modelName: "workspace", // Rename table
+      fields: {
+        name: "workspaceName", // Rename fields
+      },
+      additionalFields: {
+        billingId: {
+          type: "string",
+          required: false,
+        },
+      },
+    },
+    member: {
+      additionalFields: {
+        department: {
+          type: "string",
+          required: false,
+        },
+        title: {
+          type: "string",
+          required: false,
+        },
+      },
+    },
+  },
+});
+```
+
+## Security Considerations
+
+### Owner Protection
+
+- The last owner cannot be removed from an organization
+- The last owner cannot leave the organization
+- The owner role cannot be removed from the last owner
+
+Always ensure ownership transfer before removing the current owner:
+
+```ts
+// Transfer ownership first
+await authClient.organization.updateMemberRole({
+ memberId: "new-owner-member-id",
+ role: "owner",
+});
+
+// Then the previous owner can be demoted or removed
+```
+
+### Organization Deletion
+
+Deleting an organization removes all associated data (members, invitations, teams). Prevent accidental deletion:
+
+```ts
+organization({
+ disableOrganizationDeletion: true, // Disable via config
+});
+```
+
+Or implement soft delete via hooks:
+
+```ts
+organization({
+  hooks: {
+    organization: {
+      beforeDelete: async ({ organization }) => {
+        // Archive instead of delete
+        await archiveOrganization(organization.id);
+        throw new Error("Organization archived, not deleted");
+      },
+    },
+  },
+});
+```
+
+### Invitation Security
+
+- Invitations expire after 48 hours by default
+- Only the invited email address can accept an invitation
+- Pending invitations can be cancelled by organization admins
+
+## Complete Configuration Example
+
+```ts
+import { betterAuth } from "better-auth";
+import { organization } from "better-auth/plugins";
+import { sendEmail } from "./email";
+
+export const auth = betterAuth({
+  plugins: [
+    organization({
+      // Organization limits
+      allowUserToCreateOrganization: true,
+      organizationLimit: 10,
+      membershipLimit: 100,
+      creatorRole: "owner",
+
+      // Slugs
+      defaultOrganizationIdField: "slug",
+
+      // Invitations
+      invitationExpiresIn: 60 * 60 * 24 * 7, // 7 days
+      invitationLimit: 50,
+      sendInvitationEmail: async (data) => {
+        await sendEmail({
+          to: data.email,
+          subject: `Join ${data.organization.name}`,
+          // Adjust the accept URL to your app's invitation route
+          html: `<a href="https://yourapp.com/accept-invitation/${data.invitation.id}">Accept invitation</a>`,
+        });
+      },
+
+      // Hooks
+      hooks: {
+        organization: {
+          afterCreate: async ({ organization }) => {
+            console.log(`Organization ${organization.name} created`);
+          },
+        },
+      },
+    }),
+  ],
+});
+```
diff --git a/skills/organization-best-practices/organization-best-practices b/skills/organization-best-practices/organization-best-practices
new file mode 120000
index 0000000..15bd05f
--- /dev/null
+++ b/skills/organization-best-practices/organization-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/organization-best-practices/
\ No newline at end of file
diff --git a/skills/page-cro/SKILL.md b/skills/page-cro/SKILL.md
new file mode 100644
index 0000000..437761a
--- /dev/null
+++ b/skills/page-cro/SKILL.md
@@ -0,0 +1,181 @@
+---
+name: page-cro
+version: 1.0.0
+description: When the user wants to optimize, improve, or increase conversions on any marketing page — including homepage, landing pages, pricing pages, feature pages, or blog posts. Also use when the user says "CRO," "conversion rate optimization," "this page isn't converting," "improve conversions," or "why isn't this page working." For signup/registration flows, see signup-flow-cro. For post-signup activation, see onboarding-cro. For forms outside of signup, see form-cro. For popups/modals, see popup-cro.
+---
+
+# Page Conversion Rate Optimization (CRO)
+
+You are a conversion rate optimization expert. Your goal is to analyze marketing pages and provide actionable recommendations to improve conversion rates.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before providing recommendations, identify:
+
+1. **Page Type**: Homepage, landing page, pricing, feature, blog, about, other
+2. **Primary Conversion Goal**: Sign up, request demo, purchase, subscribe, download, contact sales
+3. **Traffic Context**: Where are visitors coming from? (organic, paid, email, social)
+
+---
+
+## CRO Analysis Framework
+
+Analyze the page across these dimensions, in order of impact:
+
+### 1. Value Proposition Clarity (Highest Impact)
+
+**Check for:**
+- Can a visitor understand what this is and why they should care within 5 seconds?
+- Is the primary benefit clear, specific, and differentiated?
+- Is it written in the customer's language (not company jargon)?
+
+**Common issues:**
+- Feature-focused instead of benefit-focused
+- Too vague or too clever (sacrificing clarity)
+- Trying to say everything instead of the most important thing
+
+### 2. Headline Effectiveness
+
+**Evaluate:**
+- Does it communicate the core value proposition?
+- Is it specific enough to be meaningful?
+- Does it match the traffic source's messaging?
+
+**Strong headline patterns:**
+- Outcome-focused: "Get [desired outcome] without [pain point]"
+- Specificity: Include numbers, timeframes, or concrete details
+- Social proof: "Join 10,000+ teams who..."
+
+### 3. CTA Placement, Copy, and Hierarchy
+
+**Primary CTA assessment:**
+- Is there one clear primary action?
+- Is it visible without scrolling?
+- Does the button copy communicate value, not just action?
+ - Weak: "Submit," "Sign Up," "Learn More"
+ - Strong: "Start Free Trial," "Get My Report," "See Pricing"
+
+**CTA hierarchy:**
+- Is there a logical primary vs. secondary CTA structure?
+- Are CTAs repeated at key decision points?
+
+### 4. Visual Hierarchy and Scannability
+
+**Check:**
+- Can someone scanning get the main message?
+- Are the most important elements visually prominent?
+- Is there enough white space?
+- Do images support or distract from the message?
+
+### 5. Trust Signals and Social Proof
+
+**Types to look for:**
+- Customer logos (especially recognizable ones)
+- Testimonials (specific, attributed, with photos)
+- Case study snippets with real numbers
+- Review scores and counts
+- Security badges (where relevant)
+
+**Placement:** Near CTAs and after benefit claims
+
+### 6. Objection Handling
+
+**Common objections to address:**
+- Price/value concerns
+- "Will this work for my situation?"
+- Implementation difficulty
+- "What if it doesn't work?"
+
+**Address through:** FAQ sections, guarantees, comparison content, process transparency
+
+### 7. Friction Points
+
+**Look for:**
+- Too many form fields
+- Unclear next steps
+- Confusing navigation
+- Required information that shouldn't be required
+- Mobile experience issues
+- Long load times
+
+---
+
+## Output Format
+
+Structure your recommendations as:
+
+### Quick Wins (Implement Now)
+Easy changes with likely immediate impact.
+
+### High-Impact Changes (Prioritize)
+Bigger changes that require more effort but will significantly improve conversions.
+
+### Test Ideas
+Hypotheses worth A/B testing rather than assuming.
+
+### Copy Alternatives
+For key elements (headlines, CTAs), provide 2-3 alternatives with rationale.
+
+---
+
+## Page-Specific Frameworks
+
+### Homepage CRO
+- Clear positioning for cold visitors
+- Quick path to most common conversion
+- Handle both "ready to buy" and "still researching"
+
+### Landing Page CRO
+- Message match with traffic source
+- Single CTA (remove navigation if possible)
+- Complete argument on one page
+
+### Pricing Page CRO
+- Clear plan comparison
+- Recommended plan indication
+- Address "which plan is right for me?" anxiety
+
+### Feature Page CRO
+- Connect feature to benefit
+- Use cases and examples
+- Clear path to try/buy
+
+### Blog Post CRO
+- Contextual CTAs matching content topic
+- Inline CTAs at natural stopping points
+
+---
+
+## Experiment Ideas
+
+When recommending experiments, consider tests for:
+- Hero section (headline, visual, CTA)
+- Trust signals and social proof placement
+- Pricing presentation
+- Form optimization
+- Navigation and UX
+
+**For comprehensive experiment ideas by page type**: See [references/experiments.md](references/experiments.md)
+
+---
+
+## Task-Specific Questions
+
+1. What's your current conversion rate and goal?
+2. Where is traffic coming from?
+3. What does your signup/purchase flow look like after this page?
+4. Do you have user research, heatmaps, or session recordings?
+5. What have you already tried?
+
+---
+
+## Related Skills
+
+- **signup-flow-cro**: If the issue is in the signup process itself
+- **form-cro**: If forms on the page need optimization
+- **popup-cro**: If considering popups as part of the strategy
+- **copywriting**: If the page needs a complete copy rewrite
+- **ab-test-setup**: To properly test recommended changes
diff --git a/skills/page-cro/page-cro b/skills/page-cro/page-cro
new file mode 120000
index 0000000..3b0c13c
--- /dev/null
+++ b/skills/page-cro/page-cro
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/page-cro/
\ No newline at end of file
diff --git a/skills/page-cro/references/experiments.md b/skills/page-cro/references/experiments.md
new file mode 100644
index 0000000..e1b5b3b
--- /dev/null
+++ b/skills/page-cro/references/experiments.md
@@ -0,0 +1,239 @@
+# Page CRO Experiment Ideas
+
+Comprehensive list of A/B tests and experiments organized by page type.
+
+## Homepage Experiments
+
+### Hero Section
+
+| Test | Hypothesis |
+|------|------------|
+| Headline variations | Specific vs. abstract messaging |
+| Subheadline clarity | Add/refine to support headline |
+| CTA above fold | Include or exclude prominent CTA |
+| Hero visual format | Screenshot vs. GIF vs. illustration vs. video |
+| CTA button color | Test contrast and visibility |
+| CTA button text | "Start Free Trial" vs. "Get Started" vs. "See Demo" |
+| Interactive demo | Engage visitors immediately with product |
+
+### Trust & Social Proof
+
+| Test | Hypothesis |
+|------|------------|
+| Logo placement | Hero section vs. below fold |
+| Case study in hero | Show results immediately |
+| Trust badges | Add security, compliance, awards |
+| Social proof in headline | "Join 10,000+ teams" messaging |
+| Testimonial placement | Above fold vs. dedicated section |
+| Video testimonials | More engaging than text quotes |
+
+### Features & Content
+
+| Test | Hypothesis |
+|------|------------|
+| Feature presentation | Icons + descriptions vs. detailed sections |
+| Section ordering | Move high-value features up |
+| Secondary CTAs | Add/remove throughout page |
+| Benefit vs. feature focus | Lead with outcomes |
+| Comparison section | Show vs. competitors or status quo |
+
+### Navigation & UX
+
+| Test | Hypothesis |
+|------|------------|
+| Sticky navigation | Persistent nav with CTA |
+| Nav menu order | High-priority items at edges |
+| Nav CTA button | Add prominent button in nav |
+| Support widget | Live chat vs. AI chatbot |
+| Footer optimization | Clearer secondary conversions |
+| Exit intent popup | Capture abandoning visitors |
+
+---
+
+## Pricing Page Experiments
+
+### Price Presentation
+
+| Test | Hypothesis |
+|------|------------|
+| Annual vs. monthly display | Highlight savings or simplify |
+| Price points | $99 vs. $100 vs. $97 psychology |
+| "Most Popular" badge | Highlight target plan |
+| Number of tiers | 3 vs. 4 vs. 2 visible options |
+| Price anchoring | Order plans to anchor expectations |
+| Custom enterprise tier | Show vs. "Contact Sales" |
+
+### Pricing UX
+
+| Test | Hypothesis |
+|------|------------|
+| Pricing calculator | For usage-based pricing clarity |
+| Guided pricing flow | Multi-step wizard vs. comparison table |
+| Feature comparison format | Table vs. expandable sections |
+| Monthly/annual toggle | With savings highlighted |
+| Plan recommendation quiz | Help visitors choose |
+| Checkout flow length | Steps required after plan selection |
+
+### Objection Handling
+
+| Test | Hypothesis |
+|------|------------|
+| FAQ section | Address pricing objections |
+| ROI calculator | Demonstrate value vs. cost |
+| Money-back guarantee | Prominent placement |
+| Per-user breakdowns | Clarity for team plans |
+| Feature inclusion clarity | What's in each tier |
+| Competitor comparison | Side-by-side value comparison |
+
+### Trust Signals
+
+| Test | Hypothesis |
+|------|------------|
+| Value testimonials | Quotes about ROI specifically |
+| Customer logos | Near pricing section |
+| Review scores | G2/Capterra ratings |
+| Case study snippet | Specific pricing/value results |
+
+---
+
+## Demo Request Page Experiments
+
+### Form Optimization
+
+| Test | Hypothesis |
+|------|------------|
+| Field count | Fewer fields, higher completion |
+| Multi-step vs. single | Progress bar encouragement |
+| Form placement | Above fold vs. after content |
+| Phone field | Include vs. exclude |
+| Field enrichment | Hide fields you can auto-fill |
+| Form labels | Inside field vs. above |
+
+### Page Content
+
+| Test | Hypothesis |
+|------|------------|
+| Benefits above form | Reinforce value before ask |
+| Demo preview | Video/GIF showing demo experience |
+| "What You'll Learn" | Set expectations clearly |
+| Testimonials near form | Reduce friction at decision point |
+| FAQ below form | Address common objections |
+| Video vs. text | Format for explaining value |
+
+### CTA & Routing
+
+| Test | Hypothesis |
+|------|------------|
+| CTA text | "Book Your Demo" vs. "Schedule 15-Min Call" |
+| On-demand option | Instant demo alongside live option |
+| Personalized messaging | Based on visitor data/source |
+| Navigation removal | Reduce page distractions |
+| Calendar integration | Inline booking vs. external link |
+| Qualification routing | Self-serve for some, sales for others |
+
+---
+
+## Resource/Blog Page Experiments
+
+### Content CTAs
+
+| Test | Hypothesis |
+|------|------------|
+| Floating CTAs | Sticky CTA on blog posts |
+| CTA placement | Inline vs. end-of-post only |
+| Reading time display | Estimated reading time |
+| Related resources | End-of-article recommendations |
+| Gated vs. free | Content access strategy |
+| Content upgrades | Specific to article topic |
+
+### Resource Section
+
+| Test | Hypothesis |
+|------|------------|
+| Navigation/filtering | Easier to find relevant content |
+| Search functionality | Find specific resources |
+| Featured resources | Highlight best content |
+| Layout format | Grid vs. list view |
+| Topic bundles | Grouped resources by theme |
+| Download tracking | Gate some, track engagement |
+
+---
+
+## Landing Page Experiments
+
+### Message Match
+
+| Test | Hypothesis |
+|------|------------|
+| Headline matching | Match ad copy exactly |
+| Visual matching | Match ad creative |
+| Offer alignment | Same offer as ad promised |
+| Audience-specific pages | Different pages per segment |
+
+### Conversion Focus
+
+| Test | Hypothesis |
+|------|------------|
+| Navigation removal | Single-focus page |
+| CTA repetition | Multiple CTAs throughout |
+| Form vs. button | Direct capture vs. click-through |
+| Urgency/scarcity | If genuine, test messaging |
+| Social proof density | Amount and placement |
+| Video inclusion | Explain offer with video |
+
+### Page Length
+
+| Test | Hypothesis |
+|------|------------|
+| Short vs. long | Quick conversion vs. complete argument |
+| Above-fold only | Minimal scroll required |
+| Section ordering | Most important content first |
+| Footer removal | Eliminate navigation |
+
+---
+
+## Feature Page Experiments
+
+### Feature Presentation
+
+| Test | Hypothesis |
+|------|------------|
+| Demo/screenshot | Show feature in action |
+| Use case examples | How customers use it |
+| Before/after | Impact visualization |
+| Video walkthrough | Feature tour |
+| Interactive demo | Try feature without signup |
+
+### Conversion Path
+
+| Test | Hypothesis |
+|------|------------|
+| Trial CTA | Feature-specific trial offer |
+| Related features | Cross-link to other features |
+| Comparison | vs. competitors' version |
+| Pricing mention | Connect to relevant plan |
+| Case study link | Feature-specific success story |
+
+---
+
+## Cross-Page Experiments
+
+### Site-Wide Tests
+
+| Test | Hypothesis |
+|------|------------|
+| Chat widget | Impact on conversions |
+| Cookie consent UX | Minimize friction |
+| Page load speed | Performance vs. features |
+| Mobile experience | Responsive optimization |
+| Accessibility | Impact on conversion |
+| Personalization | Dynamic content by segment |
+
+### Navigation Tests
+
+| Test | Hypothesis |
+|------|------------|
+| Menu structure | Information architecture |
+| Search placement | Help visitors find content |
+| CTA in nav | Always-visible conversion path |
+| Breadcrumbs | Navigation clarity |
diff --git a/skills/paid-ads/SKILL.md b/skills/paid-ads/SKILL.md
new file mode 100644
index 0000000..95010ed
--- /dev/null
+++ b/skills/paid-ads/SKILL.md
@@ -0,0 +1,313 @@
+---
+name: paid-ads
+version: 1.0.0
+description: "When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X, or other ad platforms. Also use when the user mentions 'PPC,' 'paid media,' 'ad copy,' 'ad creative,' 'ROAS,' 'CPA,' 'ad campaign,' 'retargeting,' or 'audience targeting.' This skill covers campaign strategy, ad creation, audience targeting, and optimization."
+---
+
+# Paid Ads
+
+You are an expert performance marketer with direct access to ad platform accounts. Your goal is to help create, optimize, and scale paid advertising campaigns that drive efficient customer acquisition.
+
+## Before Starting
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Gather this context (ask if not provided):
+
+### 1. Campaign Goals
+- What's the primary objective? (Awareness, traffic, leads, sales, app installs)
+- What's the target CPA or ROAS?
+- What's the monthly/weekly budget?
+- Any constraints? (Brand guidelines, compliance, geographic)
+
+### 2. Product & Offer
+- What are you promoting? (Product, free trial, lead magnet, demo)
+- What's the landing page URL?
+- What makes this offer compelling?
+
+### 3. Audience
+- Who is the ideal customer?
+- What problem does your product solve for them?
+- What are they searching for or interested in?
+- Do you have existing customer data for lookalikes?
+
+### 4. Current State
+- Have you run ads before? What worked/didn't?
+- Do you have existing pixel/conversion data?
+- What's your current funnel conversion rate?
+
+---
+
+## Platform Selection Guide
+
+| Platform | Best For | Use When |
+|----------|----------|----------|
+| **Google Ads** | High-intent search traffic | People actively search for your solution |
+| **Meta** | Demand generation, visual products | Creating demand, strong creative assets |
+| **LinkedIn** | B2B, decision-makers | Job title/company targeting matters, higher price points |
+| **Twitter/X** | Tech audiences, thought leadership | Audience is active on X, timely content |
+| **TikTok** | Younger demographics, viral creative | Audience skews 18-34 and you can produce native video |
+
+---
+
+## Campaign Structure Best Practices
+
+### Account Organization
+
+```
+Account
+├── Campaign 1: [Objective] - [Audience/Product]
+│   ├── Ad Set 1: [Targeting variation]
+│   │   ├── Ad 1: [Creative variation A]
+│   │   ├── Ad 2: [Creative variation B]
+│   │   └── Ad 3: [Creative variation C]
+│   └── Ad Set 2: [Targeting variation]
+└── Campaign 2...
+```
+
+### Naming Conventions
+
+```
+[Platform]_[Objective]_[Audience]_[Offer]_[Date]
+
+Examples:
+META_Conv_Lookalike-Customers_FreeTrial_2024Q1
+GOOG_Search_Brand_Demo_Ongoing
+LI_LeadGen_CMOs-SaaS_Whitepaper_Mar24
+```
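+
+A tiny helper can keep these names consistent across the team. This is a sketch of one way to enforce the convention above (the type and field names are ours, not any platform's API):
+
+```ts
+// Builds "[Platform]_[Objective]_[Audience]_[Offer]_[Date]" campaign names.
+interface CampaignNameParts {
+  platform: string; // e.g. "META", "GOOG", "LI"
+  objective: string; // e.g. "Conv", "Search", "LeadGen"
+  audience: string;
+  offer: string;
+  date: string; // e.g. "2024Q1", "Ongoing"
+}
+
+const buildCampaignName = (p: CampaignNameParts): string =>
+  [p.platform, p.objective, p.audience, p.offer, p.date].join("_");
+```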
+
+### Budget Allocation
+
+**Testing phase (first 2-4 weeks):**
+- 70% to proven/safe campaigns
+- 30% to testing new audiences/creative
+
+**Scaling phase:**
+- Consolidate budget into winning combinations
+- Increase budgets 20-30% at a time
+- Wait 3-5 days between increases for algorithm learning
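+
+Because each increase compounds on the last, the 20-30% step rule grows budget faster than it looks. A self-contained sketch of the resulting schedule (the helper name and rounding are our choices):
+
+```ts
+// Projects a budget path at +25% per step (inside the 20-30% guideline),
+// with one step every 3-5 days so the algorithm can re-learn.
+const scalingPath = (start: number, steps: number, increase = 0.25): number[] => {
+  const path = [start];
+  for (let i = 0; i < steps; i++) {
+    path.push(Math.round(path[path.length - 1] * (1 + increase)));
+  }
+  return path;
+};
+
+// scalingPath(100, 3) → [100, 125, 156, 195]: nearly 2x in three steps
+```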
+
+---
+
+## Ad Copy Frameworks
+
+### Key Formulas
+
+**Problem-Agitate-Solve (PAS):**
+> [Problem] → [Agitate the pain] → [Introduce solution] → [CTA]
+
+**Before-After-Bridge (BAB):**
+> [Current painful state] → [Desired future state] → [Your product as bridge]
+
+**Social Proof Lead:**
+> [Impressive stat or testimonial] → [What you do] → [CTA]
+
+**For detailed templates and headline formulas**: See [references/ad-copy-templates.md](references/ad-copy-templates.md)
+
+---
+
+## Audience Targeting Overview
+
+### Platform Strengths
+
+| Platform | Key Targeting | Best Signals |
+|----------|---------------|--------------|
+| Google | Keywords, search intent | What they're searching |
+| Meta | Interests, behaviors, lookalikes | Engagement patterns |
+| LinkedIn | Job titles, companies, industries | Professional identity |
+
+### Key Concepts
+
+- **Lookalikes**: Base on best customers (by LTV), not all customers
+- **Retargeting**: Segment by funnel stage (visitors vs. cart abandoners)
+- **Exclusions**: Always exclude existing customers and recent converters
+
+**For detailed targeting strategies by platform**: See [references/audience-targeting.md](references/audience-targeting.md)
+
+---
+
+## Creative Best Practices
+
+### Image Ads
+- Clear product screenshots showing UI
+- Before/after comparisons
+- Stats and numbers as focal point
+- Human faces (real, not stock)
+- Bold, readable text overlay (keep under 20%)
+
+### Video Ads Structure (15-30 sec)
+1. Hook (0-3 sec): Pattern interrupt, question, or bold statement
+2. Problem (3-8 sec): Relatable pain point
+3. Solution (8-20 sec): Show product/benefit
+4. CTA (20-30 sec): Clear next step
+
+**Production tips:**
+- Captions always (85% watch without sound)
+- Vertical for Stories/Reels, square for feed
+- Native feel outperforms polished
+- First 3 seconds determine if they watch
+
+### Creative Testing Hierarchy
+1. Concept/angle (biggest impact)
+2. Hook/headline
+3. Visual style
+4. Body copy
+5. CTA
+
+---
+
+## Campaign Optimization
+
+### Key Metrics by Objective
+
+| Objective | Primary Metrics |
+|-----------|-----------------|
+| Awareness | CPM, Reach, Video view rate |
+| Consideration | CTR, CPC, Time on site |
+| Conversion | CPA, ROAS, Conversion rate |
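+
+These conversion metrics are simple ratios; a minimal sketch for sanity-checking platform dashboards (the function names are ours):
+
+```ts
+// Core paid-media ratios.
+const cpa = (spend: number, conversions: number): number => spend / conversions;
+const roas = (revenue: number, spend: number): number => revenue / spend;
+const conversionRate = (conversions: number, clicks: number): number =>
+  conversions / clicks;
+
+// $5,000 spend, 50 conversions, $20,000 revenue → CPA 100, ROAS 4
+```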
+
+### Optimization Levers
+
+**If CPA is too high:**
+1. Check landing page (is the problem post-click?)
+2. Tighten audience targeting
+3. Test new creative angles
+4. Improve ad relevance/quality score
+5. Adjust bid strategy
+
+**If CTR is low:**
+- Creative isn't resonating → test new hooks/angles
+- Audience mismatch → refine targeting
+- Ad fatigue → refresh creative
+
+**If CPM is high:**
+- Audience too narrow → expand targeting
+- High competition → try different placements
+- Low relevance score → improve creative fit
+
+### Bid Strategy Progression
+1. Start with manual or cost caps
+2. Gather conversion data (50+ conversions)
+3. Switch to automated with targets based on historical data
+4. Monitor and adjust targets based on results
+
+---
+
+## Retargeting Strategies
+
+### Funnel-Based Approach
+
+| Funnel Stage | Audience | Message | Goal |
+|--------------|----------|---------|------|
+| Top | Blog readers, video viewers | Educational, social proof | Move to consideration |
+| Middle | Pricing/feature page visitors | Case studies, demos | Move to decision |
+| Bottom | Cart abandoners, trial users | Urgency, objection handling | Convert |
+
+### Retargeting Windows
+
+| Stage | Window | Frequency Cap |
+|-------|--------|---------------|
+| Hot (cart/trial) | 1-7 days | Higher OK |
+| Warm (key pages) | 7-30 days | 3-5x/week |
+| Cold (any visit) | 30-90 days | 1-2x/week |
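+
+The windows above reduce to a recency classifier. A hypothetical sketch (real segmentation also keys off which pages were visited, not recency alone):
+
+```ts
+type RetargetingStage = "hot" | "warm" | "cold" | "expired";
+
+// Buckets a visitor by days since last visit, mirroring the table above.
+const retargetingStage = (daysSinceVisit: number): RetargetingStage => {
+  if (daysSinceVisit <= 7) return "hot";
+  if (daysSinceVisit <= 30) return "warm";
+  if (daysSinceVisit <= 90) return "cold";
+  return "expired";
+};
+```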
+
+### Exclusions to Set Up
+- Existing customers (unless upsell)
+- Recent converters (7-14 day window)
+- Bounced visitors (<10 sec)
+- Irrelevant pages (careers, support)
+
+---
+
+## Reporting & Analysis
+
+### Weekly Review
+- Spend vs. budget pacing
+- CPA/ROAS vs. targets
+- Top and bottom performing ads
+- Audience performance breakdown
+- Frequency check (fatigue risk)
+- Landing page conversion rate
+
+### Attribution Considerations
+- Platform-reported conversions are typically inflated (each platform claims full credit)
+- Use UTM parameters consistently
+- Compare platform data to GA4
+- Look at blended CAC, not just platform CPA
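+
+Blended CAC pools spend across every channel against total new customers, so double-counted platform conversions cancel out. A minimal sketch (field names are ours):
+
+```ts
+// Blended CAC: total spend across all channels / total new customers,
+// regardless of which platform claims credit for each conversion.
+const blendedCac = (
+  spendByChannel: Record<string, number>,
+  newCustomers: number,
+): number => {
+  const totalSpend = Object.values(spendByChannel).reduce((a, b) => a + b, 0);
+  return totalSpend / newCustomers;
+};
+
+// blendedCac({ google: 6000, meta: 4000 }, 100) → 100
+```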
+
+---
+
+## Platform Setup
+
+Before launching campaigns, ensure proper tracking and account setup.
+
+**For complete setup checklists by platform**: See [references/platform-setup-checklists.md](references/platform-setup-checklists.md)
+
+### Universal Pre-Launch Checklist
+- [ ] Conversion tracking tested with real conversion
+- [ ] Landing page loads fast (<3 sec)
+- [ ] Landing page mobile-friendly
+- [ ] UTM parameters working
+- [ ] Budget set correctly
+- [ ] Targeting matches intended audience
+
+---
+
+## Common Mistakes to Avoid
+
+### Strategy
+- Launching without conversion tracking
+- Too many campaigns (fragmenting budget)
+- Not giving algorithms enough learning time
+- Optimizing for wrong metric
+
+### Targeting
+- Audiences too narrow or too broad
+- Not excluding existing customers
+- Overlapping audiences competing
+
+### Creative
+- Only one ad per ad set
+- Not refreshing creative (fatigue)
+- Mismatch between ad and landing page
+
+### Budget
+- Spreading too thin across campaigns
+- Making big budget changes (disrupts learning)
+- Stopping campaigns during learning phase
+
+---
+
+## Task-Specific Questions
+
+1. What platform(s) are you currently running or want to start with?
+2. What's your monthly ad budget?
+3. What does a successful conversion look like (and what's it worth)?
+4. Do you have existing creative assets or need to create them?
+5. What landing page will ads point to?
+6. Do you have pixel/conversion tracking set up?
+
+---
+
+## Tool Integrations
+
+For implementation, see the [tools registry](../../tools/REGISTRY.md). Key advertising platforms:
+
+| Platform | Best For | MCP | Guide |
+|----------|----------|:---:|-------|
+| **Google Ads** | Search intent, high-intent traffic | ✓ | [google-ads.md](../../tools/integrations/google-ads.md) |
+| **Meta Ads** | Demand gen, visual products, B2C | - | [meta-ads.md](../../tools/integrations/meta-ads.md) |
+| **LinkedIn Ads** | B2B, job title targeting | - | [linkedin-ads.md](../../tools/integrations/linkedin-ads.md) |
+| **TikTok Ads** | Younger demographics, video | - | [tiktok-ads.md](../../tools/integrations/tiktok-ads.md) |
+
+For tracking, see also: [ga4.md](../../tools/integrations/ga4.md), [segment.md](../../tools/integrations/segment.md)
+
+---
+
+## Related Skills
+
+- **copywriting**: For landing page copy that converts ad traffic
+- **analytics-tracking**: For proper conversion tracking setup
+- **ab-test-setup**: For landing page testing to improve ROAS
+- **page-cro**: For optimizing post-click conversion rates
diff --git a/skills/paid-ads/paid-ads b/skills/paid-ads/paid-ads
new file mode 120000
index 0000000..4b9635c
--- /dev/null
+++ b/skills/paid-ads/paid-ads
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/paid-ads/
\ No newline at end of file
diff --git a/skills/paid-ads/references/ad-copy-templates.md b/skills/paid-ads/references/ad-copy-templates.md
new file mode 100644
index 0000000..1b7620b
--- /dev/null
+++ b/skills/paid-ads/references/ad-copy-templates.md
@@ -0,0 +1,200 @@
+# Ad Copy Templates Reference
+
+Detailed formulas and templates for writing high-converting ad copy.
+
+## Primary Text Formulas
+
+### Problem-Agitate-Solve (PAS)
+
+```
+[Problem statement]
+[Agitate the pain]
+[Introduce solution]
+[CTA]
+```
+
+**Example:**
+> Spending hours on manual reporting every week?
+> While you're buried in spreadsheets, your competitors are making decisions.
+> [Product] automates your reports in minutes.
+> Start your free trial →
+
+---
+
+### Before-After-Bridge (BAB)
+
+```
+[Current painful state]
+[Desired future state]
+[Your product as the bridge]
+```
+
+**Example:**
+> Before: Chasing down approvals across email, Slack, and spreadsheets.
+> After: Every approval tracked, automated, and on time.
+> [Product] connects your tools and keeps projects moving.
+
+---
+
+### Social Proof Lead
+
+```
+[Impressive stat or testimonial]
+[What you do]
+[CTA]
+```
+
+**Example:**
+> "We cut our reporting time by 75%." — Sarah K., Marketing Director
+> [Product] automates the reports you hate building.
+> See how it works →
+
+---
+
+### Feature-Benefit Bridge
+
+```
+[Feature]
+[So that...]
+[Which means...]
+```
+
+**Example:**
+> Real-time collaboration on documents
+> So your team always works from the latest version
+> Which means no more version confusion or lost work
+
+---
+
+### Direct Response
+
+```
+[Bold claim/outcome]
+[Proof point]
+[CTA with urgency if genuine]
+```
+
+**Example:**
+> Cut your reporting time by 80%
+> Join 5,000+ marketing teams already using [Product]
+> Start free → First month 50% off
+
+---
+
+## Headline Formulas
+
+### For Search Ads
+
+| Formula | Example |
+|---------|---------|
+| [Keyword] + [Benefit] | "Project Management That Teams Actually Use" |
+| [Action] + [Outcome] | "Automate Reports \| Save 10 Hours Weekly" |
+| [Question] | "Tired of Manual Data Entry?" |
+| [Number] + [Benefit] | "500+ Teams Trust [Product] for [Outcome]" |
+| [Keyword] + [Differentiator] | "CRM Built for Small Teams" |
+| [Price/Offer] + [Keyword] | "Free Project Management \| No Credit Card" |
+
+### For Social Ads
+
+| Type | Example |
+|------|---------|
+| Outcome hook | "How we 3x'd our conversion rate" |
+| Curiosity hook | "The reporting hack no one talks about" |
+| Contrarian hook | "Why we stopped using [common tool]" |
+| Specificity hook | "The exact template we use for..." |
+| Question hook | "What if you could cut your admin time in half?" |
+| Number hook | "7 ways to improve your workflow today" |
+| Story hook | "We almost gave up. Then we found..." |
+
+---
+
+## CTA Variations
+
+### Soft CTAs (awareness/consideration)
+
+Best for: Top of funnel, cold audiences, complex products
+
+- Learn More
+- See How It Works
+- Watch Demo
+- Get the Guide
+- Explore Features
+- See Examples
+- Read the Case Study
+
+### Hard CTAs (conversion)
+
+Best for: Bottom of funnel, warm audiences, clear offers
+
+- Start Free Trial
+- Get Started Free
+- Book a Demo
+- Claim Your Discount
+- Buy Now
+- Sign Up Free
+- Get Instant Access
+
+### Urgency CTAs (use when genuine)
+
+Best for: Limited-time offers, scarcity situations
+
+- Limited Time: 30% Off
+- Offer Ends [Date]
+- Only X Spots Left
+- Last Chance
+- Early Bird Pricing Ends Soon
+
+### Action-Oriented CTAs
+
+Best for: Active voice, clear next step
+
+- Start Saving Time Today
+- Get Your Free Report
+- See Your Score
+- Calculate Your ROI
+- Build Your First Project
+
+---
+
+## Platform-Specific Copy Guidelines
+
+### Google Search Ads
+
+- **Headline limits:** 30 characters each (up to 15 headlines)
+- **Description limits:** 90 characters each (up to 4 descriptions)
+- Include keywords naturally
+- Use all available headline slots
+- Include numbers and stats when possible
+- Test dynamic keyword insertion
+
+### Meta Ads (Facebook/Instagram)
+
+- **Primary text:** 125 characters visible (can be longer, gets truncated)
+- **Headline:** 40 characters recommended
+- Front-load the hook (first line matters most)
+- Emojis can work but test
+- Questions perform well
+- Keep image text minimal (Meta retired the 20% rule, but text-heavy images still tend to underperform)
+
+### LinkedIn Ads
+
+- **Intro text:** 600 characters max (150 recommended)
+- **Headline:** 200 characters max (70 recommended)
+- Professional tone (but not boring)
+- Specific job outcomes resonate
+- Stats and social proof important
+- Avoid consumer-style hype
+
+---
+
+## Copy Testing Priority
+
+When testing ad copy, focus on these elements in order of impact:
+
+1. **Hook/angle** (biggest impact on performance)
+2. **Headline**
+3. **Primary benefit**
+4. **CTA**
+5. **Supporting proof points**
+
+Test one element at a time for clean data.
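The character limits above can be checked programmatically before uploading copy. A small sketch (limits mirror the figures in this reference; platforms revise them occasionally, so verify against current specs):

```python
# Limits from the platform guidelines above; re-check before relying on them.
LIMITS = {
    "google_headline": 30,
    "google_description": 90,
    "meta_headline": 40,       # recommended display length, not a hard cap
    "linkedin_intro": 600,
    "linkedin_headline": 200,
}

def check_copy(field: str, text: str) -> tuple[bool, int]:
    """Return (fits, overflow_chars) for one ad copy field."""
    overflow = max(0, len(text) - LIMITS[field])
    return overflow == 0, overflow

print(check_copy("google_headline", "Automate Reports in Minutes"))  # (True, 0)
print(check_copy("google_description", "x" * 95))                    # (False, 5)
```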
diff --git a/skills/paid-ads/references/audience-targeting.md b/skills/paid-ads/references/audience-targeting.md
new file mode 100644
index 0000000..a0f5695
--- /dev/null
+++ b/skills/paid-ads/references/audience-targeting.md
@@ -0,0 +1,234 @@
+# Audience Targeting Reference
+
+Detailed targeting strategies for each major ad platform.
+
+## Google Ads Audiences
+
+### Search Campaign Targeting
+
+**Keywords:**
+- Exact match: [keyword] — most precise, lower volume
+- Phrase match: "keyword" — moderate precision and volume
+- Broad match: keyword — highest volume, use with smart bidding
+
+**Audience layering:**
+- Add audiences in "observation" mode first
+- Analyze performance by audience
+- Switch to "targeting" mode for high performers
+
+**RLSA (Remarketing Lists for Search Ads):**
+- Bid higher on past visitors searching your terms
+- Show different ads to returning searchers
+- Exclude converters from prospecting campaigns
+
+### Display/YouTube Targeting
+
+**Custom intent audiences:**
+- Based on recent search behavior
+- Create from your converting keywords
+- High intent, good for prospecting
+
+**In-market audiences:**
+- People actively researching solutions
+- Pre-built by Google
+- Layer with demographics for precision
+
+**Affinity audiences:**
+- Based on interests and habits
+- Better for awareness
+- Broad but can exclude irrelevant
+
+**Customer match:**
+- Upload email lists
+- Retarget existing customers
+- Create lookalikes from best customers
+
+**Similar/lookalike audiences:**
+- Based on your customer match lists
+- Expand reach while maintaining relevance
+- Best when source list is high-quality customers
+
+---
+
+## Meta Audiences
+
+### Core Audiences (Interest/Demographic)
+
+**Interest targeting tips:**
+- Layer interests with AND logic for precision
+- Use Audience Insights to research interests
+- Start broad, let algorithm optimize
+- Exclude existing customers always
+
+**Demographic targeting:**
+- Age and gender (if product-specific)
+- Location (down to zip/postal code)
+- Language
+- Education and work (Meta has restricted much of this data)
+
+**Behavior targeting:**
+- Purchase behavior
+- Device usage
+- Travel patterns
+- Life events
+
+### Custom Audiences
+
+**Website visitors:**
+- All visitors (last 180 days max)
+- Specific page visitors
+- Time on site thresholds
+- Frequency (visited X times)
+
+**Customer list:**
+- Upload emails/phone numbers
+- Match rate typically 30-70%
+- Refresh regularly for accuracy
+
+**Engagement audiences:**
+- Video viewers (25%, 50%, 75%, 95%)
+- Page/profile engagers
+- Form openers
+- Instagram engagers
+
+**App activity:**
+- App installers
+- In-app events
+- Purchase events
+
+### Lookalike Audiences
+
+**Source audience quality matters:**
+- Use high-LTV customers, not all customers
+- Purchasers > leads > all visitors
+- Minimum 100 source users, ideally 1,000+
+
+**Size recommendations:**
+- 1% — most similar, smallest reach
+- 1-3% — good balance for most
+- 3-5% — broader, good for scale
+- 5-10% — very broad, awareness only
+
+**Layering strategies:**
+- Lookalike + interest = more precision early
+- Test lookalike-only as you scale
+- Exclude the source audience
+
+---
+
+## LinkedIn Audiences
+
+### Job-Based Targeting
+
+**Job titles:**
+- Be specific (CMO vs. "Marketing")
+- LinkedIn normalizes titles, but verify
+- Stack related titles
+- Exclude irrelevant titles
+
+**Job functions:**
+- Broader than titles
+- Combine with seniority level
+- Good for awareness campaigns
+
+**Seniority levels:**
+- Entry, Senior, Manager, Director, VP, CXO, Partner
+- Layer with function for precision
+
+**Skills:**
+- Self-reported, less reliable
+- Good for technical roles
+- Use as expansion layer
+
+### Company-Based Targeting
+
+**Company size:**
+- 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5000+
+- Key filter for B2B
+
+**Industry:**
+- Based on company classification
+- Can be broad, layer with other criteria
+
+**Company names (ABM):**
+- Upload target account list
+- Minimum 300 companies recommended
+- Match rate varies
+
+**Company growth rate:**
+- Hiring rapidly = budget available
+- Good signal for timing
+
+### High-Performing Combinations
+
+| Use Case | Targeting Combination |
+|----------|----------------------|
+| Enterprise sales | Company size 1000+ + VP/CXO + Industry |
+| SMB sales | Company size 11-200 + Manager/Director + Function |
+| Developer tools | Skills + Job function + Company type |
+| ABM campaigns | Company list + Decision-maker titles |
+| Broad awareness | Industry + Seniority + Geography |
+
+---
+
+## Twitter/X Audiences
+
+### Targeting Options
+- Follower lookalikes (users similar to the followers of a chosen account)
+- Interest categories
+- Keywords (in tweets)
+- Conversation topics
+- Events
+- Tailored audiences (your lists)
+
+### Best Practices
+- Follower lookalikes of relevant accounts work well
+- Keyword targeting catches active conversations
+- Lower CPMs than LinkedIn/Meta
+- Less precise, better for awareness
+
+---
+
+## TikTok Audiences
+
+### Targeting Options
+- Demographics (age, gender, location)
+- Interests (TikTok's categories)
+- Behaviors (video interactions)
+- Device (iOS/Android, connection type)
+- Custom audiences (pixel, customer file)
+- Lookalike audiences
+
+### Best Practices
+- Younger skew (18-34 primarily)
+- Interest targeting is broad
+- Creative matters more than targeting
+- Let algorithm optimize with broad targeting
+
+---
+
+## Audience Size Guidelines
+
+| Platform | Minimum Recommended | Ideal Range |
+|----------|-------------------|-------------|
+| Google Search | 1,000+ searches/mo | 5,000-50,000 |
+| Google Display | 100,000+ | 500K-5M |
+| Meta | 100,000+ | 500K-10M |
+| LinkedIn | 50,000+ | 100K-500K |
+| Twitter/X | 50,000+ | 100K-1M |
+| TikTok | 100,000+ | 1M+ |
+
+- Too narrow = expensive, slow learning
+- Too broad = wasted spend, poor relevance
+
+---
+
+## Exclusion Strategy
+
+Always exclude:
+- Existing customers (unless upsell)
+- Recent converters (7-14 days)
+- Bounced visitors (<10 sec)
+- Employees (by company or email list)
+- Irrelevant page visitors (careers, support)
+- Competitors (if identifiable)
diff --git a/skills/paid-ads/references/platform-setup-checklists.md b/skills/paid-ads/references/platform-setup-checklists.md
new file mode 100644
index 0000000..16fe2a8
--- /dev/null
+++ b/skills/paid-ads/references/platform-setup-checklists.md
@@ -0,0 +1,269 @@
+# Platform Setup Checklists
+
+Complete setup checklists for major ad platforms.
+
+## Google Ads Setup
+
+### Account Foundation
+
+- [ ] Google Ads account created and verified
+- [ ] Billing information added
+- [ ] Time zone and currency set correctly
+- [ ] Account access granted to team members
+
+### Conversion Tracking
+
+- [ ] Google tag installed on all pages
+- [ ] Conversion actions created (purchase, lead, signup)
+- [ ] Conversion values assigned (if applicable)
+- [ ] Enhanced conversions enabled
+- [ ] Test conversions firing correctly
+- [ ] Import conversions from GA4 (optional)
+
+### Analytics Integration
+
+- [ ] Google Analytics 4 linked
+- [ ] Auto-tagging enabled
+- [ ] GA4 audiences available in Google Ads
+- [ ] Cross-domain tracking set up (if multiple domains)
+
+### Audience Setup
+
+- [ ] Remarketing tag verified
+- [ ] Website visitor audiences created:
+ - All visitors (180 days)
+ - Key page visitors (pricing, demo, features)
+ - Converters (for exclusion)
+- [ ] Customer match lists uploaded
+- [ ] Similar audiences enabled
+
+### Campaign Readiness
+
+- [ ] Negative keyword lists created:
+ - Universal negatives (free, jobs, careers, reviews, complaints)
+ - Competitor negatives (if needed)
+ - Irrelevant industry terms
+- [ ] Location targeting set (include/exclude)
+- [ ] Language targeting set
+- [ ] Ad schedule configured (if B2B, business hours)
+- [ ] Device bid adjustments considered
+
+### Ad Extensions
+
+- [ ] Sitelinks (4-6 relevant pages)
+- [ ] Callouts (key benefits, offers)
+- [ ] Structured snippets (features, types, services)
+- [ ] Call extension (if phone leads valuable)
+- [ ] Lead form extension (if using)
+- [ ] Price extensions (if applicable)
+- [ ] Image extensions (where available)
+
+### Brand Protection
+
+- [ ] Brand campaign running (protect branded terms)
+- [ ] Competitor campaigns considered
+- [ ] Brand terms in negative lists for non-brand campaigns
+
+---
+
+## Meta Ads Setup
+
+### Business Manager Foundation
+
+- [ ] Business Manager created
+- [ ] Business verified (if running certain ad types)
+- [ ] Ad account created within Business Manager
+- [ ] Payment method added
+- [ ] Team access configured with proper roles
+
+### Pixel & Tracking
+
+- [ ] Meta Pixel installed on all pages
+- [ ] Standard events configured:
+ - PageView (automatic)
+ - ViewContent (product/feature pages)
+ - Lead (form submissions)
+ - Purchase (conversions)
+ - AddToCart (if e-commerce)
+ - InitiateCheckout (if e-commerce)
+- [ ] Conversions API (CAPI) set up for server-side tracking
+- [ ] Event Match Quality score > 6
+- [ ] Test events in Events Manager
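When setting up the Conversions API, user identifiers must be normalized and SHA-256 hashed before sending. A minimal sketch of building one event payload in Python; field names follow Meta's CAPI schema, but verify against the current API version before relying on them:

```python
import hashlib
import time

def hash_identifier(value: str) -> str:
    """Meta expects identifiers normalized (trimmed, lowercased), then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_capi_event(email: str, event_name: str, event_url: str) -> dict:
    """Build a single Conversions API event; batches are POSTed to the
    /{pixel_id}/events Graph API endpoint."""
    return {
        "event_name": event_name,           # e.g. "Lead", "Purchase"
        "event_time": int(time.time()),
        "action_source": "website",
        "event_source_url": event_url,
        "user_data": {"em": [hash_identifier(email)]},
    }

event = build_capi_event(" User@Example.COM ", "Lead", "https://example.com/signup")
print(event["event_name"], event["user_data"]["em"][0][:12])
```

If the same events flow through both the browser pixel and CAPI, include a shared `event_id` so Meta can deduplicate; it is omitted here for brevity.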
+
+### Domain & Aggregated Events
+
+- [ ] Domain verified in Business Manager
+- [ ] Aggregated Event Measurement configured
+- [ ] Top 8 events prioritized in order of importance
+- [ ] Web events prioritized for iOS 14+ tracking
+
+### Audience Setup
+
+- [ ] Custom audiences created:
+ - Website visitors (all, 30/60/90/180 days)
+ - Key page visitors
+ - Video viewers (25%, 50%, 75%, 95%)
+ - Page/Instagram engagers
+ - Customer list uploaded
+- [ ] Lookalike audiences created (1%, 1-3%)
+- [ ] Saved audiences for common targeting
+
+### Catalog (E-commerce)
+
+- [ ] Product catalog connected
+- [ ] Product feed updating correctly
+- [ ] Catalog sales campaigns enabled
+- [ ] Dynamic product ads configured
+
+### Creative Assets
+
+- [ ] Images in correct sizes:
+ - Feed: 1080x1080 (1:1)
+ - Stories/Reels: 1080x1920 (9:16)
+ - Landscape: 1200x628 (1.91:1)
+- [ ] Videos in correct formats
+- [ ] Ad copy variations ready
+- [ ] UTM parameters in all destination URLs
+
+### Compliance
+
+- [ ] Special Ad Categories declared (if housing, credit, employment, politics)
+- [ ] Landing page complies with Meta policies
+- [ ] No prohibited content in ads
+
+---
+
+## LinkedIn Ads Setup
+
+### Campaign Manager Foundation
+
+- [ ] Campaign Manager account created
+- [ ] Company Page connected
+- [ ] Billing information added
+- [ ] Team access configured
+
+### Insight Tag & Tracking
+
+- [ ] LinkedIn Insight Tag installed on all pages
+- [ ] Tag verified and firing
+- [ ] Conversion tracking configured:
+ - URL-based conversions
+ - Event-specific conversions
+- [ ] Conversion values set (if applicable)
+
+### Audience Setup
+
+- [ ] Matched Audiences created:
+ - Website retargeting audiences
+ - Company list uploaded (for ABM)
+ - Contact list uploaded
+- [ ] Lookalike audiences created
+- [ ] Saved audiences for common targeting
+
+### Lead Gen Forms (if using)
+
+- [ ] Lead gen form templates created
+- [ ] Form fields selected (minimize for conversion)
+- [ ] Privacy policy URL added
+- [ ] Thank you message configured
+- [ ] CRM integration set up (or CSV export process)
+
+### Document Ads (if using)
+
+- [ ] Documents uploaded (PDF, PowerPoint)
+- [ ] Gating configured (full gate or preview)
+- [ ] Lead gen form connected
+
+### Creative Assets
+
+- [ ] Single image ads: 1200x627 (1.91:1) or 1080x1080 (1:1)
+- [ ] Carousel images ready
+- [ ] Video specs met (if using)
+- [ ] Ad copy within character limits:
+ - Intro text: 600 max, 150 recommended
+ - Headline: 200 max, 70 recommended
+
+### Budget Considerations
+
+- [ ] Budget realistic for LinkedIn CPCs ($8-15+ typical)
+- [ ] Audience size validated (50K+ recommended)
+- [ ] Daily vs. lifetime budget decided
+- [ ] Bid strategy selected
+
+---
+
+## Twitter/X Ads Setup
+
+### Account Foundation
+
+- [ ] Ads account created
+- [ ] Payment method added
+- [ ] Account verified (if required)
+
+### Tracking
+
+- [ ] Twitter Pixel installed
+- [ ] Conversion events created
+- [ ] Website tag verified
+
+### Audience Setup
+
+- [ ] Tailored audiences created:
+ - Website visitors
+ - Customer lists
+- [ ] Follower lookalikes identified
+- [ ] Interest and keyword targets researched
+
+### Creative
+
+- [ ] Tweet copy within 280 characters
+- [ ] Images: 1200x675 (1.91:1) or 1200x1200 (1:1)
+- [ ] Video specs met (if using)
+- [ ] Cards configured (website, app, etc.)
+
+---
+
+## TikTok Ads Setup
+
+### Account Foundation
+
+- [ ] TikTok Ads Manager account created
+- [ ] Business verification completed
+- [ ] Payment method added
+
+### Pixel & Tracking
+
+- [ ] TikTok Pixel installed
+- [ ] Events configured (ViewContent, Purchase, etc.)
+- [ ] Events API set up (recommended)
+
+### Audience Setup
+
+- [ ] Custom audiences created
+- [ ] Lookalike audiences created
+- [ ] Interest categories identified
+
+### Creative
+
+- [ ] Vertical video (9:16) ready
+- [ ] Native-feeling content (not too polished)
+- [ ] First 3 seconds deliver a compelling hook
+- [ ] Captions added (most watch without sound)
+- [ ] Music/sounds selected (licensed if needed)
+
+---
+
+## Universal Pre-Launch Checklist
+
+Before launching any campaign:
+
+- [ ] Conversion tracking tested with real conversion
+- [ ] Landing page loads fast (<3 sec)
+- [ ] Landing page mobile-friendly
+- [ ] UTM parameters working
+- [ ] Budget set correctly (daily vs. lifetime)
+- [ ] Start/end dates correct
+- [ ] Targeting matches intended audience
+- [ ] Ad creative approved
+- [ ] Team notified of launch
+- [ ] Reporting dashboard ready
diff --git a/skills/paywall-upgrade-cro/SKILL.md b/skills/paywall-upgrade-cro/SKILL.md
new file mode 100644
index 0000000..2b18911
--- /dev/null
+++ b/skills/paywall-upgrade-cro/SKILL.md
@@ -0,0 +1,225 @@
+---
+name: paywall-upgrade-cro
+version: 1.0.0
+description: When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use when the user mentions "paywall," "upgrade screen," "upgrade modal," "upsell," "feature gate," "convert free to paid," "freemium conversion," "trial expiration screen," "limit reached screen," "plan upgrade prompt," or "in-app pricing." Distinct from public pricing pages (see page-cro) — this skill focuses on in-product upgrade moments where the user has already experienced value.
+---
+
+# Paywall and Upgrade Screen CRO
+
+You are an expert in in-app paywalls and upgrade flows. Your goal is to convert free users to paid, or upgrade users to higher tiers, at moments when they've experienced enough value to justify the commitment.
+
+## Initial Assessment
+
+**Check for product marketing context first:**
+If `.claude/product-marketing-context.md` exists, read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
+
+Before providing recommendations, understand:
+
+1. **Upgrade Context** - Freemium → Paid? Trial → Paid? Tier upgrade? Feature upsell? Usage limit?
+
+2. **Product Model** - What's free? What's behind paywall? What triggers prompts? Current conversion rate?
+
+3. **User Journey** - When does this appear? What have they experienced? What are they trying to do?
+
+---
+
+## Core Principles
+
+### 1. Value Before Ask
+- User should have experienced real value first
+- Upgrade should feel like natural next step
+- Timing: After "aha moment," not before
+
+### 2. Show, Don't Just Tell
+- Demonstrate the value of paid features
+- Preview what they're missing
+- Make the upgrade feel tangible
+
+### 3. Friction-Free Path
+- Easy to upgrade when ready
+- Don't make them hunt for pricing
+
+### 4. Respect the No
+- Don't trap or pressure
+- Make it easy to continue free
+- Maintain trust for future conversion
+
+---
+
+## Paywall Trigger Points
+
+### Feature Gates
+When user clicks a paid-only feature:
+- Clear explanation of why it's paid
+- Show what the feature does
+- Quick path to unlock
+- Option to continue without
+
+### Usage Limits
+When user hits a limit:
+- Clear indication of limit reached
+- Show what upgrading provides
+- Don't block abruptly
+
+### Trial Expiration
+When trial is ending:
+- Early warnings (7, 3, 1 day)
+- Clear "what happens" on expiration
+- Summarize value received
+
+### Time-Based Prompts
+After X days of free use:
+- Gentle upgrade reminder
+- Highlight unused paid features
+- Easy to dismiss
+
+---
+
+## Paywall Screen Components
+
+1. **Headline** - Focus on what they get: "Unlock [Feature] to [Benefit]"
+
+2. **Value Demonstration** - Preview, before/after, "With Pro you could..."
+
+3. **Feature Comparison** - Highlight key differences, current plan marked
+
+4. **Pricing** - Clear, simple, annual vs. monthly options
+
+5. **Social Proof** - Customer quotes, "X teams use this"
+
+6. **CTA** - Specific and value-oriented: "Start Getting [Benefit]"
+
+7. **Escape Hatch** - Clear "Not now" or "Continue with Free"
+
+---
+
+## Specific Paywall Types
+
+### Feature Lock Paywall
+```
+[Lock Icon]
+This feature is available on Pro
+
+[Feature preview/screenshot]
+
+[Feature name] helps you [benefit]:
+• [Capability]
+• [Capability]
+
+[Upgrade to Pro - $X/mo]
+[Maybe Later]
+```
+
+### Usage Limit Paywall
+```
+You've reached your free limit
+
+[Progress bar at 100%]
+
+Free: 3 projects | Pro: Unlimited
+
+[Upgrade to Pro] [Delete a project]
+```
+
+### Trial Expiration Paywall
+```
+Your trial ends in 3 days
+
+What you'll lose:
+• [Feature used]
+• [Data created]
+
+What you've accomplished:
+• Created X projects
+
+[Continue with Pro]
+[Remind me later] [Downgrade]
+```
+
+---
+
+## Timing and Frequency
+
+### When to Show
+- After value moment, before frustration
+- After activation/aha moment
+- When hitting genuine limits
+
+### When NOT to Show
+- During onboarding (too early)
+- When they're in a flow
+- Repeatedly after dismissal
+
+### Frequency Rules
+- Limit per session
+- Cool-down after dismiss (days, not hours)
+- Track annoyance signals
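The frequency rules above reduce to a little per-user state. A minimal sketch (the cap and cool-down thresholds are illustrative):

```python
import time

COOLDOWN_SECONDS = 3 * 86400   # cool-down measured in days, not hours
MAX_PER_SESSION = 1

def should_show_paywall(shown_this_session: int, last_dismissed_at: float,
                        now: float) -> bool:
    """Gate a paywall prompt on a per-session cap and a post-dismiss cool-down.
    last_dismissed_at is a Unix timestamp, 0.0 if never dismissed."""
    if shown_this_session >= MAX_PER_SESSION:
        return False
    if last_dismissed_at and now - last_dismissed_at < COOLDOWN_SECONDS:
        return False
    return True

now = time.time()
print(should_show_paywall(0, 0.0, now))         # True: never shown or dismissed
print(should_show_paywall(0, now - 3600, now))  # False: dismissed an hour ago
print(should_show_paywall(1, 0.0, now))         # False: session cap reached
```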
+
+---
+
+## Upgrade Flow Optimization
+
+### From Paywall to Payment
+- Minimize steps
+- Keep in-context if possible
+- Pre-fill known information
+
+### Post-Upgrade
+- Immediate access to features
+- Confirmation and receipt
+- Guide to new features
+
+---
+
+## A/B Testing
+
+### What to Test
+- Trigger timing
+- Headline/copy variations
+- Price presentation
+- Trial length
+- Feature emphasis
+- Design/layout
+
+### Metrics to Track
+- Paywall impression rate
+- Click-through to upgrade
+- Completion rate
+- Revenue per user
+- Churn rate post-upgrade
+
+**For comprehensive experiment ideas**: See [references/experiments.md](references/experiments.md)
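When comparing paywall variants on click-through or completion rate, a quick significance check helps avoid shipping noise. A minimal two-proportion z-test using only the standard library (for production analysis, prefer a dedicated stats package):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Control paywall: 120/2000 upgraded; variant: 156/2000
z, p = two_proportion_z(120, 2000, 156, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so the lift is unlikely to be noise
```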
+
+---
+
+## Anti-Patterns to Avoid
+
+### Dark Patterns
+- Hiding the close button
+- Confusing plan selection
+- Guilt-trip copy
+
+### Conversion Killers
+- Asking before value delivered
+- Too frequent prompts
+- Blocking critical flows
+- Complicated upgrade process
+
+---
+
+## Task-Specific Questions
+
+1. What's your current free → paid conversion rate?
+2. What triggers upgrade prompts today?
+3. What features are behind the paywall?
+4. What's your "aha moment" for users?
+5. What pricing model? (per seat, usage, flat)
+6. Mobile app, web app, or both?
+
+---
+
+## Related Skills
+
+- **page-cro**: For public pricing page optimization
+- **onboarding-cro**: For driving to aha moment before upgrade
+- **ab-test-setup**: For testing paywall variations
diff --git a/skills/paywall-upgrade-cro/paywall-upgrade-cro b/skills/paywall-upgrade-cro/paywall-upgrade-cro
new file mode 120000
index 0000000..f0d8a1f
--- /dev/null
+++ b/skills/paywall-upgrade-cro/paywall-upgrade-cro
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/paywall-upgrade-cro/
\ No newline at end of file
diff --git a/skills/paywall-upgrade-cro/references/experiments.md b/skills/paywall-upgrade-cro/references/experiments.md
new file mode 100644
index 0000000..980db48
--- /dev/null
+++ b/skills/paywall-upgrade-cro/references/experiments.md
@@ -0,0 +1,155 @@
+# Paywall Experiment Ideas
+
+Comprehensive list of A/B tests and experiments for paywall optimization.
+
+## Trigger & Timing Experiments
+
+### When to Show
+- Test trigger timing: after aha moment vs. at feature attempt
+- Early trial reminder (7 days) vs. late reminder (1 day before)
+- Show after X actions completed vs. after X days
+- Test soft prompts at different engagement thresholds
+- Trigger based on usage patterns vs. time-based only
+
+### Trigger Type
+- Hard gate (can't proceed) vs. soft gate (preview + prompt)
+- Feature lock vs. usage limit as primary trigger
+- In-context modal vs. dedicated upgrade page
+- Banner reminder vs. modal prompt
+- Exit-intent on free plan pages
+
+---
+
+## Paywall Design Experiments
+
+### Layout & Format
+- Full-screen paywall vs. modal overlay
+- Minimal paywall (CTA-focused) vs. feature-rich paywall
+- Single plan display vs. plan comparison
+- Image/preview included vs. text-only
+- Vertical layout vs. horizontal layout on desktop
+
+### Value Presentation
+- Feature list vs. benefit statements
+- Show what they'll lose (loss aversion) vs. what they'll gain
+- Personalized value summary based on usage
+- Before/after demonstration
+- ROI calculator or value quantification
+
+### Visual Elements
+- Add product screenshots or previews
+- Include short demo video or GIF
+- Test illustration vs. product imagery
+- Animated vs. static paywall
+- Progress visualization (what they've accomplished)
+
+---
+
+## Pricing Presentation Experiments
+
+### Price Display
+- Show monthly vs. annual vs. both with toggle
+- Highlight savings for annual ($ amount vs. % off)
+- Price per day framing ("Less than a coffee")
+- Show price after trial vs. emphasize "Start Free"
+- Display price prominently vs. de-emphasize until click
+
+### Plan Options
+- Single recommended plan vs. multiple tiers
+- Add "Most Popular" badge to target plan
+- Test number of visible plans (2 vs. 3)
+- Show enterprise/custom tier vs. hide it
+- Include one-time purchase option alongside subscription
+
+### Discounts & Offers
+- First month/year discount for conversion
+- Limited-time upgrade offer with countdown
+- Loyalty discount based on free usage duration
+- Bundle discount for annual commitment
+- Referral discount for social proof
+
+---
+
+## Copy & Messaging Experiments
+
+### Headlines
+- Benefit-focused ("Unlock unlimited projects") vs. feature-focused ("Get Pro features")
+- Question format ("Ready to do more?") vs. statement format
+- Urgency-based ("Don't lose your work") vs. value-based
+- Personalized headline with user's name or usage data
+- Social proof headline ("Join 10,000+ Pro users")
+
+### CTAs
+- "Start Free Trial" vs. "Upgrade Now" vs. "Continue with Pro"
+- First person ("Start My Trial") vs. second person ("Start Your Trial")
+- Value-specific ("Unlock Unlimited") vs. generic ("Upgrade")
+- Add urgency ("Upgrade Today") vs. no pressure
+- Include price in CTA vs. separate price display
+
+### Objection Handling
+- Add money-back guarantee messaging
+- Show "Cancel anytime" prominently
+- Include FAQ on paywall
+- Address specific objections based on feature gated
+- Add chat/support option on paywall
+
+---
+
+## Trial & Conversion Experiments
+
+### Trial Structure
+- 7-day vs. 14-day vs. 30-day trial length
+- Credit card required vs. not required for trial
+- Full-access trial vs. limited feature trial
+- Trial extension offer for engaged users
+- Second trial offer for expired/churned users
+
+### Trial Expiration
+- Countdown timer visibility (always vs. near end)
+- Email reminders: frequency and timing
+- Grace period after expiration vs. immediate downgrade
+- "Last chance" offer with discount
+- Pause option vs. immediate cancellation
+
+### Upgrade Path
+- One-click upgrade from paywall vs. separate checkout
+- Pre-filled payment info for returning users
+- Multiple payment methods offered
+- Quarterly plan option alongside monthly/annual
+- Team invite flow for solo-to-team conversion
+
+---
+
+## Personalization Experiments
+
+### Usage-Based
+- Personalize paywall copy based on features used
+- Highlight most-used premium features
+- Show usage stats ("You've created 50 projects")
+- Recommend plan based on behavior patterns
+- Dynamic feature emphasis based on user segment
+
+### Segment-Specific
+- Different paywall for power users vs. casual users
+- B2B vs. B2C messaging variations
+- Industry-specific value propositions
+- Role-based feature highlighting
+- Traffic source-based messaging
+
+---
+
+## Frequency & UX Experiments
+
+### Frequency Capping
+- Test number of prompts per session
+- Cool-down period after dismiss (hours vs. days)
+- Escalating urgency over time vs. consistent messaging
+- Once per feature vs. consolidated prompts
+- Re-show rules after major engagement
+
+### Dismiss Behavior
+- "Maybe later" vs. "No thanks" vs. "Remind me tomorrow"
+- Ask reason for declining
+- Offer alternative (lower tier, annual discount)
+- Exit survey on dismiss
+- Friendly vs. neutral decline copy
diff --git a/skills/pdf/LICENSE.txt b/skills/pdf/LICENSE.txt
new file mode 100644
index 0000000..c55ab42
--- /dev/null
+++ b/skills/pdf/LICENSE.txt
@@ -0,0 +1,30 @@
+© 2025 Anthropic, PBC. All rights reserved.
+
+LICENSE: Use of these materials (including all code, prompts, assets, files,
+and other components of this Skill) is governed by your agreement with
+Anthropic regarding use of Anthropic's services. If no separate agreement
+exists, use is governed by Anthropic's Consumer Terms of Service or
+Commercial Terms of Service, as applicable:
+https://www.anthropic.com/legal/consumer-terms
+https://www.anthropic.com/legal/commercial-terms
+Your applicable agreement is referred to as the "Agreement." "Services" are
+as defined in the Agreement.
+
+ADDITIONAL RESTRICTIONS: Notwithstanding anything in the Agreement to the
+contrary, users may not:
+
+- Extract these materials from the Services or retain copies of these
+ materials outside the Services
+- Reproduce or copy these materials, except for temporary copies created
+ automatically during authorized use of the Services
+- Create derivative works based on these materials
+- Distribute, sublicense, or transfer these materials to any third party
+- Make, offer to sell, sell, or import any inventions embodied in these
+ materials
+- Reverse engineer, decompile, or disassemble these materials
+
+The receipt, viewing, or possession of these materials does not convey or
+imply any license or right beyond those expressly granted above.
+
+Anthropic retains all right, title, and interest in these materials,
+including all copyrights, patents, and other intellectual property rights.
diff --git a/skills/pdf/SKILL.md b/skills/pdf/SKILL.md
new file mode 100644
index 0000000..d3e046a
--- /dev/null
+++ b/skills/pdf/SKILL.md
@@ -0,0 +1,314 @@
+---
+name: pdf
+description: Use this skill whenever the user wants to do anything with PDF files. This includes reading or extracting text/tables from PDFs, combining or merging multiple PDFs into one, splitting PDFs apart, rotating pages, adding watermarks, creating new PDFs, filling PDF forms, encrypting/decrypting PDFs, extracting images, and OCR on scanned PDFs to make them searchable. If the user mentions a .pdf file or asks to produce one, use this skill.
+license: Proprietary. LICENSE.txt has complete terms
+---
+
+# PDF Processing Guide
+
+## Overview
+
+This guide covers essential PDF processing operations using Python libraries and command-line tools. For advanced features, JavaScript libraries, and detailed examples, see REFERENCE.md. If you need to fill out a PDF form, read FORMS.md and follow its instructions.
+
+## Quick Start
+
+```python
+from pypdf import PdfReader, PdfWriter
+
+# Read a PDF
+reader = PdfReader("document.pdf")
+print(f"Pages: {len(reader.pages)}")
+
+# Extract text
+text = ""
+for page in reader.pages:
+ text += page.extract_text()
+```
+
+## Python Libraries
+
+### pypdf - Basic Operations
+
+#### Merge PDFs
+```python
+from pypdf import PdfWriter, PdfReader
+
+writer = PdfWriter()
+for pdf_file in ["doc1.pdf", "doc2.pdf", "doc3.pdf"]:
+ reader = PdfReader(pdf_file)
+ for page in reader.pages:
+ writer.add_page(page)
+
+with open("merged.pdf", "wb") as output:
+ writer.write(output)
+```
+
+#### Split PDF
+```python
+reader = PdfReader("input.pdf")
+for i, page in enumerate(reader.pages):
+ writer = PdfWriter()
+ writer.add_page(page)
+ with open(f"page_{i+1}.pdf", "wb") as output:
+ writer.write(output)
+```
+
+#### Extract Metadata
+```python
+reader = PdfReader("document.pdf")
+meta = reader.metadata
+print(f"Title: {meta.title}")
+print(f"Author: {meta.author}")
+print(f"Subject: {meta.subject}")
+print(f"Creator: {meta.creator}")
+```
+
+#### Rotate Pages
+```python
+reader = PdfReader("input.pdf")
+writer = PdfWriter()
+
+page = reader.pages[0]
+page.rotate(90) # Rotate 90 degrees clockwise
+writer.add_page(page)
+
+with open("rotated.pdf", "wb") as output:
+ writer.write(output)
+```
+
+### pdfplumber - Text and Table Extraction
+
+#### Extract Text with Layout
+```python
+import pdfplumber
+
+with pdfplumber.open("document.pdf") as pdf:
+ for page in pdf.pages:
+ text = page.extract_text()
+ print(text)
+```
+
+#### Extract Tables
+```python
+with pdfplumber.open("document.pdf") as pdf:
+ for i, page in enumerate(pdf.pages):
+ tables = page.extract_tables()
+ for j, table in enumerate(tables):
+ print(f"Table {j+1} on page {i+1}:")
+ for row in table:
+ print(row)
+```
+
+#### Advanced Table Extraction
+```python
+import pandas as pd
+
+with pdfplumber.open("document.pdf") as pdf:
+ all_tables = []
+ for page in pdf.pages:
+ tables = page.extract_tables()
+ for table in tables:
+ if table: # Check if table is not empty
+ df = pd.DataFrame(table[1:], columns=table[0])
+ all_tables.append(df)
+
+# Combine all tables
+if all_tables:
+ combined_df = pd.concat(all_tables, ignore_index=True)
+ combined_df.to_excel("extracted_tables.xlsx", index=False)
+```
+
+### reportlab - Create PDFs
+
+#### Basic PDF Creation
+```python
+from reportlab.lib.pagesizes import letter
+from reportlab.pdfgen import canvas
+
+c = canvas.Canvas("hello.pdf", pagesize=letter)
+width, height = letter
+
+# Add text
+c.drawString(100, height - 100, "Hello World!")
+c.drawString(100, height - 120, "This is a PDF created with reportlab")
+
+# Add a line
+c.line(100, height - 140, 400, height - 140)
+
+# Save
+c.save()
+```
+
+#### Create PDF with Multiple Pages
+```python
+from reportlab.lib.pagesizes import letter
+from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, PageBreak
+from reportlab.lib.styles import getSampleStyleSheet
+
+doc = SimpleDocTemplate("report.pdf", pagesize=letter)
+styles = getSampleStyleSheet()
+story = []
+
+# Add content
+title = Paragraph("Report Title", styles['Title'])
+story.append(title)
+story.append(Spacer(1, 12))
+
+body = Paragraph("This is the body of the report. " * 20, styles['Normal'])
+story.append(body)
+story.append(PageBreak())
+
+# Page 2
+story.append(Paragraph("Page 2", styles['Heading1']))
+story.append(Paragraph("Content for page 2", styles['Normal']))
+
+# Build PDF
+doc.build(story)
+```
+
+#### Subscripts and Superscripts
+
+**IMPORTANT**: Never use Unicode subscript/superscript characters (₀₁₂₃₄₅₆₇₈₉, ⁰¹²³⁴⁵⁶⁷⁸⁹) in ReportLab PDFs. The built-in fonts do not include these glyphs, causing them to render as solid black boxes.
+
+Instead, use ReportLab's XML markup tags in Paragraph objects:
+```python
+from reportlab.platypus import Paragraph
+from reportlab.lib.styles import getSampleStyleSheet
+
+styles = getSampleStyleSheet()
+
+# Subscripts: use the <sub> tag
+chemical = Paragraph("H<sub>2</sub>O", styles['Normal'])
+
+# Superscripts: use the <super> tag
+squared = Paragraph("x<super>2</super> + y<super>2</super>", styles['Normal'])
+```
+
+For canvas-drawn text (not Paragraph objects), manually adjust the font size and position rather than using Unicode subscripts/superscripts.
+
+## Command-Line Tools
+
+### pdftotext (poppler-utils)
+```bash
+# Extract text
+pdftotext input.pdf output.txt
+
+# Extract text preserving layout
+pdftotext -layout input.pdf output.txt
+
+# Extract specific pages
+pdftotext -f 1 -l 5 input.pdf output.txt # Pages 1-5
+```
+
+### qpdf
+```bash
+# Merge PDFs
+qpdf --empty --pages file1.pdf file2.pdf -- merged.pdf
+
+# Split pages
+qpdf input.pdf --pages . 1-5 -- pages1-5.pdf
+qpdf input.pdf --pages . 6-10 -- pages6-10.pdf
+
+# Rotate pages
+qpdf input.pdf output.pdf --rotate=+90:1 # Rotate page 1 by 90 degrees
+
+# Remove password
+qpdf --password=mypassword --decrypt encrypted.pdf decrypted.pdf
+```
+
+### pdftk (if available)
+```bash
+# Merge
+pdftk file1.pdf file2.pdf cat output merged.pdf
+
+# Split
+pdftk input.pdf burst
+
+# Rotate
+pdftk input.pdf rotate 1east output rotated.pdf
+```
+
+## Common Tasks
+
+### Extract Text from Scanned PDFs
+```python
+# Requires: pip install pytesseract pdf2image
+import pytesseract
+from pdf2image import convert_from_path
+
+# Convert PDF to images
+images = convert_from_path('scanned.pdf')
+
+# OCR each page
+text = ""
+for i, image in enumerate(images):
+ text += f"Page {i+1}:\n"
+ text += pytesseract.image_to_string(image)
+ text += "\n\n"
+
+print(text)
+```
+
+### Add Watermark
+```python
+from pypdf import PdfReader, PdfWriter
+
+# Create watermark (or load existing)
+watermark = PdfReader("watermark.pdf").pages[0]
+
+# Apply to all pages
+reader = PdfReader("document.pdf")
+writer = PdfWriter()
+
+for page in reader.pages:
+ page.merge_page(watermark)
+ writer.add_page(page)
+
+with open("watermarked.pdf", "wb") as output:
+ writer.write(output)
+```
+
+### Extract Images
+```bash
+# Using pdfimages (poppler-utils)
+pdfimages -j input.pdf output_prefix
+
+# This extracts all images as output_prefix-000.jpg, output_prefix-001.jpg, etc.
+```
+
+### Password Protection
+```python
+from pypdf import PdfReader, PdfWriter
+
+reader = PdfReader("input.pdf")
+writer = PdfWriter()
+
+for page in reader.pages:
+ writer.add_page(page)
+
+# Add password
+writer.encrypt("userpassword", "ownerpassword")
+
+with open("encrypted.pdf", "wb") as output:
+ writer.write(output)
+```
+
+## Quick Reference
+
+| Task | Best Tool | Command/Code |
+|------|-----------|--------------|
+| Merge PDFs | pypdf | `writer.add_page(page)` |
+| Split PDFs | pypdf | One page per file |
+| Extract text | pdfplumber | `page.extract_text()` |
+| Extract tables | pdfplumber | `page.extract_tables()` |
+| Create PDFs | reportlab | Canvas or Platypus |
+| Command line merge | qpdf | `qpdf --empty --pages ...` |
+| OCR scanned PDFs | pytesseract | Convert to image first |
+| Fill PDF forms | pdf-lib or pypdf (see FORMS.md) | See FORMS.md |
+
+## Next Steps
+
+- For advanced pypdfium2 usage, see REFERENCE.md
+- For JavaScript libraries (pdf-lib), see REFERENCE.md
+- If you need to fill out a PDF form, follow the instructions in FORMS.md
+- For troubleshooting guides, see REFERENCE.md
diff --git a/skills/pdf/forms.md b/skills/pdf/forms.md
new file mode 100644
index 0000000..6e7e1e0
--- /dev/null
+++ b/skills/pdf/forms.md
@@ -0,0 +1,294 @@
+**CRITICAL: You MUST complete these steps in order. Do not skip ahead to writing code.**
+
+If you need to fill out a PDF form, first check whether the PDF has fillable form fields. Run this script from this file's directory:
+ `python scripts/check_fillable_fields.py <pdf-path>`, and depending on the result go to either the "Fillable fields" or "Non-fillable fields" section and follow those instructions.
+
+# Fillable fields
+If the PDF has fillable form fields:
+- Run this script from this file's directory: `python scripts/extract_form_field_info.py <pdf-path>`. It will create a JSON file with a list of fields in this format:
+```
+[
+ {
+ "field_id": (unique ID for the field),
+ "page": (page number, 1-based),
+ "rect": ([left, bottom, right, top] bounding box in PDF coordinates, y=0 is the bottom of the page),
+ "type": ("text", "checkbox", "radio_group", or "choice"),
+ },
+ // Checkboxes have "checked_value" and "unchecked_value" properties:
+ {
+ "field_id": (unique ID for the field),
+ "page": (page number, 1-based),
+ "type": "checkbox",
+ "checked_value": (Set the field to this value to check the checkbox),
+ "unchecked_value": (Set the field to this value to uncheck the checkbox),
+ },
+ // Radio groups have a "radio_options" list with the possible choices.
+ {
+ "field_id": (unique ID for the field),
+ "page": (page number, 1-based),
+ "type": "radio_group",
+ "radio_options": [
+ {
+ "value": (set the field to this value to select this radio option),
+ "rect": (bounding box for the radio button for this option)
+ },
+ // Other radio options
+ ]
+ },
+ // Multiple choice fields have a "choice_options" list with the possible choices:
+ {
+ "field_id": (unique ID for the field),
+ "page": (page number, 1-based),
+ "type": "choice",
+ "choice_options": [
+ {
+ "value": (set the field to this value to select this option),
+ "text": (display text of the option)
+ },
+ // Other choice options
+ ],
+ }
+]
+```
+- Convert the PDF to PNGs (one image for each page) with this script (run from this file's directory):
+`python scripts/convert_pdf_to_images.py <pdf-path> <output-image-dir>`
+Then analyze the images to determine the purpose of each form field (make sure to convert the bounding box PDF coordinates to image coordinates).
+- Create a `field_values.json` file in this format with the values to be entered for each field:
+```
+[
+ {
+ "field_id": "last_name", // Must match the field_id from `extract_form_field_info.py`
+ "description": "The user's last name",
+ "page": 1, // Must match the "page" value in field_info.json
+ "value": "Simpson"
+ },
+ {
+ "field_id": "Checkbox12",
+ "description": "Checkbox to be checked if the user is 18 or over",
+ "page": 1,
+ "value": "/On" // If this is a checkbox, use its "checked_value" value to check it. If it's a radio button group, use one of the "value" values in "radio_options".
+ },
+ // more fields
+]
+```
+- Run the `fill_fillable_fields.py` script from this file's directory to create a filled-in PDF:
+`python scripts/fill_fillable_fields.py <input-pdf> <field-values-json> <output-pdf>`
+This script will verify that the field IDs and values you provide are valid; if it prints error messages, correct the appropriate fields and try again.
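The PDF-to-image coordinate conversion mentioned above (rects from `extract_form_field_info.py` have y=0 at the page bottom; image pixels have y=0 at the top) can be sketched as follows. This is a minimal illustration assuming the pages were rendered at a uniform scale; the helper name is not part of the provided scripts:

```python
def pdf_rect_to_image(rect, pdf_height, scale):
    """Map a [left, bottom, right, top] PDF rect (y=0 at the page bottom)
    to image pixel coordinates (y=0 at the top, growing downward)."""
    left, bottom, right, top = rect
    return (left * scale,                   # image x0
            (pdf_height - top) * scale,     # image y0 (top edge)
            right * scale,                  # image x1
            (pdf_height - bottom) * scale)  # image y1 (bottom edge)

# A field at [100, 700, 250, 715] on a 792 pt page rendered at 2x:
print(pdf_rect_to_image([100, 700, 250, 715], 792, 2.0))
```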
+
+# Non-fillable fields
+If the PDF doesn't have fillable form fields, you'll add text annotations. First try to extract coordinates from the PDF structure (more accurate), then fall back to visual estimation if needed.
+
+## Step 1: Try Structure Extraction First
+
+Run this script to extract text labels, lines, and checkboxes with their exact PDF coordinates:
+`python scripts/extract_form_structure.py <pdf-path> form_structure.json`
+
+This creates a JSON file containing:
+- **labels**: Every text element with exact coordinates (x0, top, x1, bottom in PDF points)
+- **lines**: Horizontal lines that define row boundaries
+- **checkboxes**: Small square rectangles that are checkboxes (with center coordinates)
+- **row_boundaries**: Row top/bottom positions calculated from horizontal lines
+
+**Check the results**: If `form_structure.json` has meaningful labels (text elements that correspond to form fields), use **Approach A: Structure-Based Coordinates**. If the PDF is scanned/image-based and has few or no labels, use **Approach B: Visual Estimation**.
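One quick way to make this decision programmatically is to filter out `(cid:N)` glyph runs; the sketch below assumes only the JSON keys described above (the helper name is illustrative):

```python
def usable_labels(structure):
    """Return label entries with real extractable text, skipping '(cid:N)' runs."""
    return [label for label in structure.get("labels", [])
            if label.get("text") and "(cid:" not in label["text"]]

# Usage after running extract_form_structure.py:
#   import json
#   structure = json.load(open("form_structure.json"))
#   approach = "A (structure-based)" if usable_labels(structure) else "B (visual)"
```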
+
+---
+
+## Approach A: Structure-Based Coordinates (Preferred)
+
+Use this when `extract_form_structure.py` found text labels in the PDF.
+
+### A.1: Analyze the Structure
+
+Read form_structure.json and identify:
+
+1. **Label groups**: Adjacent text elements that form a single label (e.g., "Last" + "Name")
+2. **Row structure**: Labels with similar `top` values are in the same row
+3. **Field columns**: Entry areas start after label ends (x0 = label.x1 + gap)
+4. **Checkboxes**: Use the checkbox coordinates directly from the structure
+
+**Coordinate system**: PDF coordinates where y=0 is at TOP of page, y increases downward.
+
+### A.2: Check for Missing Elements
+
+The structure extraction may not detect all form elements. Common cases:
+- **Circular checkboxes**: Only square rectangles are detected as checkboxes
+- **Complex graphics**: Decorative elements or non-standard form controls
+- **Faded or light-colored elements**: May not be extracted
+
+If you see form fields in the PDF images that aren't in form_structure.json, you'll need to use **visual analysis** for those specific fields (see "Hybrid Approach" below).
+
+### A.3: Create fields.json with PDF Coordinates
+
+For each field, calculate entry coordinates from the extracted structure:
+
+**Text fields:**
+- entry x0 = label x1 + 5 (small gap after label)
+- entry x1 = next label's x0, or row boundary
+- entry top = same as label top
+- entry bottom = row boundary line below, or label bottom + row_height
+
+**Checkboxes:**
+- Use the checkbox rectangle coordinates directly from form_structure.json
+- entry_bounding_box = [checkbox.x0, checkbox.top, checkbox.x1, checkbox.bottom]
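The text-field rules above can be sketched as a small helper. This is a minimal illustration; the 5 pt gap and the dict keys mirror the description, and the function name is not part of the provided scripts:

```python
LABEL_GAP = 5  # small gap after the label, per the rule above

def text_entry_box(label, next_label_x0, row_bottom):
    """Derive an entry box [x0, top, x1, bottom] from a label's box,
    the next label's left edge, and the row boundary below."""
    return [label["x1"] + LABEL_GAP, label["top"], next_label_x0, row_bottom]

# Reproduces the "Last Name" entry box used in the fields.json example:
print(text_entry_box({"x1": 87, "top": 63}, next_label_x0=260, row_bottom=79))
```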
+
+Create fields.json using `pdf_width` and `pdf_height` (signals PDF coordinates):
+```json
+{
+ "pages": [
+ {"page_number": 1, "pdf_width": 612, "pdf_height": 792}
+ ],
+ "form_fields": [
+ {
+ "page_number": 1,
+ "description": "Last name entry field",
+ "field_label": "Last Name",
+ "label_bounding_box": [43, 63, 87, 73],
+ "entry_bounding_box": [92, 63, 260, 79],
+ "entry_text": {"text": "Smith", "font_size": 10}
+ },
+ {
+ "page_number": 1,
+ "description": "US Citizen Yes checkbox",
+ "field_label": "Yes",
+ "label_bounding_box": [260, 200, 280, 210],
+ "entry_bounding_box": [285, 197, 292, 205],
+ "entry_text": {"text": "X"}
+ }
+ ]
+}
+```
+
+**Important**: Use `pdf_width`/`pdf_height` and coordinates directly from form_structure.json.
+
+### A.4: Validate Bounding Boxes
+
+Before filling, check your bounding boxes for errors:
+`python scripts/check_bounding_boxes.py fields.json`
+
+This checks for intersecting bounding boxes and entry boxes that are too small for the font size. Fix any reported errors before filling.
+
+---
+
+## Approach B: Visual Estimation (Fallback)
+
+Use this when the PDF is scanned/image-based and structure extraction found no usable text labels (e.g., all text shows as "(cid:X)" patterns).
+
+### B.1: Convert PDF to Images
+
+`python scripts/convert_pdf_to_images.py <pdf-path> <output-image-dir>`
+
+### B.2: Initial Field Identification
+
+Examine each page image to identify form sections and get **rough estimates** of field locations:
+- Form field labels and their approximate positions
+- Entry areas (lines, boxes, or blank spaces for text input)
+- Checkboxes and their approximate locations
+
+For each field, note approximate pixel coordinates (they don't need to be precise yet).
+
+### B.3: Zoom Refinement (CRITICAL for accuracy)
+
+For each field, crop a region around the estimated position to refine coordinates precisely.
+
+**Create a zoomed crop using ImageMagick:**
+```bash
+magick <input-image> -crop <width>x<height>+<left>+<top> +repage <output-image>
+```
+
+Where:
+- `<left>`, `<top>` = top-left corner of crop region (use your rough estimate minus padding)
+- `<width>`, `<height>` = size of crop region (field area plus ~50px padding on each side)
+
+**Example:** To refine a "Name" field estimated around (100, 150):
+```bash
+magick images_dir/page_1.png -crop 300x80+50+120 +repage crops/name_field.png
+```
+
+(Note: if the `magick` command isn't available, try `convert` with the same arguments).
+
+**Examine the cropped image** to determine precise coordinates:
+1. Identify the exact pixel where the entry area begins (after the label)
+2. Identify where the entry area ends (before next field or edge)
+3. Identify the top and bottom of the entry line/box
+
+**Convert crop coordinates back to full image coordinates:**
+- full_x = crop_x + crop_offset_x
+- full_y = crop_y + crop_offset_y
+
+Example: If the crop started at (50, 120) and the entry box starts at (52, 18) within the crop:
+- entry_x0 = 52 + 50 = 102
+- entry_top = 18 + 120 = 138
+
+**Repeat for each field**, grouping nearby fields into single crops when possible.
+
+### B.4: Create fields.json with Refined Coordinates
+
+Create fields.json using `image_width` and `image_height` (signals image coordinates):
+```json
+{
+ "pages": [
+ {"page_number": 1, "image_width": 1700, "image_height": 2200}
+ ],
+ "form_fields": [
+ {
+ "page_number": 1,
+ "description": "Last name entry field",
+ "field_label": "Last Name",
+ "label_bounding_box": [120, 175, 242, 198],
+ "entry_bounding_box": [255, 175, 720, 218],
+ "entry_text": {"text": "Smith", "font_size": 10}
+ }
+ ]
+}
+```
+
+**Important**: Use `image_width`/`image_height` and the refined pixel coordinates from the zoom analysis.
+
+### B.5: Validate Bounding Boxes
+
+Before filling, check your bounding boxes for errors:
+`python scripts/check_bounding_boxes.py fields.json`
+
+This checks for intersecting bounding boxes and entry boxes that are too small for the font size. Fix any reported errors before filling.
+
+---
+
+## Hybrid Approach: Structure + Visual
+
+Use this when structure extraction works for most fields but misses some elements (e.g., circular checkboxes, unusual form controls).
+
+1. **Use Approach A** for fields that were detected in form_structure.json
+2. **Convert PDF to images** for visual analysis of missing fields
+3. **Use zoom refinement** (from Approach B) for the missing fields
+4. **Combine coordinates**: For fields from structure extraction, use `pdf_width`/`pdf_height`. For visually-estimated fields, you must convert image coordinates to PDF coordinates:
+ - pdf_x = image_x * (pdf_width / image_width)
+ - pdf_y = image_y * (pdf_height / image_height)
+5. **Use a single coordinate system** in fields.json - convert all to PDF coordinates with `pdf_width`/`pdf_height`
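The conversion in step 4 can be wrapped in a small helper (a sketch; the function name is illustrative). Both coordinate systems are measured from the top-left, as in form_structure.json, so a uniform scale per axis suffices:

```python
def image_to_pdf(x, y, image_size, pdf_size):
    """Scale image pixel coordinates to PDF points (same top-left origin)."""
    image_width, image_height = image_size
    pdf_width, pdf_height = pdf_size
    return x * pdf_width / image_width, y * pdf_height / image_height

# A pixel at (255, 175) on a 1700x2200 render of a US Letter page (612x792 pt):
print(image_to_pdf(255, 175, (1700, 2200), (612, 792)))
```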
+
+---
+
+## Step 2: Validate Before Filling
+
+**Always validate bounding boxes before filling:**
+`python scripts/check_bounding_boxes.py fields.json`
+
+This checks for:
+- Intersecting bounding boxes (which would cause overlapping text)
+- Entry boxes that are too small for the specified font size
+
+Fix any reported errors in fields.json before proceeding.
+
+## Step 3: Fill the Form
+
+The fill script auto-detects the coordinate system and handles conversion:
+`python scripts/fill_pdf_form_with_annotations.py fields.json <input-pdf> <output-pdf>`
+
+## Step 4: Verify Output
+
+Convert the filled PDF to images and verify text placement:
+`python scripts/convert_pdf_to_images.py <filled-pdf> <output-image-dir>`
+
+If text is mispositioned:
+- **Approach A**: Check that you're using PDF coordinates from form_structure.json with `pdf_width`/`pdf_height`
+- **Approach B**: Check that image dimensions match and coordinates are accurate pixels
+- **Hybrid**: Ensure coordinate conversions are correct for visually-estimated fields
diff --git a/skills/pdf/pdf b/skills/pdf/pdf
new file mode 120000
index 0000000..f3cf17c
--- /dev/null
+++ b/skills/pdf/pdf
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/pdf/
\ No newline at end of file
diff --git a/skills/pdf/reference.md b/skills/pdf/reference.md
new file mode 100644
index 0000000..41400bf
--- /dev/null
+++ b/skills/pdf/reference.md
@@ -0,0 +1,612 @@
+# PDF Processing Advanced Reference
+
+This document contains advanced PDF processing features, detailed examples, and additional libraries not covered in the main skill instructions.
+
+## pypdfium2 Library (Apache/BSD License)
+
+### Overview
+pypdfium2 is a Python binding for PDFium (Chromium's PDF library). It's excellent for fast PDF rendering, image generation, and serves as a PyMuPDF replacement.
+
+### Render PDF to Images
+```python
+import pypdfium2 as pdfium
+from PIL import Image
+
+# Load PDF
+pdf = pdfium.PdfDocument("document.pdf")
+
+# Render page to image
+page = pdf[0] # First page
+bitmap = page.render(
+ scale=2.0, # Higher resolution
+ rotation=0 # No rotation
+)
+
+# Convert to PIL Image
+img = bitmap.to_pil()
+img.save("page_1.png", "PNG")
+
+# Process multiple pages
+for i, page in enumerate(pdf):
+ bitmap = page.render(scale=1.5)
+ img = bitmap.to_pil()
+ img.save(f"page_{i+1}.jpg", "JPEG", quality=90)
+```
+
+### Extract Text with pypdfium2
+```python
+import pypdfium2 as pdfium
+
+pdf = pdfium.PdfDocument("document.pdf")
+for i, page in enumerate(pdf):
+ textpage = page.get_textpage()
+ text = textpage.get_text_range()
+ print(f"Page {i+1} text length: {len(text)} chars")
+```
+
+## JavaScript Libraries
+
+### pdf-lib (MIT License)
+
+pdf-lib is a powerful JavaScript library for creating and modifying PDF documents in any JavaScript environment.
+
+#### Load and Manipulate Existing PDF
+```javascript
+import { PDFDocument } from 'pdf-lib';
+import fs from 'fs';
+
+async function manipulatePDF() {
+ // Load existing PDF
+ const existingPdfBytes = fs.readFileSync('input.pdf');
+ const pdfDoc = await PDFDocument.load(existingPdfBytes);
+
+ // Get page count
+ const pageCount = pdfDoc.getPageCount();
+ console.log(`Document has ${pageCount} pages`);
+
+ // Add new page
+ const newPage = pdfDoc.addPage([600, 400]);
+ newPage.drawText('Added by pdf-lib', {
+ x: 100,
+ y: 300,
+ size: 16
+ });
+
+ // Save modified PDF
+ const pdfBytes = await pdfDoc.save();
+ fs.writeFileSync('modified.pdf', pdfBytes);
+}
+```
+
+#### Create Complex PDFs from Scratch
+```javascript
+import { PDFDocument, rgb, StandardFonts } from 'pdf-lib';
+import fs from 'fs';
+
+async function createPDF() {
+ const pdfDoc = await PDFDocument.create();
+
+ // Add fonts
+ const helveticaFont = await pdfDoc.embedFont(StandardFonts.Helvetica);
+ const helveticaBold = await pdfDoc.embedFont(StandardFonts.HelveticaBold);
+
+ // Add page
+ const page = pdfDoc.addPage([595, 842]); // A4 size
+ const { width, height } = page.getSize();
+
+ // Add text with styling
+ page.drawText('Invoice #12345', {
+ x: 50,
+ y: height - 50,
+ size: 18,
+ font: helveticaBold,
+ color: rgb(0.2, 0.2, 0.8)
+ });
+
+ // Add rectangle (header background)
+ page.drawRectangle({
+ x: 40,
+ y: height - 100,
+ width: width - 80,
+ height: 30,
+ color: rgb(0.9, 0.9, 0.9)
+ });
+
+ // Add table-like content
+ const items = [
+ ['Item', 'Qty', 'Price', 'Total'],
+ ['Widget', '2', '$50', '$100'],
+ ['Gadget', '1', '$75', '$75']
+ ];
+
+ let yPos = height - 150;
+ items.forEach(row => {
+ let xPos = 50;
+ row.forEach(cell => {
+ page.drawText(cell, {
+ x: xPos,
+ y: yPos,
+ size: 12,
+ font: helveticaFont
+ });
+ xPos += 120;
+ });
+ yPos -= 25;
+ });
+
+ const pdfBytes = await pdfDoc.save();
+ fs.writeFileSync('created.pdf', pdfBytes);
+}
+```
+
+#### Advanced Merge and Split Operations
+```javascript
+import { PDFDocument } from 'pdf-lib';
+import fs from 'fs';
+
+async function mergePDFs() {
+ // Create new document
+ const mergedPdf = await PDFDocument.create();
+
+ // Load source PDFs
+ const pdf1Bytes = fs.readFileSync('doc1.pdf');
+ const pdf2Bytes = fs.readFileSync('doc2.pdf');
+
+ const pdf1 = await PDFDocument.load(pdf1Bytes);
+ const pdf2 = await PDFDocument.load(pdf2Bytes);
+
+ // Copy pages from first PDF
+ const pdf1Pages = await mergedPdf.copyPages(pdf1, pdf1.getPageIndices());
+ pdf1Pages.forEach(page => mergedPdf.addPage(page));
+
+ // Copy specific pages from second PDF (pages 0, 2, 4)
+ const pdf2Pages = await mergedPdf.copyPages(pdf2, [0, 2, 4]);
+ pdf2Pages.forEach(page => mergedPdf.addPage(page));
+
+ const mergedPdfBytes = await mergedPdf.save();
+ fs.writeFileSync('merged.pdf', mergedPdfBytes);
+}
+```
+
+### pdfjs-dist (Apache License)
+
+PDF.js is Mozilla's JavaScript library for rendering PDFs in the browser.
+
+#### Basic PDF Loading and Rendering
+```javascript
+import * as pdfjsLib from 'pdfjs-dist';
+
+// Configure worker (important for performance)
+pdfjsLib.GlobalWorkerOptions.workerSrc = './pdf.worker.js';
+
+async function renderPDF() {
+ // Load PDF
+ const loadingTask = pdfjsLib.getDocument('document.pdf');
+ const pdf = await loadingTask.promise;
+
+ console.log(`Loaded PDF with ${pdf.numPages} pages`);
+
+ // Get first page
+ const page = await pdf.getPage(1);
+ const viewport = page.getViewport({ scale: 1.5 });
+
+ // Render to canvas
+ const canvas = document.createElement('canvas');
+ const context = canvas.getContext('2d');
+ canvas.height = viewport.height;
+ canvas.width = viewport.width;
+
+ const renderContext = {
+ canvasContext: context,
+ viewport: viewport
+ };
+
+ await page.render(renderContext).promise;
+ document.body.appendChild(canvas);
+}
+```
+
+#### Extract Text with Coordinates
+```javascript
+import * as pdfjsLib from 'pdfjs-dist';
+
+async function extractText() {
+ const loadingTask = pdfjsLib.getDocument('document.pdf');
+ const pdf = await loadingTask.promise;
+
+ let fullText = '';
+
+ // Extract text from all pages
+ for (let i = 1; i <= pdf.numPages; i++) {
+ const page = await pdf.getPage(i);
+ const textContent = await page.getTextContent();
+
+ const pageText = textContent.items
+ .map(item => item.str)
+ .join(' ');
+
+ fullText += `\n--- Page ${i} ---\n${pageText}`;
+
+ // Get text with coordinates for advanced processing
+ const textWithCoords = textContent.items.map(item => ({
+ text: item.str,
+ x: item.transform[4],
+ y: item.transform[5],
+ width: item.width,
+ height: item.height
+ }));
+ }
+
+ console.log(fullText);
+ return fullText;
+}
+```
+
+#### Extract Annotations and Forms
+```javascript
+import * as pdfjsLib from 'pdfjs-dist';
+
+async function extractAnnotations() {
+ const loadingTask = pdfjsLib.getDocument('annotated.pdf');
+ const pdf = await loadingTask.promise;
+
+ for (let i = 1; i <= pdf.numPages; i++) {
+ const page = await pdf.getPage(i);
+ const annotations = await page.getAnnotations();
+
+ annotations.forEach(annotation => {
+ console.log(`Annotation type: ${annotation.subtype}`);
+ console.log(`Content: ${annotation.contents}`);
+ console.log(`Coordinates: ${JSON.stringify(annotation.rect)}`);
+ });
+ }
+}
+```
+
+## Advanced Command-Line Operations
+
+### poppler-utils Advanced Features
+
+#### Extract Text with Bounding Box Coordinates
+```bash
+# Extract text with bounding box coordinates (essential for structured data)
+pdftotext -bbox-layout document.pdf output.xml
+
+# The XML output contains precise coordinates for each text element
+```
+
+#### Advanced Image Conversion
+```bash
+# Convert to PNG images with specific resolution
+pdftoppm -png -r 300 document.pdf output_prefix
+
+# Convert specific page range with high resolution
+pdftoppm -png -r 600 -f 1 -l 3 document.pdf high_res_pages
+
+# Convert to JPEG with quality setting
+pdftoppm -jpeg -jpegopt quality=85 -r 200 document.pdf jpeg_output
+```
+
+#### Extract Embedded Images
+```bash
+# Extract all embedded images with metadata
+pdfimages -j -p document.pdf page_images
+
+# List image info without extracting
+pdfimages -list document.pdf
+
+# Extract images in their original format
+pdfimages -all document.pdf images/img
+```
+
+### qpdf Advanced Features
+
+#### Complex Page Manipulation
+```bash
+# Split PDF into groups of pages
+qpdf --split-pages=3 input.pdf output_group_%02d.pdf
+
+# Extract specific pages with complex ranges
+qpdf input.pdf --pages input.pdf 1,3-5,8,10-end -- extracted.pdf
+
+# Merge specific pages from multiple PDFs
+qpdf --empty --pages doc1.pdf 1-3 doc2.pdf 5-7 doc3.pdf 2,4 -- combined.pdf
+```
+
+#### PDF Optimization and Repair
+```bash
+# Optimize PDF for web (linearize for streaming)
+qpdf --linearize input.pdf optimized.pdf
+
+# Remove unused objects and compress
+qpdf --object-streams=generate --compress-streams=y input.pdf compressed.pdf
+
+# Attempt to repair corrupted PDF structure
+qpdf --check input.pdf
+qpdf damaged.pdf repaired.pdf # qpdf attempts recovery automatically when rewriting
+
+# Show detailed PDF structure for debugging
+qpdf --show-pages input.pdf > structure.txt
+```
+
+#### Advanced Encryption
+```bash
+# Add password protection with specific permissions
+qpdf --encrypt user_pass owner_pass 256 --print=none --modify=none -- input.pdf encrypted.pdf
+
+# Check encryption status
+qpdf --show-encryption encrypted.pdf
+
+# Remove password protection (requires password)
+qpdf --password=secret123 --decrypt encrypted.pdf decrypted.pdf
+```
+
+## Advanced Python Techniques
+
+### pdfplumber Advanced Features
+
+#### Extract Text with Precise Coordinates
+```python
+import pdfplumber
+
+with pdfplumber.open("document.pdf") as pdf:
+ page = pdf.pages[0]
+
+ # Extract all text with coordinates
+ chars = page.chars
+ for char in chars[:10]: # First 10 characters
+ print(f"Char: '{char['text']}' at x:{char['x0']:.1f} y:{char['y0']:.1f}")
+
+ # Extract text by bounding box (left, top, right, bottom)
+ bbox_text = page.within_bbox((100, 100, 400, 200)).extract_text()
+```
+
+#### Advanced Table Extraction with Custom Settings
+```python
+import pdfplumber
+import pandas as pd
+
+with pdfplumber.open("complex_table.pdf") as pdf:
+ page = pdf.pages[0]
+
+ # Extract tables with custom settings for complex layouts
+ table_settings = {
+ "vertical_strategy": "lines",
+ "horizontal_strategy": "lines",
+ "snap_tolerance": 3,
+ "intersection_tolerance": 15
+ }
+ tables = page.extract_tables(table_settings)
+
+ # Visual debugging for table extraction
+ img = page.to_image(resolution=150)
+ img.save("debug_layout.png")
+```
+
+### reportlab Advanced Features
+
+#### Create Professional Reports with Tables
+```python
+from reportlab.platypus import SimpleDocTemplate, Table, TableStyle, Paragraph
+from reportlab.lib.styles import getSampleStyleSheet
+from reportlab.lib import colors
+
+# Sample data
+data = [
+ ['Product', 'Q1', 'Q2', 'Q3', 'Q4'],
+ ['Widgets', '120', '135', '142', '158'],
+ ['Gadgets', '85', '92', '98', '105']
+]
+
+# Create PDF with table
+doc = SimpleDocTemplate("report.pdf")
+elements = []
+
+# Add title
+styles = getSampleStyleSheet()
+title = Paragraph("Quarterly Sales Report", styles['Title'])
+elements.append(title)
+
+# Add table with advanced styling
+table = Table(data)
+table.setStyle(TableStyle([
+ ('BACKGROUND', (0, 0), (-1, 0), colors.grey),
+ ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
+ ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
+ ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
+ ('FONTSIZE', (0, 0), (-1, 0), 14),
+ ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
+ ('BACKGROUND', (0, 1), (-1, -1), colors.beige),
+ ('GRID', (0, 0), (-1, -1), 1, colors.black)
+]))
+elements.append(table)
+
+doc.build(elements)
+```
+
+## Complex Workflows
+
+### Extract Figures/Images from PDF
+
+#### Method 1: Using pdfimages (fastest)
+```bash
+# Extract all images with original quality
+pdfimages -all document.pdf images/img
+```
+
+#### Method 2: Using pypdfium2 + Image Processing
+```python
+import pypdfium2 as pdfium
+from PIL import Image
+import numpy as np
+
+def extract_figures(pdf_path, output_dir):
+ pdf = pdfium.PdfDocument(pdf_path)
+
+ for page_num, page in enumerate(pdf):
+ # Render high-resolution page
+ bitmap = page.render(scale=3.0)
+        img = bitmap.to_pil().convert("RGB")  # ensure 3 channels for the mask below
+
+ # Convert to numpy for processing
+ img_array = np.array(img)
+
+ # Simple figure detection (non-white regions)
+ mask = np.any(img_array != [255, 255, 255], axis=2)
+
+ # Find contours and extract bounding boxes
+ # (This is simplified - real implementation would need more sophisticated detection)
+
+ # Save detected figures
+ # ... implementation depends on specific needs
+```
+
+### Batch PDF Processing with Error Handling
+```python
+import os
+import glob
+from pypdf import PdfReader, PdfWriter
+import logging
+
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+def batch_process_pdfs(input_dir, operation='merge'):
+ pdf_files = glob.glob(os.path.join(input_dir, "*.pdf"))
+
+ if operation == 'merge':
+ writer = PdfWriter()
+ for pdf_file in pdf_files:
+ try:
+ reader = PdfReader(pdf_file)
+ for page in reader.pages:
+ writer.add_page(page)
+ logger.info(f"Processed: {pdf_file}")
+ except Exception as e:
+ logger.error(f"Failed to process {pdf_file}: {e}")
+ continue
+
+ with open("batch_merged.pdf", "wb") as output:
+ writer.write(output)
+
+ elif operation == 'extract_text':
+ for pdf_file in pdf_files:
+ try:
+ reader = PdfReader(pdf_file)
+ text = ""
+ for page in reader.pages:
+ text += page.extract_text()
+
+ output_file = pdf_file.replace('.pdf', '.txt')
+ with open(output_file, 'w', encoding='utf-8') as f:
+ f.write(text)
+ logger.info(f"Extracted text from: {pdf_file}")
+
+ except Exception as e:
+ logger.error(f"Failed to extract text from {pdf_file}: {e}")
+ continue
+```
+
+### Advanced PDF Cropping
+```python
+from pypdf import PdfWriter, PdfReader
+
+reader = PdfReader("input.pdf")
+writer = PdfWriter()
+
+# Crop page (left, bottom, right, top in points)
+page = reader.pages[0]
+page.mediabox.left = 50
+page.mediabox.bottom = 50
+page.mediabox.right = 550
+page.mediabox.top = 750
+
+writer.add_page(page)
+with open("cropped.pdf", "wb") as output:
+ writer.write(output)
+```
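+
+The crop coordinates above are in PDF points. When measurements come from inches or centimeters, a small conversion helper avoids unit mistakes (these helper names are our own, not part of pypdf):
+
+```python
+# PDF user space uses 72 points per inch
+POINTS_PER_INCH = 72.0
+CM_PER_INCH = 2.54
+
+def inches_to_points(inches: float) -> float:
+    return inches * POINTS_PER_INCH
+
+def cm_to_points(cm: float) -> float:
+    return cm / CM_PER_INCH * POINTS_PER_INCH
+
+# US Letter is 8.5 x 11 in, i.e. 612 x 792 points
+```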
+
+## Performance Optimization Tips
+
+### 1. For Large PDFs
+- Use streaming approaches instead of loading entire PDF in memory
+- Use `qpdf --split-pages` for splitting large files
+- Process pages individually with pypdfium2
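+
+One way to implement the streaming approach is to compute page ranges up front and process one range at a time; a minimal sketch (the helper name is our own):
+
+```python
+def page_chunks(total_pages: int, chunk_size: int = 10):
+    """Yield half-open (start, end) page ranges for chunked processing."""
+    for start in range(0, total_pages, chunk_size):
+        yield start, min(start + chunk_size, total_pages)
+```
+
+Each range can then be handed to pypdf or pypdfium2 so that only one chunk of pages is in memory at a time (see the Memory Management snippet below).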
+
+### 2. For Text Extraction
+- `pdftotext` (poppler-utils) is the fastest option for plain text; add `-layout` to preserve the page layout
+- Use pdfplumber for structured data and tables
+- Avoid pypdf's per-page `extract_text()` for very large documents (it is noticeably slower)
+
+### 3. For Image Extraction
+- `pdfimages` is much faster than rendering pages
+- Use low resolution for previews, high resolution for final output
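+
+When rendering is required, pypdfium2's `scale` argument controls resolution: `scale=1.0` corresponds to 72 dpi. A tiny helper (the name is ours) makes the preview/final distinction explicit:
+
+```python
+def dpi_to_scale(dpi: float) -> float:
+    """pdfium renders at 72 dpi when scale=1.0, so scale = dpi / 72."""
+    return dpi / 72.0
+
+# page.render(scale=dpi_to_scale(96))   # quick preview
+# page.render(scale=dpi_to_scale(300))  # print-quality output
+```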
+
+### 4. For Form Filling
+- pdf-lib maintains form structure better than most alternatives
+- Pre-validate form fields before processing
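+
+On the Python side, pre-validation can be as simple as listing the available fields with pypdf before attempting a fill (the function name below is illustrative):
+
+```python
+from pypdf import PdfReader
+
+def list_form_fields(pdf_source):
+    """Return sorted field names, or [] if the PDF has no fillable form."""
+    fields = PdfReader(pdf_source).get_fields()
+    return sorted(fields) if fields else []
+```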
+
+### 5. Memory Management
+```python
+# Process PDFs in chunks
+def process_large_pdf(pdf_path, chunk_size=10):
+ reader = PdfReader(pdf_path)
+ total_pages = len(reader.pages)
+
+ for start_idx in range(0, total_pages, chunk_size):
+ end_idx = min(start_idx + chunk_size, total_pages)
+ writer = PdfWriter()
+
+ for i in range(start_idx, end_idx):
+ writer.add_page(reader.pages[i])
+
+ # Process chunk
+ with open(f"chunk_{start_idx//chunk_size}.pdf", "wb") as output:
+ writer.write(output)
+```
+
+## Troubleshooting Common Issues
+
+### Encrypted PDFs
+```python
+# Handle password-protected PDFs
+from pypdf import PdfReader
+
+try:
+ reader = PdfReader("encrypted.pdf")
+ if reader.is_encrypted:
+ reader.decrypt("password")
+except Exception as e:
+ print(f"Failed to decrypt: {e}")
+```
+
+### Corrupted PDFs
+```bash
+# Use qpdf to repair
+qpdf --check corrupted.pdf
+qpdf --replace-input corrupted.pdf  # rewrites the file in place
+```
+
+### Text Extraction Issues
+```python
+# Fallback to OCR for scanned PDFs
+import pytesseract
+from pdf2image import convert_from_path
+
+def extract_text_with_ocr(pdf_path):
+    images = convert_from_path(pdf_path)
+    text = ""
+    for image in images:
+        text += pytesseract.image_to_string(image)
+    return text
+```
+
+## License Information
+
+- **pypdf**: BSD License
+- **pdfplumber**: MIT License
+- **pypdfium2**: Apache/BSD License
+- **reportlab**: BSD License
+- **poppler-utils**: GPL-2 License
+- **qpdf**: Apache License
+- **pdf-lib**: MIT License
+- **pdfjs-dist**: Apache License
\ No newline at end of file
diff --git a/skills/pdf/scripts/check_bounding_boxes.py b/skills/pdf/scripts/check_bounding_boxes.py
new file mode 100644
index 0000000..2cc5e34
--- /dev/null
+++ b/skills/pdf/scripts/check_bounding_boxes.py
@@ -0,0 +1,65 @@
+from dataclasses import dataclass
+import json
+import sys
+
+
+
+
+@dataclass
+class RectAndField:
+ rect: list[float]
+ rect_type: str
+ field: dict
+
+
+def get_bounding_box_messages(fields_json_stream) -> list[str]:
+ messages = []
+ fields = json.load(fields_json_stream)
+ messages.append(f"Read {len(fields['form_fields'])} fields")
+
+ def rects_intersect(r1, r2):
+ disjoint_horizontal = r1[0] >= r2[2] or r1[2] <= r2[0]
+ disjoint_vertical = r1[1] >= r2[3] or r1[3] <= r2[1]
+ return not (disjoint_horizontal or disjoint_vertical)
+
+ rects_and_fields = []
+ for f in fields["form_fields"]:
+ rects_and_fields.append(RectAndField(f["label_bounding_box"], "label", f))
+ rects_and_fields.append(RectAndField(f["entry_bounding_box"], "entry", f))
+
+ has_error = False
+ for i, ri in enumerate(rects_and_fields):
+ for j in range(i + 1, len(rects_and_fields)):
+ rj = rects_and_fields[j]
+ if ri.field["page_number"] == rj.field["page_number"] and rects_intersect(ri.rect, rj.rect):
+ has_error = True
+ if ri.field is rj.field:
+ messages.append(f"FAILURE: intersection between label and entry bounding boxes for `{ri.field['description']}` ({ri.rect}, {rj.rect})")
+ else:
+ messages.append(f"FAILURE: intersection between {ri.rect_type} bounding box for `{ri.field['description']}` ({ri.rect}) and {rj.rect_type} bounding box for `{rj.field['description']}` ({rj.rect})")
+ if len(messages) >= 20:
+ messages.append("Aborting further checks; fix bounding boxes and try again")
+ return messages
+ if ri.rect_type == "entry":
+ if "entry_text" in ri.field:
+ font_size = ri.field["entry_text"].get("font_size", 14)
+ entry_height = ri.rect[3] - ri.rect[1]
+ if entry_height < font_size:
+ has_error = True
+ messages.append(f"FAILURE: entry bounding box height ({entry_height}) for `{ri.field['description']}` is too short for the text content (font size: {font_size}). Increase the box height or decrease the font size.")
+ if len(messages) >= 20:
+ messages.append("Aborting further checks; fix bounding boxes and try again")
+ return messages
+
+ if not has_error:
+ messages.append("SUCCESS: All bounding boxes are valid")
+ return messages
+
+if __name__ == "__main__":
+ if len(sys.argv) != 2:
+ print("Usage: check_bounding_boxes.py [fields.json]")
+ sys.exit(1)
+ with open(sys.argv[1]) as f:
+ messages = get_bounding_box_messages(f)
+ for msg in messages:
+ print(msg)
diff --git a/skills/pdf/scripts/check_fillable_fields.py b/skills/pdf/scripts/check_fillable_fields.py
new file mode 100644
index 0000000..36dfb95
--- /dev/null
+++ b/skills/pdf/scripts/check_fillable_fields.py
@@ -0,0 +1,11 @@
+import sys
+from pypdf import PdfReader
+
+
+
+
+reader = PdfReader(sys.argv[1])
+if reader.get_fields():
+ print("This PDF has fillable form fields")
+else:
+ print("This PDF does not have fillable form fields; you will need to visually determine where to enter data")
diff --git a/skills/pdf/scripts/convert_pdf_to_images.py b/skills/pdf/scripts/convert_pdf_to_images.py
new file mode 100644
index 0000000..7939cef
--- /dev/null
+++ b/skills/pdf/scripts/convert_pdf_to_images.py
@@ -0,0 +1,33 @@
+import os
+import sys
+
+from pdf2image import convert_from_path
+
+
+
+
+def convert(pdf_path, output_dir, max_dim=1000):
+ images = convert_from_path(pdf_path, dpi=200)
+
+ for i, image in enumerate(images):
+ width, height = image.size
+ if width > max_dim or height > max_dim:
+ scale_factor = min(max_dim / width, max_dim / height)
+ new_width = int(width * scale_factor)
+ new_height = int(height * scale_factor)
+ image = image.resize((new_width, new_height))
+
+ image_path = os.path.join(output_dir, f"page_{i+1}.png")
+ image.save(image_path)
+ print(f"Saved page {i+1} as {image_path} (size: {image.size})")
+
+ print(f"Converted {len(images)} pages to PNG images")
+
+
+if __name__ == "__main__":
+ if len(sys.argv) != 3:
+ print("Usage: convert_pdf_to_images.py [input pdf] [output directory]")
+ sys.exit(1)
+ pdf_path = sys.argv[1]
+ output_directory = sys.argv[2]
+ convert(pdf_path, output_directory)
diff --git a/skills/pdf/scripts/create_validation_image.py b/skills/pdf/scripts/create_validation_image.py
new file mode 100644
index 0000000..10eadd8
--- /dev/null
+++ b/skills/pdf/scripts/create_validation_image.py
@@ -0,0 +1,37 @@
+import json
+import sys
+
+from PIL import Image, ImageDraw
+
+
+
+
+def create_validation_image(page_number, fields_json_path, input_path, output_path):
+ with open(fields_json_path, 'r') as f:
+ data = json.load(f)
+
+ img = Image.open(input_path)
+ draw = ImageDraw.Draw(img)
+ num_boxes = 0
+
+ for field in data["form_fields"]:
+ if field["page_number"] == page_number:
+ entry_box = field['entry_bounding_box']
+ label_box = field['label_bounding_box']
+ draw.rectangle(entry_box, outline='red', width=2)
+ draw.rectangle(label_box, outline='blue', width=2)
+ num_boxes += 2
+
+ img.save(output_path)
+ print(f"Created validation image at {output_path} with {num_boxes} bounding boxes")
+
+
+if __name__ == "__main__":
+ if len(sys.argv) != 5:
+ print("Usage: create_validation_image.py [page number] [fields.json file] [input image path] [output image path]")
+ sys.exit(1)
+ page_number = int(sys.argv[1])
+ fields_json_path = sys.argv[2]
+ input_image_path = sys.argv[3]
+ output_image_path = sys.argv[4]
+ create_validation_image(page_number, fields_json_path, input_image_path, output_image_path)
diff --git a/skills/pdf/scripts/extract_form_field_info.py b/skills/pdf/scripts/extract_form_field_info.py
new file mode 100644
index 0000000..64cd470
--- /dev/null
+++ b/skills/pdf/scripts/extract_form_field_info.py
@@ -0,0 +1,122 @@
+import json
+import sys
+
+from pypdf import PdfReader
+
+
+
+
+def get_full_annotation_field_id(annotation):
+ components = []
+ while annotation:
+ field_name = annotation.get('/T')
+ if field_name:
+ components.append(field_name)
+ annotation = annotation.get('/Parent')
+ return ".".join(reversed(components)) if components else None
+
+
+def make_field_dict(field, field_id):
+ field_dict = {"field_id": field_id}
+ ft = field.get('/FT')
+ if ft == "/Tx":
+ field_dict["type"] = "text"
+ elif ft == "/Btn":
+ field_dict["type"] = "checkbox"
+ states = field.get("/_States_", [])
+ if len(states) == 2:
+ if "/Off" in states:
+ field_dict["checked_value"] = states[0] if states[0] != "/Off" else states[1]
+ field_dict["unchecked_value"] = "/Off"
+ else:
+            print(f"Unexpected state values for checkbox `{field_id}`. Its checked and unchecked values may not be correct; if you're trying to check it, visually verify the results.")
+ field_dict["checked_value"] = states[0]
+ field_dict["unchecked_value"] = states[1]
+ elif ft == "/Ch":
+ field_dict["type"] = "choice"
+ states = field.get("/_States_", [])
+ field_dict["choice_options"] = [{
+ "value": state[0],
+ "text": state[1],
+ } for state in states]
+ else:
+ field_dict["type"] = f"unknown ({ft})"
+ return field_dict
+
+
+def get_field_info(reader: PdfReader):
+ fields = reader.get_fields()
+
+ field_info_by_id = {}
+ possible_radio_names = set()
+
+ for field_id, field in fields.items():
+ if field.get("/Kids"):
+ if field.get("/FT") == "/Btn":
+ possible_radio_names.add(field_id)
+ continue
+ field_info_by_id[field_id] = make_field_dict(field, field_id)
+
+
+ radio_fields_by_id = {}
+
+ for page_index, page in enumerate(reader.pages):
+ annotations = page.get('/Annots', [])
+ for ann in annotations:
+ field_id = get_full_annotation_field_id(ann)
+ if field_id in field_info_by_id:
+ field_info_by_id[field_id]["page"] = page_index + 1
+ field_info_by_id[field_id]["rect"] = ann.get('/Rect')
+ elif field_id in possible_radio_names:
+ try:
+ on_values = [v for v in ann["/AP"]["/N"] if v != "/Off"]
+ except KeyError:
+ continue
+ if len(on_values) == 1:
+ rect = ann.get("/Rect")
+ if field_id not in radio_fields_by_id:
+ radio_fields_by_id[field_id] = {
+ "field_id": field_id,
+ "type": "radio_group",
+ "page": page_index + 1,
+ "radio_options": [],
+ }
+ radio_fields_by_id[field_id]["radio_options"].append({
+ "value": on_values[0],
+ "rect": rect,
+ })
+
+ fields_with_location = []
+ for field_info in field_info_by_id.values():
+ if "page" in field_info:
+ fields_with_location.append(field_info)
+ else:
+ print(f"Unable to determine location for field id: {field_info.get('field_id')}, ignoring")
+
+ def sort_key(f):
+ if "radio_options" in f:
+ rect = f["radio_options"][0]["rect"] or [0, 0, 0, 0]
+ else:
+ rect = f.get("rect") or [0, 0, 0, 0]
+ adjusted_position = [-rect[1], rect[0]]
+ return [f.get("page"), adjusted_position]
+
+ sorted_fields = fields_with_location + list(radio_fields_by_id.values())
+ sorted_fields.sort(key=sort_key)
+
+ return sorted_fields
+
+
+def write_field_info(pdf_path: str, json_output_path: str):
+ reader = PdfReader(pdf_path)
+ field_info = get_field_info(reader)
+ with open(json_output_path, "w") as f:
+ json.dump(field_info, f, indent=2)
+ print(f"Wrote {len(field_info)} fields to {json_output_path}")
+
+
+if __name__ == "__main__":
+ if len(sys.argv) != 3:
+ print("Usage: extract_form_field_info.py [input pdf] [output json]")
+ sys.exit(1)
+ write_field_info(sys.argv[1], sys.argv[2])
diff --git a/skills/pdf/scripts/extract_form_structure.py b/skills/pdf/scripts/extract_form_structure.py
new file mode 100755
index 0000000..f219e7d
--- /dev/null
+++ b/skills/pdf/scripts/extract_form_structure.py
@@ -0,0 +1,115 @@
+"""
+Extract form structure from a non-fillable PDF.
+
+This script analyzes the PDF to find:
+- Text labels with their exact coordinates
+- Horizontal lines (row boundaries)
+- Checkboxes (small rectangles)
+
+Output: A JSON file with the form structure that can be used to generate
+accurate field coordinates for filling.
+
+Usage: python extract_form_structure.py [input pdf] [output json]
+"""
+
+import json
+import sys
+import pdfplumber
+
+
+def extract_form_structure(pdf_path):
+ structure = {
+ "pages": [],
+ "labels": [],
+ "lines": [],
+ "checkboxes": [],
+ "row_boundaries": []
+ }
+
+ with pdfplumber.open(pdf_path) as pdf:
+ for page_num, page in enumerate(pdf.pages, 1):
+ structure["pages"].append({
+ "page_number": page_num,
+ "width": float(page.width),
+ "height": float(page.height)
+ })
+
+ words = page.extract_words()
+ for word in words:
+ structure["labels"].append({
+ "page": page_num,
+ "text": word["text"],
+ "x0": round(float(word["x0"]), 1),
+ "top": round(float(word["top"]), 1),
+ "x1": round(float(word["x1"]), 1),
+ "bottom": round(float(word["bottom"]), 1)
+ })
+
+ for line in page.lines:
+ if abs(float(line["x1"]) - float(line["x0"])) > page.width * 0.5:
+ structure["lines"].append({
+ "page": page_num,
+ "y": round(float(line["top"]), 1),
+ "x0": round(float(line["x0"]), 1),
+ "x1": round(float(line["x1"]), 1)
+ })
+
+ for rect in page.rects:
+ width = float(rect["x1"]) - float(rect["x0"])
+ height = float(rect["bottom"]) - float(rect["top"])
+ if 5 <= width <= 15 and 5 <= height <= 15 and abs(width - height) < 2:
+ structure["checkboxes"].append({
+ "page": page_num,
+ "x0": round(float(rect["x0"]), 1),
+ "top": round(float(rect["top"]), 1),
+ "x1": round(float(rect["x1"]), 1),
+ "bottom": round(float(rect["bottom"]), 1),
+ "center_x": round((float(rect["x0"]) + float(rect["x1"])) / 2, 1),
+ "center_y": round((float(rect["top"]) + float(rect["bottom"])) / 2, 1)
+ })
+
+ lines_by_page = {}
+ for line in structure["lines"]:
+ page = line["page"]
+ if page not in lines_by_page:
+ lines_by_page[page] = []
+ lines_by_page[page].append(line["y"])
+
+ for page, y_coords in lines_by_page.items():
+ y_coords = sorted(set(y_coords))
+ for i in range(len(y_coords) - 1):
+ structure["row_boundaries"].append({
+ "page": page,
+ "row_top": y_coords[i],
+ "row_bottom": y_coords[i + 1],
+ "row_height": round(y_coords[i + 1] - y_coords[i], 1)
+ })
+
+ return structure
+
+
+def main():
+ if len(sys.argv) != 3:
+        print("Usage: extract_form_structure.py [input pdf] [output json]")
+ sys.exit(1)
+
+ pdf_path = sys.argv[1]
+ output_path = sys.argv[2]
+
+ print(f"Extracting structure from {pdf_path}...")
+ structure = extract_form_structure(pdf_path)
+
+ with open(output_path, "w") as f:
+ json.dump(structure, f, indent=2)
+
+    print("Found:")
+ print(f" - {len(structure['pages'])} pages")
+ print(f" - {len(structure['labels'])} text labels")
+ print(f" - {len(structure['lines'])} horizontal lines")
+ print(f" - {len(structure['checkboxes'])} checkboxes")
+ print(f" - {len(structure['row_boundaries'])} row boundaries")
+ print(f"Saved to {output_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/skills/pdf/scripts/fill_fillable_fields.py b/skills/pdf/scripts/fill_fillable_fields.py
new file mode 100644
index 0000000..51c2600
--- /dev/null
+++ b/skills/pdf/scripts/fill_fillable_fields.py
@@ -0,0 +1,98 @@
+import json
+import sys
+
+from pypdf import PdfReader, PdfWriter
+
+from extract_form_field_info import get_field_info
+
+
+
+
+def fill_pdf_fields(input_pdf_path: str, fields_json_path: str, output_pdf_path: str):
+ with open(fields_json_path) as f:
+ fields = json.load(f)
+ fields_by_page = {}
+ for field in fields:
+ if "value" in field:
+ field_id = field["field_id"]
+ page = field["page"]
+ if page not in fields_by_page:
+ fields_by_page[page] = {}
+ fields_by_page[page][field_id] = field["value"]
+
+ reader = PdfReader(input_pdf_path)
+
+ has_error = False
+ field_info = get_field_info(reader)
+ fields_by_ids = {f["field_id"]: f for f in field_info}
+ for field in fields:
+ existing_field = fields_by_ids.get(field["field_id"])
+ if not existing_field:
+ has_error = True
+ print(f"ERROR: `{field['field_id']}` is not a valid field ID")
+ elif field["page"] != existing_field["page"]:
+ has_error = True
+ print(f"ERROR: Incorrect page number for `{field['field_id']}` (got {field['page']}, expected {existing_field['page']})")
+ else:
+ if "value" in field:
+ err = validation_error_for_field_value(existing_field, field["value"])
+ if err:
+ print(err)
+ has_error = True
+ if has_error:
+ sys.exit(1)
+
+ writer = PdfWriter(clone_from=reader)
+ for page, field_values in fields_by_page.items():
+ writer.update_page_form_field_values(writer.pages[page - 1], field_values, auto_regenerate=False)
+
+ writer.set_need_appearances_writer(True)
+
+ with open(output_pdf_path, "wb") as f:
+ writer.write(f)
+
+
+def validation_error_for_field_value(field_info, field_value):
+ field_type = field_info["type"]
+ field_id = field_info["field_id"]
+ if field_type == "checkbox":
+ checked_val = field_info["checked_value"]
+ unchecked_val = field_info["unchecked_value"]
+ if field_value != checked_val and field_value != unchecked_val:
+ return f'ERROR: Invalid value "{field_value}" for checkbox field "{field_id}". The checked value is "{checked_val}" and the unchecked value is "{unchecked_val}"'
+ elif field_type == "radio_group":
+ option_values = [opt["value"] for opt in field_info["radio_options"]]
+ if field_value not in option_values:
+ return f'ERROR: Invalid value "{field_value}" for radio group field "{field_id}". Valid values are: {option_values}'
+ elif field_type == "choice":
+ choice_values = [opt["value"] for opt in field_info["choice_options"]]
+ if field_value not in choice_values:
+ return f'ERROR: Invalid value "{field_value}" for choice field "{field_id}". Valid values are: {choice_values}'
+ return None
+
+
+def monkeypatch_pydpf_method():
+ from pypdf.generic import DictionaryObject
+ from pypdf.constants import FieldDictionaryAttributes
+
+ original_get_inherited = DictionaryObject.get_inherited
+
+ def patched_get_inherited(self, key: str, default = None):
+ result = original_get_inherited(self, key, default)
+ if key == FieldDictionaryAttributes.Opt:
+ if isinstance(result, list) and all(isinstance(v, list) and len(v) == 2 for v in result):
+ result = [r[0] for r in result]
+ return result
+
+ DictionaryObject.get_inherited = patched_get_inherited
+
+
+if __name__ == "__main__":
+ if len(sys.argv) != 4:
+ print("Usage: fill_fillable_fields.py [input pdf] [field_values.json] [output pdf]")
+ sys.exit(1)
+ monkeypatch_pydpf_method()
+ input_pdf = sys.argv[1]
+ fields_json = sys.argv[2]
+ output_pdf = sys.argv[3]
+ fill_pdf_fields(input_pdf, fields_json, output_pdf)
diff --git a/skills/pdf/scripts/fill_pdf_form_with_annotations.py b/skills/pdf/scripts/fill_pdf_form_with_annotations.py
new file mode 100644
index 0000000..b430069
--- /dev/null
+++ b/skills/pdf/scripts/fill_pdf_form_with_annotations.py
@@ -0,0 +1,107 @@
+import json
+import sys
+
+from pypdf import PdfReader, PdfWriter
+from pypdf.annotations import FreeText
+
+
+
+
+def transform_from_image_coords(bbox, image_width, image_height, pdf_width, pdf_height):
+ x_scale = pdf_width / image_width
+ y_scale = pdf_height / image_height
+
+ left = bbox[0] * x_scale
+ right = bbox[2] * x_scale
+
+ top = pdf_height - (bbox[1] * y_scale)
+ bottom = pdf_height - (bbox[3] * y_scale)
+
+ return left, bottom, right, top
+
+
+def transform_from_pdf_coords(bbox, pdf_height):
+ left = bbox[0]
+ right = bbox[2]
+
+ pypdf_top = pdf_height - bbox[1]
+ pypdf_bottom = pdf_height - bbox[3]
+
+ return left, pypdf_bottom, right, pypdf_top
+
+
+def fill_pdf_form(input_pdf_path, fields_json_path, output_pdf_path):
+
+ with open(fields_json_path, "r") as f:
+ fields_data = json.load(f)
+
+ reader = PdfReader(input_pdf_path)
+ writer = PdfWriter()
+
+ writer.append(reader)
+
+ pdf_dimensions = {}
+ for i, page in enumerate(reader.pages):
+ mediabox = page.mediabox
+ pdf_dimensions[i + 1] = [mediabox.width, mediabox.height]
+
+ annotations = []
+ for field in fields_data["form_fields"]:
+ page_num = field["page_number"]
+
+ page_info = next(p for p in fields_data["pages"] if p["page_number"] == page_num)
+ pdf_width, pdf_height = pdf_dimensions[page_num]
+
+ if "pdf_width" in page_info:
+ transformed_entry_box = transform_from_pdf_coords(
+ field["entry_bounding_box"],
+ float(pdf_height)
+ )
+ else:
+ image_width = page_info["image_width"]
+ image_height = page_info["image_height"]
+ transformed_entry_box = transform_from_image_coords(
+ field["entry_bounding_box"],
+ image_width, image_height,
+ float(pdf_width), float(pdf_height)
+ )
+
+ if "entry_text" not in field or "text" not in field["entry_text"]:
+ continue
+ entry_text = field["entry_text"]
+ text = entry_text["text"]
+ if not text:
+ continue
+
+ font_name = entry_text.get("font", "Arial")
+ font_size = str(entry_text.get("font_size", 14)) + "pt"
+ font_color = entry_text.get("font_color", "000000")
+
+ annotation = FreeText(
+ text=text,
+ rect=transformed_entry_box,
+ font=font_name,
+ font_size=font_size,
+ font_color=font_color,
+ border_color=None,
+ background_color=None,
+ )
+ annotations.append(annotation)
+ writer.add_annotation(page_number=page_num - 1, annotation=annotation)
+
+ with open(output_pdf_path, "wb") as output:
+ writer.write(output)
+
+ print(f"Successfully filled PDF form and saved to {output_pdf_path}")
+ print(f"Added {len(annotations)} text annotations")
+
+
+if __name__ == "__main__":
+ if len(sys.argv) != 4:
+ print("Usage: fill_pdf_form_with_annotations.py [input pdf] [fields.json] [output pdf]")
+ sys.exit(1)
+ input_pdf = sys.argv[1]
+ fields_json = sys.argv[2]
+ output_pdf = sys.argv[3]
+
+ fill_pdf_form(input_pdf, fields_json, output_pdf)
diff --git a/skills/pinia/GENERATION.md b/skills/pinia/GENERATION.md
new file mode 100644
index 0000000..a2c0f71
--- /dev/null
+++ b/skills/pinia/GENERATION.md
@@ -0,0 +1,5 @@
+# Generation Info
+
+- **Source:** `sources/pinia`
+- **Git SHA:** `55dbfc5c20d4461748996aa74d8c0913e89fb98e`
+- **Generated:** 2026-01-28
diff --git a/skills/pinia/SKILL.md b/skills/pinia/SKILL.md
new file mode 100644
index 0000000..89cfa7d
--- /dev/null
+++ b/skills/pinia/SKILL.md
@@ -0,0 +1,59 @@
+---
+name: pinia
+description: Pinia, the official Vue state management library: type-safe and extensible. Use when defining stores, working with state/getters/actions, or implementing store patterns in Vue apps.
+metadata:
+ author: Anthony Fu
+ version: "2026.1.28"
+ source: Generated from https://github.com/vuejs/pinia, scripts located at https://github.com/antfu/skills
+---
+
+# Pinia
+
+Pinia is the official state management library for Vue, designed to be intuitive and type-safe. It supports both Options API and Composition API styles, with first-class TypeScript support and devtools integration.
+
+> This skill is based on Pinia v3.0.4, generated on 2026-01-28.
+
+## Core References
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Stores | Defining stores, state, getters, actions, storeToRefs, subscriptions | [core-stores](references/core-stores.md) |
+
+## Features
+
+### Extensibility
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Plugins | Extend stores with custom properties, state, and behavior | [features-plugins](references/features-plugins.md) |
+
+### Composability
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Composables | Using Vue composables within stores (VueUse, etc.) | [features-composables](references/features-composables.md) |
+| Composing Stores | Store-to-store communication, avoiding circular dependencies | [features-composing-stores](references/features-composing-stores.md) |
+
+## Best Practices
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Testing | Unit testing with @pinia/testing, mocking, stubbing | [best-practices-testing](references/best-practices-testing.md) |
+| Outside Components | Using stores in navigation guards, plugins, middlewares | [best-practices-outside-component](references/best-practices-outside-component.md) |
+
+## Advanced
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| SSR | Server-side rendering, state hydration | [advanced-ssr](references/advanced-ssr.md) |
+| Nuxt | Nuxt integration, auto-imports, SSR best practices | [advanced-nuxt](references/advanced-nuxt.md) |
+| HMR | Hot module replacement for development | [advanced-hmr](references/advanced-hmr.md) |
+
+## Key Recommendations
+
+- **Prefer Setup Stores** for complex logic, composables, and watchers
+- **Use `storeToRefs()`** when destructuring state/getters to preserve reactivity
+- **Actions can be destructured directly** - they're bound to the store
+- **Call stores inside functions** not at module scope, especially for SSR
+- **Add HMR support** to each store for better development experience
+- **Use `@pinia/testing`** for component tests with mocked stores
diff --git a/skills/pinia/references/advanced-hmr.md b/skills/pinia/references/advanced-hmr.md
new file mode 100644
index 0000000..3eef5c7
--- /dev/null
+++ b/skills/pinia/references/advanced-hmr.md
@@ -0,0 +1,61 @@
+---
+name: hot-module-replacement
+description: Enable HMR to preserve store state during development
+---
+
+# Hot Module Replacement (HMR)
+
+Pinia supports HMR to edit stores without page reload, preserving existing state.
+
+## Setup
+
+Add this snippet after each store definition:
+
+```ts
+import { defineStore, acceptHMRUpdate } from 'pinia'
+
+export const useAuth = defineStore('auth', {
+ // store options...
+})
+
+if (import.meta.hot) {
+ import.meta.hot.accept(acceptHMRUpdate(useAuth, import.meta.hot))
+}
+```
+
+## Setup Store Example
+
+```ts
+import { defineStore, acceptHMRUpdate } from 'pinia'
+
+export const useCounterStore = defineStore('counter', () => {
+ const count = ref(0)
+ const increment = () => count.value++
+ return { count, increment }
+})
+
+if (import.meta.hot) {
+ import.meta.hot.accept(acceptHMRUpdate(useCounterStore, import.meta.hot))
+}
+```
+
+## Bundler Support
+
+- **Vite:** Officially supported via `import.meta.hot`
+- **Webpack:** Uses `import.meta.webpackHot`
+- Any bundler implementing the `import.meta.hot` spec should work
+
+## Nuxt
+
+With `@pinia/nuxt`, `acceptHMRUpdate` is auto-imported but you still need to add the HMR snippet manually.
+
+## Benefits
+
+- Edit store logic without losing state
+- Add/remove state, actions, and getters on the fly
+- Faster development iteration
+
+
diff --git a/skills/pinia/references/advanced-nuxt.md b/skills/pinia/references/advanced-nuxt.md
new file mode 100644
index 0000000..569da63
--- /dev/null
+++ b/skills/pinia/references/advanced-nuxt.md
@@ -0,0 +1,119 @@
+---
+name: nuxt-integration
+description: Using Pinia with Nuxt - auto-imports, SSR, and best practices
+---
+
+# Nuxt Integration
+
+Pinia works seamlessly with Nuxt 3/4, handling SSR, serialization, and XSS protection automatically.
+
+## Installation
+
+```bash
+npx nuxi@latest module add pinia
+```
+
+This installs both `@pinia/nuxt` and `pinia`. If `pinia` isn't installed, add it manually.
+
+> **npm users:** If you get `ERESOLVE unable to resolve dependency tree`, add to `package.json`:
+> ```json
+> "overrides": { "vue": "latest" }
+> ```
+
+## Configuration
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: ['@pinia/nuxt'],
+})
+```
+
+## Auto Imports
+
+These are automatically available:
+- `usePinia()` - get pinia instance
+- `defineStore()` - define stores
+- `storeToRefs()` - extract reactive refs
+- `acceptHMRUpdate()` - HMR support
+
+**All stores in `app/stores/` (Nuxt 4) or `stores/` are auto-imported.**
+
+### Custom Store Directories
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: ['@pinia/nuxt'],
+ pinia: {
+ storesDirs: ['./stores/**', './custom-folder/stores/**'],
+ },
+})
+```
+
+## Fetching Data in Pages
+
+Use `callOnce()` for SSR-friendly data fetching:
+
+```vue
+<script setup>
+const store = useStore()
+// the awaited data will already be present in the store on the client
+await callOnce('user', () => store.fetchUser())
+</script>
+```
+
+### Refetch on Navigation
+
+```vue
+<script setup>
+const store = useStore()
+// `mode: 'navigation'` re-runs the callback on every navigation (Nuxt 3.15+)
+await callOnce('user', () => store.fetchUser(), { mode: 'navigation' })
+</script>
+```
+
+## Using Stores Outside Components
+
+In navigation guards, middlewares, or other stores, pass the `pinia` instance:
+
+```ts
+// middleware/auth.ts
+export default defineNuxtRouteMiddleware((to) => {
+ const nuxtApp = useNuxtApp()
+ const store = useStore(nuxtApp.$pinia)
+
+ if (to.meta.requiresAuth && !store.isLoggedIn) {
+ return navigateTo('/login')
+ }
+})
+```
+
+Most of the time, you don't need this - just use stores in components or other injection-aware contexts.
+
+## Pinia Plugins with Nuxt
+
+Create a Nuxt plugin:
+
+```ts
+// plugins/myPiniaPlugin.ts
+import { PiniaPluginContext } from 'pinia'
+
+function MyPiniaPlugin({ store }: PiniaPluginContext) {
+ store.$subscribe((mutation) => {
+ console.log(`[🍍 ${mutation.storeId}]: ${mutation.type}`)
+ })
+ return { creationTime: new Date() }
+}
+
+export default defineNuxtPlugin(({ $pinia }) => {
+ $pinia.use(MyPiniaPlugin)
+})
+```
+
+
diff --git a/skills/pinia/references/advanced-ssr.md b/skills/pinia/references/advanced-ssr.md
new file mode 100644
index 0000000..2972f3a
--- /dev/null
+++ b/skills/pinia/references/advanced-ssr.md
@@ -0,0 +1,121 @@
+---
+name: server-side-rendering
+description: SSR setup, state hydration, and avoiding cross-request state pollution
+---
+
+# Server Side Rendering (SSR)
+
+Pinia works with SSR when stores are called at the top of `setup`, getters, or actions.
+
+> **Using Nuxt?** See the [Nuxt integration](advanced-nuxt.md) instead.
+
+## Basic Usage
+
+```vue
+<script setup>
+// this works because pinia knows which application is running inside `setup`
+const main = useMainStore()
+</script>
+```
+
+## Using Store Outside setup()
+
+Pass the `pinia` instance explicitly:
+
+```ts
+const pinia = createPinia()
+const app = createApp(App)
+app.use(router)
+app.use(pinia)
+
+router.beforeEach((to) => {
+ // ✅ Pass pinia for correct SSR context
+ const main = useMainStore(pinia)
+
+ if (to.meta.requiresAuth && !main.isLoggedIn) {
+ return '/login'
+ }
+})
+```
+
+## serverPrefetch()
+
+Access pinia via `this.$pinia`:
+
+```ts
+export default {
+ serverPrefetch() {
+ const store = useStore(this.$pinia)
+ return store.fetchData()
+ },
+}
+```
+
+## onServerPrefetch()
+
+Works normally:
+
+```vue
+<script setup>
+import { onServerPrefetch } from 'vue'
+
+const store = useStore()
+
+onServerPrefetch(async () => {
+  await store.fetchData()
+})
+</script>
+```
+
+## State Hydration
+
+Serialize state on server and hydrate on client.
+
+### Server Side
+
+Use [devalue](https://github.com/Rich-Harris/devalue) for XSS-safe serialization:
+
+```ts
+import devalue from 'devalue'
+import { createPinia } from 'pinia'
+
+const pinia = createPinia()
+const app = createApp(App)
+app.use(router)
+app.use(pinia)
+
+// After rendering, state is available
+const serializedState = devalue(pinia.state.value)
+// Inject into HTML as global variable
+```
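+
+The injection step can be sketched as follows; `serializedState` here is a hypothetical stand-in for devalue's output (a JavaScript expression string), and the escaped `<\/script>` keeps the embedded tag from closing the inline script early:
+
+```typescript
+// Hypothetical serialized state; in a real app this is devalue(pinia.state.value)
+const serializedState = '{cart:{items:[]}}'
+
+// Embed the state so the client can later read window.__pinia
+const stateScript = `<script>window.__pinia = ${serializedState}<\/script>`
+
+console.log(stateScript)
+```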
+
+### Client Side
+
+Hydrate before any `useStore()` call:
+
+```ts
+const pinia = createPinia()
+const app = createApp(App)
+app.use(pinia)
+
+// Hydrate from serialized state (e.g., from window.__pinia)
+if (typeof window !== 'undefined') {
+ pinia.state.value = JSON.parse(window.__pinia)
+}
+```
+
+## SSR Examples
+
+- [Vitesse template](https://github.com/antfu/vitesse/blob/main/src/modules/pinia.ts)
+- [vite-plugin-ssr](https://vite-plugin-ssr.com/pinia)
+
+## Key Points
+
+1. Call stores inside functions, not at module scope
+2. Pass `pinia` instance when using stores outside components in SSR
+3. Hydrate state before calling any `useStore()`
+4. Use `devalue` or similar for safe serialization
+5. Avoid cross-request state pollution by creating fresh pinia per request
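+
+Point 5 follows from plain module semantics: module-level state is shared by every request a Node process serves, which is why the app and pinia must be created inside a per-request factory. A dependency-free sketch (names hypothetical):
+
+```typescript
+// ❌ module-level singleton: every "request" sees the same object
+const sharedState: { user: string } = { user: '' }
+
+// ✅ per-request factory: each call returns fresh state,
+// mirroring createPinia() inside a per-request createApp()
+function createRequestState(): { user: string } {
+  return { user: '' }
+}
+
+// Simulate two sequential requests against each pattern
+sharedState.user = 'alice'        // request 1 writes
+const leaked = sharedState.user   // request 2 still sees 'alice'
+
+const req1 = createRequestState()
+req1.user = 'alice'
+const req2 = createRequestState() // fresh state
+const clean = req2.user           // '' (no pollution)
+
+console.log({ leaked, clean })
+```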
+
+
diff --git a/skills/pinia/references/best-practices-outside-component.md b/skills/pinia/references/best-practices-outside-component.md
new file mode 100644
index 0000000..126f7a6
--- /dev/null
+++ b/skills/pinia/references/best-practices-outside-component.md
@@ -0,0 +1,115 @@
+---
+name: using-stores-outside-components
+description: Correctly using stores in navigation guards, plugins, and other non-component contexts
+---
+
+# Using Stores Outside Components
+
+Stores need the `pinia` instance, which is automatically injected in components. Outside components, you may need to provide it manually.
+
+## Single Page Applications
+
+Call stores **after** pinia is installed:
+
+```ts
+import { useUserStore } from '@/stores/user'
+import { createPinia } from 'pinia'
+import { createApp } from 'vue'
+import App from './App.vue'
+
+// ❌ Fails - pinia not created yet
+const userStore = useUserStore()
+
+const pinia = createPinia()
+const app = createApp(App)
+app.use(pinia)
+
+// ✅ Works - pinia is active
+const userStore = useUserStore()
+```
+
+## Navigation Guards
+
+**Wrong:** Call at module level
+
+```ts
+import { createRouter } from 'vue-router'
+const router = createRouter({ /* ... */ })
+
+// ❌ May fail depending on import order
+const store = useUserStore()
+
+router.beforeEach((to) => {
+ if (store.isLoggedIn) { /* ... */ }
+})
+```
+
+**Correct:** Call inside the guard
+
+```ts
+router.beforeEach((to) => {
+ // ✅ Called after pinia is installed
+ const store = useUserStore()
+
+ if (to.meta.requiresAuth && !store.isLoggedIn) {
+ return '/login'
+ }
+})
+```
+
+## SSR Applications
+
+Always pass the `pinia` instance to `useStore()`:
+
+```ts
+const pinia = createPinia()
+const app = createApp(App)
+app.use(router)
+app.use(pinia)
+
+router.beforeEach((to) => {
+ // ✅ Pass pinia instance
+ const main = useMainStore(pinia)
+
+ if (to.meta.requiresAuth && !main.isLoggedIn) {
+ return '/login'
+ }
+})
+```
+
+## serverPrefetch()
+
+Access pinia via `this.$pinia`:
+
+```ts
+export default {
+ serverPrefetch() {
+ const store = useStore(this.$pinia)
+ return store.fetchData()
+ },
+}
+```
+
+## onServerPrefetch()
+
+Works normally in `<script setup>`:
+
+```vue
+<script setup>
+import { onServerPrefetch } from 'vue'
+
+const store = useStore()
+
+onServerPrefetch(async () => {
+  await store.fetchData()
+})
+</script>
+```
+
+## Key Takeaway
+
+Defer `useStore()` calls to functions that run after pinia is installed, rather than calling at module scope.
+
+
diff --git a/skills/pinia/references/best-practices-testing.md b/skills/pinia/references/best-practices-testing.md
new file mode 100644
index 0000000..7227cd4
--- /dev/null
+++ b/skills/pinia/references/best-practices-testing.md
@@ -0,0 +1,212 @@
+---
+name: testing
+description: Unit testing stores and components with @pinia/testing
+---
+
+# Testing Stores
+
+## Unit Testing Stores
+
+Create a fresh pinia instance for each test:
+
+```ts
+import { setActivePinia, createPinia } from 'pinia'
+import { useCounterStore } from '../src/stores/counter'
+
+describe('Counter Store', () => {
+ beforeEach(() => {
+ setActivePinia(createPinia())
+ })
+
+ it('increments', () => {
+ const counter = useCounterStore()
+ expect(counter.n).toBe(0)
+ counter.increment()
+ expect(counter.n).toBe(1)
+ })
+})
+```
+
+### With Plugins
+
+```ts
+import { setActivePinia, createPinia } from 'pinia'
+import { createApp } from 'vue'
+import { somePlugin } from '../src/stores/plugin'
+
+const app = createApp({})
+
+beforeEach(() => {
+ const pinia = createPinia().use(somePlugin)
+ app.use(pinia)
+ setActivePinia(pinia)
+})
+```
+
+## Testing Components
+
+Install `@pinia/testing`:
+
+```bash
+npm i -D @pinia/testing
+```
+
+Use `createTestingPinia()`:
+
+```ts
+import { mount } from '@vue/test-utils'
+import { createTestingPinia } from '@pinia/testing'
+import { useSomeStore } from '@/stores/myStore'
+
+const wrapper = mount(Counter, {
+ global: {
+ plugins: [createTestingPinia()],
+ },
+})
+
+const store = useSomeStore()
+
+// Manipulate state directly
+store.name = 'new name'
+store.$patch({ name: 'new name' })
+
+// Actions are stubbed by default
+store.someAction()
+expect(store.someAction).toHaveBeenCalledTimes(1)
+```
+
+## Initial State
+
+Set initial state for tests:
+
+```ts
+const wrapper = mount(Counter, {
+ global: {
+ plugins: [
+ createTestingPinia({
+ initialState: {
+ counter: { n: 20 }, // Store name → initial state
+ },
+ }),
+ ],
+ },
+})
+```
+
+## Action Stubbing
+
+### Execute Real Actions
+
+```ts
+createTestingPinia({ stubActions: false })
+```
+
+### Selective Stubbing
+
+```ts
+// Only stub specific actions
+createTestingPinia({
+ stubActions: ['increment', 'reset'],
+})
+
+// Or use a function
+createTestingPinia({
+ stubActions: (actionName, store) => {
+ if (actionName.startsWith('set')) return true
+ return false
+ },
+})
+```
+
+### Mock Action Return Values
+
+```ts
+import type { Mock } from 'vitest'
+
+// After getting store
+store.someAction.mockResolvedValue('mocked value')
+```
+
+## Mocking Getters
+
+Getters are writable in tests:
+
+```ts
+const pinia = createTestingPinia()
+const counter = useCounterStore(pinia)
+
+counter.double = 3 // Override computed value
+
+// Reset to default behavior
+counter.double = undefined
+counter.double // Now computed normally
+```
+
+## Custom Spy Function
+
+If not using Jest/Vitest with globals:
+
+```ts
+import { vi } from 'vitest'
+
+createTestingPinia({
+ createSpy: vi.fn,
+})
+```
+
+With Sinon:
+
+```ts
+import sinon from 'sinon'
+
+createTestingPinia({
+ createSpy: sinon.spy,
+})
+```
+
+## Pinia Plugins in Tests
+
+Pass plugins to `createTestingPinia()`:
+
+```ts
+import { somePlugin } from '../src/stores/plugin'
+
+createTestingPinia({
+ stubActions: false,
+ plugins: [somePlugin],
+})
+```
+
+**Don't use** `testingPinia.use(MyPlugin)`; pass plugins via the `plugins` option instead.
+
+## Type-Safe Mocked Store
+
+```ts
+import type { Mock } from 'vitest'
+import type { Store, StoreDefinition } from 'pinia'
+
+function mockedStore<TStoreDef extends () => unknown>(
+  useStore: TStoreDef
+): TStoreDef extends StoreDefinition<infer Id, infer State, infer Getters, infer Actions>
+  ? Store<
+      Id,
+      State,
+      Getters,
+      {
+        [K in keyof Actions]: Actions[K] extends (...args: infer Args) => infer ReturnT
+          ? Mock<(...args: Args) => ReturnT>
+          : Actions[K]
+      }
+    >
+  : ReturnType<TStoreDef> {
+  return useStore() as any
+}
+
+// Usage
+const store = mockedStore(useSomeStore)
+store.someAction.mockResolvedValue('value') // Typed!
+```
+
+## E2E Tests
+
+No special handling is needed; Pinia works normally.
+
+
diff --git a/skills/pinia/references/core-stores.md b/skills/pinia/references/core-stores.md
new file mode 100644
index 0000000..ea6a72b
--- /dev/null
+++ b/skills/pinia/references/core-stores.md
@@ -0,0 +1,389 @@
+---
+name: stores
+description: Defining stores, state, getters, and actions in Pinia
+---
+
+# Pinia Stores
+
+Stores are defined using `defineStore()` with a unique name. Each store has three core concepts: **state**, **getters**, and **actions**.
+
+## Defining Stores
+
+### Option Stores
+
+Similar to Vue's Options API:
+
+```ts
+import { defineStore } from 'pinia'
+
+export const useCounterStore = defineStore('counter', {
+ state: () => ({
+ count: 0,
+ name: 'Eduardo',
+ }),
+ getters: {
+ doubleCount: (state) => state.count * 2,
+ },
+ actions: {
+ increment() {
+ this.count++
+ },
+ },
+})
+```
+
+Think of `state` as `data`, `getters` as `computed`, and `actions` as `methods`.
+
+### Setup Stores (Recommended)
+
+Uses Composition API syntax, which is more flexible and powerful:
+
+```ts
+import { ref, computed } from 'vue'
+import { defineStore } from 'pinia'
+
+export const useCounterStore = defineStore('counter', () => {
+ const count = ref(0)
+ const name = ref('Eduardo')
+ const doubleCount = computed(() => count.value * 2)
+
+ function increment() {
+ count.value++
+ }
+
+ return { count, name, doubleCount, increment }
+})
+```
+
+In Setup Stores: `ref()` → state, `computed()` → getters, `function()` → actions.
+
+**Important:** You must return all state properties for Pinia to track them.
+
+### Using Stores
+
+```vue
+<script setup>
+import { useCounterStore } from '@/stores/counter'
+
+// the store instance is reactive; use it directly rather than destructuring state
+const store = useCounterStore()
+</script>
+```
+
+### Destructuring with storeToRefs
+
+```vue
+<script setup>
+import { storeToRefs } from 'pinia'
+import { useCounterStore } from '@/stores/counter'
+
+const store = useCounterStore()
+// ✅ keeps reactivity for state and getters
+const { count, doubleCount } = storeToRefs(store)
+// actions can be destructured directly
+const { increment } = store
+</script>
+```
+
+---
+
+## State
+
+State is defined as a function returning the initial state.
+
+### TypeScript
+
+Type inference works automatically. For complex types:
+
+```ts
+interface UserInfo {
+ name: string
+ age: number
+}
+
+export const useUserStore = defineStore('user', {
+ state: () => ({
+ userList: [] as UserInfo[],
+ user: null as UserInfo | null,
+ }),
+})
+```
+
+Or use an interface for the return type:
+
+```ts
+interface State {
+ userList: UserInfo[]
+ user: UserInfo | null
+}
+
+export const useUserStore = defineStore('user', {
+ state: (): State => ({
+ userList: [],
+ user: null,
+ }),
+})
+```
+
+### Accessing and Modifying
+
+```ts
+const store = useStore()
+store.count++
+```
+
+```vue
+<script setup>
+import { useCounterStore } from '@/stores/counter'
+
+const store = useCounterStore()
+</script>
+
+<template>
+  <!-- state can be read and mutated directly -->
+  <button @click="store.count++">{{ store.count }}</button>
+</template>
+```
+
+### Mutating with $patch
+
+Apply multiple changes at once:
+
+```ts
+// Object syntax
+store.$patch({
+ count: store.count + 1,
+ name: 'DIO',
+})
+
+// Function syntax (for complex mutations)
+store.$patch((state) => {
+ state.items.push({ name: 'shoes', quantity: 1 })
+ state.hasChanged = true
+})
+```
+
+### Resetting State
+
+Option Stores have built-in `$reset()`. For Setup Stores, implement your own:
+
+```ts
+export const useCounterStore = defineStore('counter', () => {
+ const count = ref(0)
+
+ function $reset() {
+ count.value = 0
+ }
+
+ return { count, $reset }
+})
+```
+
+### Subscribing to State Changes
+
+```ts
+cartStore.$subscribe((mutation, state) => {
+ mutation.type // 'direct' | 'patch object' | 'patch function'
+ mutation.storeId // 'cart'
+ mutation.payload // patch object (only for 'patch object')
+
+ localStorage.setItem('cart', JSON.stringify(state))
+})
+
+// Options
+cartStore.$subscribe(callback, { flush: 'sync' }) // Immediate
+cartStore.$subscribe(callback, { detached: true }) // Keep after unmount
+```
+
+---
+
+## Getters
+
+Getters are computed values, equivalent to Vue's `computed()`.
+
+### Basic Getters
+
+```ts
+getters: {
+ doubleCount: (state) => state.count * 2,
+}
+```
+
+### Accessing Other Getters
+
+Use `this` with explicit return type:
+
+```ts
+getters: {
+ doubleCount: (state) => state.count * 2,
+ doublePlusOne(): number {
+ return this.doubleCount + 1
+ },
+},
+```
+
+### Getters with Arguments
+
+Return a function (note: loses caching):
+
+```ts
+getters: {
+ getUserById: (state) => {
+ return (userId: string) => state.users.find((user) => user.id === userId)
+ },
+},
+```
+
+Cache within parameterized getters:
+
+```ts
+getters: {
+ getActiveUserById(state) {
+ const activeUsers = state.users.filter((user) => user.active)
+ return (userId: string) => activeUsers.find((user) => user.id === userId)
+ },
+},
+```
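+
+Used from a component, the returned getter is invoked like a method (store and id are illustrative):
+
+```vue
+<script setup>
+import { useUserListStore } from '@/stores/userList'
+
+const userList = useUserListStore()
+</script>
+
+<template>
+  <p>User 2: {{ userList.getUserById('2') }}</p>
+</template>
+```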
+
+### Accessing Other Stores in Getters
+
+```ts
+import { useOtherStore } from './other-store'
+
+getters: {
+ combined(state) {
+ const otherStore = useOtherStore()
+ return state.localData + otherStore.data
+ },
+},
+```
+
+---
+
+## Actions
+
+Actions are methods for business logic. Unlike getters, they can be asynchronous.
+
+### Defining Actions
+
+```ts
+actions: {
+ increment() {
+ this.count++
+ },
+ randomizeCounter() {
+ this.count = Math.round(100 * Math.random())
+ },
+},
+```
+
+### Async Actions
+
+```ts
+actions: {
+ async registerUser(login: string, password: string) {
+ try {
+ this.userData = await api.post({ login, password })
+ } catch (error) {
+ return error
+ }
+ },
+},
+```
+
+### Accessing Other Stores in Actions
+
+```ts
+import { useAuthStore } from './auth-store'
+
+actions: {
+ async fetchUserPreferences() {
+ const auth = useAuthStore()
+ if (auth.isAuthenticated) {
+ this.preferences = await fetchPreferences()
+ }
+ },
+},
+```
+
+**SSR:** Call all `useStore()` before any `await`:
+
+```ts
+async orderCart() {
+ // ✅ Call stores before await
+ const user = useUserStore()
+
+ await apiOrderCart(user.token, this.items)
+ // ❌ Don't call useStore() after await in SSR
+}
+```
+
+### Subscribing to Actions
+
+```ts
+const unsubscribe = someStore.$onAction(
+ ({ name, store, args, after, onError }) => {
+ const startTime = Date.now()
+ console.log(`Start "${name}" with params [${args.join(', ')}]`)
+
+ after((result) => {
+ console.log(`Finished "${name}" after ${Date.now() - startTime}ms`)
+ })
+
+ onError((error) => {
+ console.warn(`Failed "${name}": ${error}`)
+ })
+ }
+)
+
+unsubscribe() // Cleanup
+```
+
+Keep subscription after component unmount:
+
+```ts
+someStore.$onAction(callback, true)
+```
+
+---
+
+## Options API Helpers
+
+```ts
+import { mapState, mapWritableState, mapActions } from 'pinia'
+import { useCounterStore } from '../stores/counter'
+
+export default {
+ computed: {
+ // Readonly state/getters
+ ...mapState(useCounterStore, ['count', 'doubleCount']),
+ // Writable state
+ ...mapWritableState(useCounterStore, ['count']),
+ },
+ methods: {
+ ...mapActions(useCounterStore, ['increment']),
+ },
+}
+```
+
+---
+
+## Accessing Global Providers in Setup Stores
+
+```ts
+import { inject } from 'vue'
+import { useRoute } from 'vue-router'
+import { defineStore } from 'pinia'
+
+export const useSearchFilters = defineStore('search-filters', () => {
+ const route = useRoute()
+ const appProvided = inject('appProvided')
+
+ // Don't return these - access them directly in components
+ return { /* ... */ }
+})
+```
+
+
diff --git a/skills/pinia/references/features-composables.md b/skills/pinia/references/features-composables.md
new file mode 100644
index 0000000..79f7d94
--- /dev/null
+++ b/skills/pinia/references/features-composables.md
@@ -0,0 +1,114 @@
+---
+name: composables-in-stores
+description: Using Vue composables within Pinia stores
+---
+
+# Composables in Stores
+
+Pinia stores can leverage Vue composables for reusable stateful logic.
+
+## Option Stores
+
+Call composables inside the `state` property, but only those returning writable refs:
+
+```ts
+import { defineStore } from 'pinia'
+import { useLocalStorage } from '@vueuse/core'
+
+export const useAuthStore = defineStore('auth', {
+ state: () => ({
+ user: useLocalStorage('pinia/auth/login', 'bob'),
+ }),
+})
+```
+
+**Works:** Composables returning `ref()`:
+- `useLocalStorage`
+- `useAsyncState`
+
+**Doesn't work in Option Stores:**
+- Composables exposing functions
+- Composables exposing readonly data
+
+## Setup Stores
+
+Setup Stores are more flexible and can use almost any composable:
+
+```ts
+import { defineStore } from 'pinia'
+import { useMediaControls } from '@vueuse/core'
+import { ref } from 'vue'
+
+export const useVideoPlayer = defineStore('video', () => {
+ const videoElement = ref<HTMLVideoElement>()
+ const src = ref('/data/video.mp4')
+ const { playing, volume, currentTime, togglePictureInPicture } =
+ useMediaControls(videoElement, { src })
+
+ function loadVideo(element: HTMLVideoElement, newSrc: string) {
+ videoElement.value = element
+ src.value = newSrc
+ }
+
+ return {
+ src,
+ playing,
+ volume,
+ currentTime,
+ loadVideo,
+ togglePictureInPicture,
+ }
+})
+```
+
+**Note:** Don't return non-serializable DOM refs like `videoElement`; they're internal implementation details.
+
+## SSR Considerations
+
+### Option Stores with hydrate()
+
+Define a `hydrate()` function to handle client-side hydration:
+
+```ts
+import { defineStore } from 'pinia'
+import { useLocalStorage } from '@vueuse/core'
+
+export const useAuthStore = defineStore('auth', {
+ state: () => ({
+ user: useLocalStorage('pinia/auth/login', 'bob'),
+ }),
+
+ hydrate(state, initialState) {
+ // Ignore server state, read from browser
+ state.user = useLocalStorage('pinia/auth/login', 'bob')
+ },
+})
+```
+
+### Setup Stores with skipHydrate()
+
+Mark state that shouldn't hydrate from server:
+
+```ts
+import { defineStore, skipHydrate } from 'pinia'
+import { useEyeDropper, useLocalStorage } from '@vueuse/core'
+
+export const useColorStore = defineStore('colors', () => {
+ const { isSupported, open, sRGBHex } = useEyeDropper()
+ const lastColor = useLocalStorage('lastColor', sRGBHex)
+
+ return {
+ // Skip hydration for client-only state
+ lastColor: skipHydrate(lastColor),
+ open, // Function - no hydration needed
+ isSupported, // Boolean - not reactive
+ }
+})
+```
+
+`skipHydrate()` only applies to state properties (refs), not functions or non-reactive values.
+
+
diff --git a/skills/pinia/references/features-composing-stores.md b/skills/pinia/references/features-composing-stores.md
new file mode 100644
index 0000000..948f8b5
--- /dev/null
+++ b/skills/pinia/references/features-composing-stores.md
@@ -0,0 +1,134 @@
+---
+name: composing-stores
+description: Store-to-store communication and avoiding circular dependencies
+---
+
+# Composing Stores
+
+Stores can use each other for shared state and logic.
+
+## Rule: Avoid Circular Dependencies
+
+Two stores cannot directly read each other's state during setup:
+
+```ts
+// ❌ Infinite loop
+const useX = defineStore('x', () => {
+ const y = useY()
+ y.name // Don't read here!
+ return { name: ref('X') }
+})
+
+const useY = defineStore('y', () => {
+ const x = useX()
+ x.name // Don't read here!
+ return { name: ref('Y') }
+})
+```
+
+**Solution:** Read in getters, computed, or actions:
+
+```ts
+const useX = defineStore('x', () => {
+ const y = useY()
+
+ // ✅ Read in computed/actions
+ function doSomething() {
+ const yName = y.name
+ }
+
+ return { name: ref('X'), doSomething }
+})
+```
+
+## Setup Stores: Use Store at Top
+
+```ts
+import { defineStore } from 'pinia'
+import { useUserStore } from './user'
+
+export const useCartStore = defineStore('cart', () => {
+ const user = useUserStore()
+ const list = ref([])
+
+ const summary = computed(() => {
+ return `Hi ${user.name}, you have ${list.value.length} items`
+ })
+
+ function purchase() {
+ return apiPurchase(user.id, list.value)
+ }
+
+ return { list, summary, purchase }
+})
+```
+
+## Shared Getters
+
+Call `useStore()` inside a getter:
+
+```ts
+import { useUserStore } from './user'
+
+export const useCartStore = defineStore('cart', {
+ getters: {
+ summary(state) {
+ const user = useUserStore()
+ return `Hi ${user.name}, you have ${state.list.length} items`
+ },
+ },
+})
+```
+
+## Shared Actions
+
+Call `useStore()` inside an action:
+
+```ts
+import { useUserStore } from './user'
+import { apiOrderCart } from './api'
+
+export const useCartStore = defineStore('cart', {
+ actions: {
+ async orderCart() {
+ const user = useUserStore()
+
+ try {
+ await apiOrderCart(user.token, this.items)
+ this.emptyCart()
+ } catch (err) {
+ displayError(err)
+ }
+ },
+ },
+})
+```
+
+## SSR: Call Stores Before Await
+
+In async actions, call all stores before any `await`:
+
+```ts
+actions: {
+ async orderCart() {
+ // ✅ All useStore() calls before await
+ const user = useUserStore()
+ const analytics = useAnalyticsStore()
+
+ try {
+ await apiOrderCart(user.token, this.items)
+ // ❌ Don't call useStore() after await (SSR issue)
+ // const otherStore = useOtherStore()
+ } catch (err) {
+ displayError(err)
+ }
+ },
+}
+```
+
+This ensures the correct Pinia instance is used during SSR.
+
+
diff --git a/skills/pinia/references/features-plugins.md b/skills/pinia/references/features-plugins.md
new file mode 100644
index 0000000..c4355ac
--- /dev/null
+++ b/skills/pinia/references/features-plugins.md
@@ -0,0 +1,203 @@
+---
+name: plugins
+description: Extend stores with custom properties, methods, and behavior
+---
+
+# Plugins
+
+Plugins extend all stores with custom properties, methods, or behavior.
+
+## Basic Plugin
+
+```ts
+import { createPinia } from 'pinia'
+
+function SecretPiniaPlugin() {
+ return { secret: 'the cake is a lie' }
+}
+
+const pinia = createPinia()
+pinia.use(SecretPiniaPlugin)
+
+// In any store
+const store = useStore()
+store.secret // 'the cake is a lie'
+```
+
+## Plugin Context
+
+Plugins receive a context object:
+
+```ts
+import { PiniaPluginContext } from 'pinia'
+
+export function myPiniaPlugin(context: PiniaPluginContext) {
+ context.pinia // pinia instance
+ context.app // Vue app instance
+ context.store // store being augmented
+ context.options // store definition options
+}
+```
+
+## Adding Properties
+
+Return an object to add properties (tracked in devtools):
+
+```ts
+pinia.use(() => ({ hello: 'world' }))
+```
+
+Or set directly on store:
+
+```ts
+pinia.use(({ store }) => {
+ store.hello = 'world'
+ // For devtools visibility in dev mode
+ if (process.env.NODE_ENV === 'development') {
+ store._customProperties.add('hello')
+ }
+})
+```
+
+## Adding State
+
+Add to both `store` and `store.$state` for SSR/devtools:
+
+```ts
+import { toRef, ref } from 'vue'
+
+pinia.use(({ store }) => {
+ if (!store.$state.hasOwnProperty('hasError')) {
+ const hasError = ref(false)
+ store.$state.hasError = hasError
+ }
+ store.hasError = toRef(store.$state, 'hasError')
+})
+```
+
+## Adding External Properties
+
+Wrap non-reactive objects with `markRaw()`:
+
+```ts
+import { markRaw } from 'vue'
+import { router } from './router'
+
+pinia.use(({ store }) => {
+ store.router = markRaw(router)
+})
+```
+
+## Custom Store Options
+
+Define custom options consumed by plugins:
+
+```ts
+// Store definition
+defineStore('search', {
+ actions: {
+ searchContacts() { /* ... */ },
+ },
+ debounce: {
+ searchContacts: 300,
+ },
+})
+
+// Plugin reads custom option
+import debounce from 'lodash/debounce'
+
+pinia.use(({ options, store }) => {
+ if (options.debounce) {
+ return Object.keys(options.debounce).reduce((acc, action) => {
+ acc[action] = debounce(store[action], options.debounce[action])
+ return acc
+ }, {})
+ }
+})
+```
+
+For Setup Stores, pass options as third argument:
+
+```ts
+defineStore(
+ 'search',
+ () => { /* ... */ },
+ {
+ debounce: { searchContacts: 300 },
+ }
+)
+```
+
+## TypeScript Augmentation
+
+### Custom Properties
+
+```ts
+import 'pinia'
+import type { Router } from 'vue-router'
+
+declare module 'pinia' {
+ export interface PiniaCustomProperties {
+ router: Router
+ hello: string
+ }
+}
+```
+
+### Custom State
+
+```ts
+declare module 'pinia' {
+ export interface PiniaCustomStateProperties {
+ hasError: boolean
+ }
+}
+```
+
+### Custom Options
+
+```ts
+declare module 'pinia' {
+ export interface DefineStoreOptionsBase<S, Store> {
+ debounce?: Partial<Record<keyof StoreActions<Store>, number>>
+ }
+}
+```
+
+## Subscribe in Plugins
+
+```ts
+pinia.use(({ store }) => {
+ store.$subscribe(() => {
+ // React to state changes
+ })
+ store.$onAction(() => {
+ // React to actions
+ })
+})
+```
+
+## Nuxt Plugin
+
+Create a Nuxt plugin to add Pinia plugins:
+
+```ts
+// plugins/myPiniaPlugin.ts
+import { PiniaPluginContext } from 'pinia'
+
+function MyPiniaPlugin({ store }: PiniaPluginContext) {
+ store.$subscribe((mutation) => {
+ console.log(`[🍍 ${mutation.storeId}]: ${mutation.type}`)
+ })
+ return { creationTime: new Date() }
+}
+
+export default defineNuxtPlugin(({ $pinia }) => {
+ $pinia.use(MyPiniaPlugin)
+})
+```
+
+
diff --git a/skills/pnpm/GENERATION.md b/skills/pnpm/GENERATION.md
new file mode 100644
index 0000000..f650dd7
--- /dev/null
+++ b/skills/pnpm/GENERATION.md
@@ -0,0 +1,5 @@
+# Generation Info
+
+- **Source:** `sources/pnpm`
+- **Git SHA:** `a1d6d5aef9d5f369fa2f0d8a54f1edbaff8b23b3`
+- **Generated:** 2026-01-28
diff --git a/skills/pnpm/SKILL.md b/skills/pnpm/SKILL.md
new file mode 100644
index 0000000..9b28506
--- /dev/null
+++ b/skills/pnpm/SKILL.md
@@ -0,0 +1,42 @@
+---
+name: pnpm
+description: Node.js package manager with strict dependency resolution. Use when running pnpm specific commands, configuring workspaces, or managing dependencies with catalogs, patches, or overrides.
+metadata:
+ author: Anthony Fu
+ version: "2026.1.28"
+ source: Generated from https://github.com/pnpm/pnpm, scripts located at https://github.com/antfu/skills
+---
+
+pnpm is a fast, disk space efficient package manager. It uses a content-addressable store to deduplicate packages across all projects on a machine, saving significant disk space. pnpm enforces strict dependency resolution by default, preventing phantom dependencies. Configuration should preferably be placed in `pnpm-workspace.yaml` for pnpm-specific settings.
+
+**Important:** When working with pnpm projects, agents should check for `pnpm-workspace.yaml` and `.npmrc` files to understand workspace structure and configuration. Always use `--frozen-lockfile` in CI environments.
+
+> The skill is based on pnpm 10.x, generated at 2026-01-28.
+
+## Core
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| CLI Commands | Install, add, remove, update, run, exec, dlx, and workspace commands | [core-cli](references/core-cli.md) |
+| Configuration | pnpm-workspace.yaml, .npmrc settings, and package.json fields | [core-config](references/core-config.md) |
+| Workspaces | Monorepo support with filtering, workspace protocol, and shared lockfile | [core-workspaces](references/core-workspaces.md) |
+| Store | Content-addressable storage, hard links, and disk efficiency | [core-store](references/core-store.md) |
+
+## Features
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Catalogs | Centralized dependency version management for workspaces | [features-catalogs](references/features-catalogs.md) |
+| Overrides | Force specific versions of dependencies including transitive | [features-overrides](references/features-overrides.md) |
+| Patches | Modify third-party packages with custom fixes | [features-patches](references/features-patches.md) |
+| Aliases | Install packages under custom names using npm: protocol | [features-aliases](references/features-aliases.md) |
+| Hooks | Customize resolution with .pnpmfile.cjs hooks | [features-hooks](references/features-hooks.md) |
+| Peer Dependencies | Auto-install, strict mode, and dependency rules | [features-peer-deps](references/features-peer-deps.md) |
+
+## Best Practices
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| CI/CD Setup | GitHub Actions, GitLab CI, Docker, and caching strategies | [best-practices-ci](references/best-practices-ci.md) |
+| Migration | Migrating from npm/Yarn, handling phantom deps, monorepo migration | [best-practices-migration](references/best-practices-migration.md) |
+| Performance | Install optimizations, store caching, workspace parallelization | [best-practices-performance](references/best-practices-performance.md) |
diff --git a/skills/pnpm/references/best-practices-ci.md b/skills/pnpm/references/best-practices-ci.md
new file mode 100644
index 0000000..80c5d49
--- /dev/null
+++ b/skills/pnpm/references/best-practices-ci.md
@@ -0,0 +1,285 @@
+---
+name: pnpm-ci-cd-setup
+description: Optimizing pnpm for continuous integration and deployment workflows
+---
+
+# pnpm CI/CD Setup
+
+Best practices for using pnpm in CI/CD environments for fast, reliable builds.
+
+## GitHub Actions
+
+### Basic Setup
+
+```yaml
+name: CI
+
+on: [push, pull_request]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - uses: pnpm/action-setup@v4
+ with:
+ version: 9
+
+ - uses: actions/setup-node@v4
+ with:
+ node-version: 20
+ cache: 'pnpm'
+
+ - run: pnpm install --frozen-lockfile
+ - run: pnpm test
+ - run: pnpm build
+```
+
+### With Store Caching
+
+For larger projects, cache the pnpm store:
+
+```yaml
+- uses: pnpm/action-setup@v4
+ with:
+ version: 9
+
+- name: Get pnpm store directory
+ shell: bash
+ run: |
+ echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
+
+- uses: actions/cache@v4
+ name: Setup pnpm cache
+ with:
+ path: ${{ env.STORE_PATH }}
+ key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
+ restore-keys: |
+ ${{ runner.os }}-pnpm-store-
+
+- run: pnpm install --frozen-lockfile
+```
+
+### Matrix Testing
+
+```yaml
+jobs:
+ test:
+ runs-on: ${{ matrix.os }}
+ strategy:
+ matrix:
+ os: [ubuntu-latest, windows-latest, macos-latest]
+ node: [18, 20, 22]
+ steps:
+ - uses: actions/checkout@v4
+ - uses: pnpm/action-setup@v4
+ - uses: actions/setup-node@v4
+ with:
+ node-version: ${{ matrix.node }}
+ cache: 'pnpm'
+ - run: pnpm install --frozen-lockfile
+ - run: pnpm test
+```
+
+## GitLab CI
+
+```yaml
+image: node:20
+
+stages:
+ - install
+ - test
+ - build
+
+variables:
+ PNPM_HOME: /root/.local/share/pnpm
+ PATH: $PNPM_HOME:$PATH
+
+before_script:
+ - corepack enable
+ - corepack prepare pnpm@latest --activate
+
+cache:
+ key: ${CI_COMMIT_REF_SLUG}
+ paths:
+ - .pnpm-store
+
+install:
+ stage: install
+ script:
+ - pnpm config set store-dir .pnpm-store
+ - pnpm install --frozen-lockfile
+
+test:
+ stage: test
+ script:
+ - pnpm test
+
+build:
+ stage: build
+ script:
+ - pnpm build
+```
+
+## Docker
+
+### Multi-Stage Build
+
+```dockerfile
+# Build stage
+FROM node:20-slim AS builder
+
+# Enable corepack for pnpm
+RUN corepack enable
+
+WORKDIR /app
+
+# Copy package files first for layer caching
+COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
+# NOTE: a COPY glob flattens paths; copy each workspace manifest explicitly
+# so it keeps its directory (package names are illustrative)
+COPY packages/core/package.json ./packages/core/
+COPY packages/api/package.json ./packages/api/
+
+# Install dependencies
+RUN pnpm install --frozen-lockfile
+
+# Copy source and build
+COPY . .
+RUN pnpm build
+
+# Production stage
+FROM node:20-slim AS runner
+
+RUN corepack enable
+WORKDIR /app
+
+COPY --from=builder /app/dist ./dist
+COPY --from=builder /app/package.json ./
+COPY --from=builder /app/pnpm-lock.yaml ./
+
+# Production install
+RUN pnpm install --frozen-lockfile --prod
+
+CMD ["node", "dist/index.js"]
+```
+
+### Optimized for Monorepos
+
+```dockerfile
+FROM node:20-slim AS builder
+RUN corepack enable
+WORKDIR /app
+
+# Copy workspace config
+COPY pnpm-lock.yaml pnpm-workspace.yaml ./
+
+# Copy all package.json files maintaining structure
+COPY packages/core/package.json ./packages/core/
+COPY packages/api/package.json ./packages/api/
+
+# Install all dependencies
+RUN pnpm install --frozen-lockfile
+
+# Copy source
+COPY . .
+
+# Build specific package
+RUN pnpm --filter @myorg/api build
+```
+
+## Key CI Flags
+
+### --frozen-lockfile
+
+**Always use in CI.** Fails if `pnpm-lock.yaml` needs updates:
+
+```bash
+pnpm install --frozen-lockfile
+```
+
+### --prefer-offline
+
+Use cached packages when available:
+
+```bash
+pnpm install --frozen-lockfile --prefer-offline
+```
+
+### --ignore-scripts
+
+Skip lifecycle scripts for faster installs (use cautiously):
+
+```bash
+pnpm install --frozen-lockfile --ignore-scripts
+```
+
+## Corepack Integration
+
+Use Corepack to manage pnpm version:
+
+```json
+// package.json
+{
+ "packageManager": "pnpm@9.0.0"
+}
+```
+
+```yaml
+# GitHub Actions
+- run: corepack enable
+- run: pnpm install --frozen-lockfile
+```
+
+## Monorepo CI Strategies
+
+### Build Changed Packages Only
+
+```yaml
+- name: Build changed packages
+ run: |
+ pnpm --filter "...[origin/main]" build
+```
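+
+Git-based filters compare against a ref, so the base branch must exist in the checkout; on a shallow CI clone that usually means fetching full history first (sketch):
+
+```yaml
+- uses: actions/checkout@v4
+  with:
+    fetch-depth: 0  # make origin/main available for comparison
+- name: Build changed packages
+  run: pnpm --filter "...[origin/main]" build
+```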
+
+### Parallel Jobs per Package
+
+```yaml
+jobs:
+  detect-changes:
+    runs-on: ubuntu-latest
+    outputs:
+      packages: ${{ steps.changes.outputs.packages }}
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - uses: pnpm/action-setup@v4
+      - run: pnpm install --frozen-lockfile
+      - id: changes
+        run: |
+          echo "packages=$(pnpm --filter '...[origin/main]' list --json | jq -c '[.[].name]')" >> $GITHUB_OUTPUT
+
+ test:
+ needs: detect-changes
+ if: needs.detect-changes.outputs.packages != '[]'
+ runs-on: ubuntu-latest
+ strategy:
+ matrix:
+ package: ${{ fromJson(needs.detect-changes.outputs.packages) }}
+ steps:
+ - uses: actions/checkout@v4
+ - uses: pnpm/action-setup@v4
+ - run: pnpm install --frozen-lockfile
+ - run: pnpm --filter ${{ matrix.package }} test
+```
+
+## Best Practices Summary
+
+1. **Always use `--frozen-lockfile`** in CI
+2. **Cache the pnpm store** for faster installs
+3. **Use Corepack** for consistent pnpm versions
+4. **Specify `packageManager`** in package.json
+5. **Use `--filter`** in monorepos to build only what changed
+6. **Multi-stage Docker builds** for smaller images
+
+
diff --git a/skills/pnpm/references/best-practices-migration.md b/skills/pnpm/references/best-practices-migration.md
new file mode 100644
index 0000000..16d7c55
--- /dev/null
+++ b/skills/pnpm/references/best-practices-migration.md
@@ -0,0 +1,291 @@
+---
+name: migration-to-pnpm
+description: Migrating from npm or Yarn to pnpm with minimal friction
+---
+
+# Migration to pnpm
+
+Guide for migrating existing projects from npm or Yarn to pnpm.
+
+## Quick Migration
+
+### From npm
+
+```bash
+# Remove npm lockfile and node_modules
+rm -rf node_modules package-lock.json
+
+# Install with pnpm
+pnpm install
+```
+
+### From Yarn
+
+```bash
+# Remove yarn lockfile and node_modules
+rm -rf node_modules yarn.lock
+
+# Install with pnpm
+pnpm install
+```
+
+### Import Existing Lockfile
+
+pnpm can import existing lockfiles:
+
+```bash
+# Import from npm or yarn lockfile
+pnpm import
+
+# This creates pnpm-lock.yaml from:
+# - package-lock.json (npm)
+# - yarn.lock (yarn)
+# - npm-shrinkwrap.json (npm)
+```
+
+## Handling Common Issues
+
+### Phantom Dependencies
+
+pnpm is strict about dependencies. If code imports a package not in `package.json`, it will fail.
+
+**Problem:**
+```js
+// Works with npm (hoisted), fails with pnpm
+import lodash from 'lodash' // Not in dependencies, installed by another package
+```
+
+**Solution:** Add missing dependencies explicitly:
+```bash
+pnpm add lodash
+```
+
+### Missing Peer Dependencies
+
+pnpm reports peer dependency issues by default.
+
+**Option 1:** Let pnpm auto-install:
+```ini
+# .npmrc (default in pnpm v8+)
+auto-install-peers=true
+```
+
+**Option 2:** Install manually:
+```bash
+pnpm add react react-dom
+```
+
+**Option 3:** Suppress warnings if acceptable:
+```json
+{
+ "pnpm": {
+ "peerDependencyRules": {
+ "ignoreMissing": ["react"]
+ }
+ }
+}
+```
+
+### Symlink Issues
+
+Some tools don't work with symlinks. Use hoisted mode:
+
+```ini
+# .npmrc
+node-linker=hoisted
+```
+
+Or hoist specific packages:
+
+```ini
+public-hoist-pattern[]=*eslint*
+public-hoist-pattern[]=*babel*
+```
+
+### Native Module Rebuilds
+
+If native modules fail, try:
+
+```bash
+# Rebuild all native modules
+pnpm rebuild
+
+# Or reinstall
+rm -rf node_modules
+pnpm install
+```
+
+## Monorepo Migration
+
+### From npm Workspaces
+
+1. Create `pnpm-workspace.yaml`:
+ ```yaml
+ packages:
+ - 'packages/*'
+ ```
+
+2. Update internal dependencies to use workspace protocol:
+ ```json
+ {
+ "dependencies": {
+ "@myorg/utils": "workspace:^"
+ }
+ }
+ ```
+
+3. Install:
+ ```bash
+ rm -rf node_modules packages/*/node_modules package-lock.json
+ pnpm install
+ ```
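+
+Note that the `workspace:` protocol only exists inside the repo: on `pnpm publish`/`pnpm pack`, specifiers are rewritten to real version ranges. Roughly, this source manifest:
+
+```json
+{
+  "dependencies": {
+    "@myorg/utils": "workspace:^"
+  }
+}
+```
+
+is published as follows when `@myorg/utils` is at version `1.2.3`:
+
+```json
+{
+  "dependencies": {
+    "@myorg/utils": "^1.2.3"
+  }
+}
+```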
+
+### From Yarn Workspaces
+
+1. Remove Yarn-specific files:
+ ```bash
+ rm yarn.lock .yarnrc.yml
+ rm -rf .yarn
+ ```
+
+2. Create `pnpm-workspace.yaml` matching `workspaces` in package.json:
+ ```yaml
+ packages:
+ - 'packages/*'
+ ```
+
+3. Update `package.json`: remove the `workspaces` field if present (optional; pnpm reads `pnpm-workspace.yaml` instead)
+
+4. Convert workspace references:
+ ```json
+ // From Yarn
+ "@myorg/utils": "*"
+
+ // To pnpm
+ "@myorg/utils": "workspace:*"
+ ```
+
+### From Lerna
+
+pnpm can replace Lerna for most use cases:
+
+```bash
+# Lerna: run script in all packages
+lerna run build
+
+# pnpm equivalent
+pnpm -r run build
+
+# Lerna: run in specific package
+lerna run build --scope=@myorg/app
+
+# pnpm equivalent
+pnpm --filter @myorg/app run build
+
+# Lerna: publish
+lerna publish
+
+# pnpm: use changesets instead
+pnpm add -Dw @changesets/cli
+pnpm changeset
+pnpm changeset version
+pnpm publish -r
+```
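+
+If you adopt changesets as the publish workflow above suggests, `pnpm changeset init` scaffolds its config; a minimal `.changeset/config.json` looks roughly like this (sketch, assuming `main` as the base branch):
+
+```json
+{
+  "$schema": "https://unpkg.com/@changesets/config/schema.json",
+  "changelog": "@changesets/cli/changelog",
+  "commit": false,
+  "access": "restricted",
+  "baseBranch": "main"
+}
+```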
+
+## Configuration Migration
+
+### .npmrc Settings
+
+Most npm/Yarn settings work in pnpm's `.npmrc`:
+
+```ini
+# Registry settings (same as npm)
+registry=https://registry.npmjs.org/
+@myorg:registry=https://npm.myorg.com/
+
+# Auth tokens (same as npm)
+//registry.npmjs.org/:_authToken=${NPM_TOKEN}
+
+# pnpm-specific additions
+auto-install-peers=true
+strict-peer-dependencies=false
+```
+
+### Scripts Migration
+
+Most scripts work unchanged. Update npm-specific workspace patterns to their pnpm equivalents:
+
+Before (npm workspaces):
+
+```json
+{
+  "scripts": {
+    "build:all": "npm run build --workspaces",
+    "dev:app": "npm run dev -w packages/app"
+  }
+}
+```
+
+After (pnpm):
+
+```json
+{
+  "scripts": {
+    "build:all": "pnpm -r run build",
+    "dev:app": "pnpm --filter @myorg/app run dev"
+  }
+}
+```
+
+## CI/CD Migration
+
+Update CI configuration:
+
+```yaml
+# Before (npm)
+- run: npm ci
+
+# After (pnpm)
+- uses: pnpm/action-setup@v4
+- run: pnpm install --frozen-lockfile
+```
+
+Add to `package.json` for Corepack:
+```json
+{
+ "packageManager": "pnpm@9.0.0"
+}
+```
+
+## Gradual Migration
+
+For large projects, migrate gradually:
+
+1. **Start with CI**: Use pnpm in CI, keep npm/yarn locally
+2. **Add pnpm-lock.yaml**: Run `pnpm import` to create lockfile
+3. **Test thoroughly**: Ensure builds work with pnpm
+4. **Update documentation**: Update README, CONTRIBUTING
+5. **Remove old files**: Delete old lockfiles after team adoption
+
+## Rollback Plan
+
+If migration causes issues:
+
+```bash
+# Remove pnpm files
+rm -rf node_modules pnpm-lock.yaml pnpm-workspace.yaml
+
+# Restore npm
+npm install
+
+# Or restore Yarn
+yarn install
+```
+
+Keep old lockfile in git history for easy rollback.
+
+
diff --git a/skills/pnpm/references/best-practices-performance.md b/skills/pnpm/references/best-practices-performance.md
new file mode 100644
index 0000000..bf0c8cc
--- /dev/null
+++ b/skills/pnpm/references/best-practices-performance.md
@@ -0,0 +1,284 @@
+---
+name: pnpm-performance-optimization
+description: Tips and tricks for faster installs and better performance
+---
+
+# pnpm Performance Optimization
+
+pnpm is fast by default, but these optimizations can make it even faster.
+
+## Install Optimizations
+
+### Use Frozen Lockfile
+
+Skip resolution when lockfile exists:
+
+```bash
+pnpm install --frozen-lockfile
+```
+
+This is faster because pnpm skips the resolution phase entirely.
+
+### Prefer Offline Mode
+
+Use cached packages when available:
+
+```bash
+pnpm install --prefer-offline
+```
+
+Or configure globally:
+```ini
+# .npmrc
+prefer-offline=true
+```
+
+### Skip Optional Dependencies
+
+If you don't need optional deps:
+
+```bash
+pnpm install --no-optional
+```
+
+### Skip Scripts
+
+For CI or when scripts aren't needed:
+
+```bash
+pnpm install --ignore-scripts
+```
+
+**Caution:** Some packages require postinstall scripts to work correctly.
+
+### Only Build Specific Dependencies
+
+Only run build scripts for specific packages:
+
+```json
+{
+  "pnpm": {
+    "onlyBuiltDependencies": ["esbuild", "sharp", "@swc/core"]
+  }
+}
+```
+
+Or skip builds entirely for deps that don't need them:
+
+```json
+{
+ "pnpm": {
+ "neverBuiltDependencies": ["fsevents", "cpu-features"]
+ }
+}
+```
+
+## Store Optimizations
+
+### Side Effects Cache
+
+Cache native module build results:
+
+```ini
+# .npmrc
+side-effects-cache=true
+```
+
+This caches the results of postinstall scripts, speeding up subsequent installs.
+
+### Shared Store
+
+Use a single store for all projects (default behavior):
+
+```ini
+# .npmrc
+store-dir=~/.pnpm-store
+```
+
+Benefits:
+- Packages downloaded once for all projects
+- Hard links save disk space
+- Faster installs from cache
+
+### Store Maintenance
+
+Periodically clean unused packages:
+
+```bash
+# Remove unreferenced packages
+pnpm store prune
+
+# Check store integrity
+pnpm store status
+```
+
+## Workspace Optimizations
+
+### Parallel Execution
+
+Run workspace scripts in parallel:
+
+```bash
+pnpm -r --parallel run build
+```
+
+Control concurrency:
+```ini
+# .npmrc
+workspace-concurrency=8
+```
+
+### Stream Output
+
+See output in real-time:
+
+```bash
+pnpm -r --stream run build
+```
+
+### Filter to Changed Packages
+
+Only build what changed:
+
+```bash
+# Build packages changed since main branch
+pnpm --filter "...[origin/main]" run build
+```
+
+### Topological Order
+
+Build dependencies before dependents:
+
+```bash
+pnpm -r run build
+# Automatically runs in topological order
+```
+
+For explicit sequential builds:
+```bash
+pnpm -r --workspace-concurrency=1 run build
+```
+
+## Network Optimizations
+
+### Configure Registry
+
+Use closest/fastest registry:
+
+```ini
+# .npmrc
+registry=https://registry.npmmirror.com/
+```
+
+### HTTP Settings
+
+Tune network settings:
+
+```ini
+# .npmrc
+fetch-retries=3
+fetch-retry-mintimeout=10000
+fetch-retry-maxtimeout=60000
+network-concurrency=16
+```
+
+### Proxy Configuration
+
+```ini
+# .npmrc
+proxy=http://proxy.company.com:8080
+https-proxy=http://proxy.company.com:8080
+```
+
+## Lockfile Optimization
+
+### Single Lockfile (Monorepos)
+
+Use shared lockfile for all packages (default):
+
+```ini
+# .npmrc
+shared-workspace-lockfile=true
+```
+
+Benefits:
+- Single source of truth
+- Faster resolution
+- Consistent versions across workspace
+
+### Lockfile-only Mode
+
+Only update lockfile without installing:
+
+```bash
+pnpm install --lockfile-only
+```
+
+## Benchmarking
+
+### Compare Install Times
+
+```bash
+# Clean install
+rm -rf node_modules pnpm-lock.yaml
+time pnpm install
+
+# Cached install (with lockfile)
+rm -rf node_modules
+time pnpm install --frozen-lockfile
+
+# With store cache
+time pnpm install --frozen-lockfile --prefer-offline
+```
+
+### Profile Resolution
+
+Debug slow installs:
+
+```bash
+# Verbose logging
+pnpm install --reporter=append-only
+
+# Debug mode
+DEBUG=pnpm:* pnpm install
+```
+
+## Configuration Summary
+
+Optimized `.npmrc` for performance:
+
+```ini
+# Install behavior
+prefer-offline=true
+auto-install-peers=true
+
+# Build optimization
+side-effects-cache=true
+# (declare onlyBuiltDependencies in package.json under the "pnpm" field)
+
+# Network
+fetch-retries=3
+network-concurrency=16
+
+# Workspace
+workspace-concurrency=4
+```
+
+## Quick Reference
+
+| Scenario | Command/Setting |
+|----------|-----------------|
+| CI installs | `pnpm install --frozen-lockfile` |
+| Offline development | `--prefer-offline` |
+| Skip native builds | `neverBuiltDependencies` |
+| Parallel workspace | `pnpm -r --parallel run build` |
+| Build changed only | `pnpm --filter "...[origin/main]" build` |
+| Clean store | `pnpm store prune` |
+
+
diff --git a/skills/pnpm/references/core-cli.md b/skills/pnpm/references/core-cli.md
new file mode 100644
index 0000000..2286293
--- /dev/null
+++ b/skills/pnpm/references/core-cli.md
@@ -0,0 +1,229 @@
+---
+name: pnpm-cli-commands
+description: Essential pnpm commands for package management, running scripts, and workspace operations
+---
+
+# pnpm CLI Commands
+
+pnpm provides a comprehensive CLI for package management with commands similar to npm/yarn but with unique features.
+
+## Installation Commands
+
+### Install all dependencies
+```bash
+pnpm install
+# or
+pnpm i
+```
+
+### Add a dependency
+```bash
+# Production dependency
+pnpm add <pkg>
+
+# Dev dependency
+pnpm add -D <pkg>
+pnpm add --save-dev <pkg>
+
+# Optional dependency
+pnpm add -O <pkg>
+
+# Global package
+pnpm add -g <pkg>
+
+# Specific version
+pnpm add <pkg>@<version>
+pnpm add <pkg>@next
+pnpm add <pkg>@^1.0.0
+```
+
+### Remove a dependency
+```bash
+pnpm remove <pkg>
+pnpm rm <pkg>
+pnpm uninstall <pkg>
+pnpm un <pkg>
+```
+
+### Update dependencies
+```bash
+# Update all
+pnpm update
+pnpm up
+
+# Update specific package
+pnpm update <pkg>
+
+# Update to latest (ignore semver)
+pnpm update --latest
+pnpm up -L
+
+# Interactive update
+pnpm update --interactive
+pnpm up -i
+```
+
+## Script Commands
+
+### Run scripts
+```bash
+# Run a script from package.json
+pnpm run <script>
+
+# Shorthand (when the script name doesn't clash with a built-in pnpm command)
+pnpm <script>
+```
+
+### Export Components
+
+```ts
+// src/index.ts
+export { default as Button } from './Button.vue'
+export { default as Input } from './Input.vue'
+export { default as Modal } from './Modal.vue'
+
+// Re-export types
+export type { ButtonProps } from './Button.vue'
+```
+
+## Common Patterns
+
+### Component Library
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['src/index.ts'],
+ format: ['esm', 'cjs'],
+ platform: 'neutral',
+ external: ['vue'],
+ plugins: [
+ Vue({
+ isProduction: true,
+ style: {
+ trim: true,
+ },
+ }),
+ ],
+ dts: {
+ vue: true,
+ },
+ clean: true,
+})
+```
+
+### Multiple Components
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: {
+ index: 'src/index.ts',
+ Button: 'src/Button.vue',
+ Input: 'src/Input.vue',
+ Modal: 'src/Modal.vue',
+ },
+ format: ['esm', 'cjs'],
+ external: ['vue'],
+ plugins: [Vue({ isProduction: true })],
+ dts: { vue: true },
+})
+```
+
+### With Composition Utilities
+
+```ts
+// src/composables/useCounter.ts
+import { ref } from 'vue'
+
+export function useCounter(initial = 0) {
+ const count = ref(initial)
+ const increment = () => count.value++
+ const decrement = () => count.value--
+ return { count, increment, decrement }
+}
+```
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['src/index.ts'],
+ format: ['esm', 'cjs'],
+ external: ['vue'],
+ plugins: [Vue({ isProduction: true })],
+ dts: { vue: true },
+})
+```
+
+### TypeScript Configuration
+
+```json
+// tsconfig.json
+{
+ "compilerOptions": {
+ "target": "ES2020",
+ "module": "ESNext",
+ "lib": ["ES2020", "DOM", "DOM.Iterable"],
+ "jsx": "preserve",
+ "moduleResolution": "bundler",
+ "allowImportingTsExtensions": true,
+ "strict": true,
+ "isolatedDeclarations": true,
+ "skipLibCheck": true
+ },
+ "include": ["src"],
+ "exclude": ["node_modules", "dist"]
+}
+```
+
+### Package.json Configuration
+
+```json
+{
+  "name": "my-vue-library",
+  "version": "1.0.0",
+  "type": "module",
+  "main": "./dist/index.cjs",
+  "module": "./dist/index.mjs",
+  "types": "./dist/index.d.ts",
+  "exports": {
+    ".": {
+      "types": "./dist/index.d.ts",
+      "import": "./dist/index.mjs",
+      "require": "./dist/index.cjs"
+    }
+  },
+  "files": ["dist"],
+  "peerDependencies": {
+    "vue": "^3.0.0"
+  },
+  "devDependencies": {
+    "tsdown": "^0.9.0",
+    "typescript": "^5.0.0",
+    "unplugin-vue": "^5.0.0",
+    "vue": "^3.4.0",
+    "vue-tsc": "^2.0.0"
+  }
+}
+
+## Advanced Patterns
+
+### With Vite Plugins
+
+Plugins built on unplugin, such as `unplugin-vue-components`, also expose Rolldown entry points and may work:
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import Components from 'unplugin-vue-components/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['src/index.ts'],
+ external: ['vue'],
+ plugins: [
+ Vue({ isProduction: true }),
+ Components({
+ dts: 'src/components.d.ts',
+ }),
+ ],
+ dts: { vue: true },
+})
+```
+
+### JSX Support
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['src/index.ts'],
+ format: ['esm', 'cjs'],
+ external: ['vue'],
+ plugins: [
+ Vue({
+ isProduction: true,
+ script: {
+ propsDestructure: true,
+ },
+ }),
+ ],
+ inputOptions: {
+ transform: {
+ jsx: 'automatic',
+ jsxImportSource: 'vue',
+ },
+ },
+ dts: { vue: true },
+})
+```
+
+### Monorepo Vue Packages
+
+```ts
+import Vue from 'unplugin-vue/rolldown'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ workspace: 'packages/*',
+ entry: ['src/index.ts'],
+ format: ['esm', 'cjs'],
+ external: ['vue', /^@mycompany\//],
+ plugins: [Vue({ isProduction: true })],
+ dts: { vue: true },
+})
+```
+
+## Plugin Options
+
+### unplugin-vue Options
+
+```ts
+Vue({
+ isProduction: true,
+ script: {
+ defineModel: true,
+ propsDestructure: true,
+ },
+ style: {
+ trim: true,
+ },
+ template: {
+ compilerOptions: {
+ isCustomElement: (tag) => tag.startsWith('custom-'),
+ },
+ },
+})
+```
+
+## Tips
+
+1. **Always externalize Vue** - Don't bundle Vue itself
+2. **Enable vue: true in dts** - For proper type generation
+3. **Use platform: 'neutral'** - Maximum compatibility
+4. **Install vue-tsc** - Required for type generation
+5. **Set isProduction: true** - Optimize for production
+6. **Add peer dependency** - Vue as peer dependency
+
+## Troubleshooting
+
+### Type Generation Fails
+
+Ensure vue-tsc is installed:
+```bash
+pnpm add -D vue-tsc
+```
+
+Enable in config:
+```ts
+dts: { vue: true }
+```
+
+### Component Types Missing
+
+Check TypeScript config:
+```json
+{
+ "compilerOptions": {
+ "jsx": "preserve",
+ "moduleResolution": "bundler"
+ }
+}
+```
+
+### Vue Not Externalized
+
+Add to external:
+```ts
+external: ['vue']
+```
+
+### SFC Compilation Errors
+
+Check unplugin-vue version:
+```bash
+pnpm add -D unplugin-vue@latest
+```
+
+## Related
+
+- [Plugins](advanced-plugins.md) - Plugin system
+- [Dependencies](option-dependencies.md) - External packages
+- [DTS](option-dts.md) - Type declarations
+- [React Recipe](recipe-react.md) - React component libraries
diff --git a/skills/tsdown/references/recipe-wasm.md b/skills/tsdown/references/recipe-wasm.md
new file mode 100644
index 0000000..0d22f34
--- /dev/null
+++ b/skills/tsdown/references/recipe-wasm.md
@@ -0,0 +1,125 @@
+# WASM Support
+
+Bundle WebAssembly modules in your TypeScript/JavaScript project.
+
+## Overview
+
+tsdown supports WASM through [`rolldown-plugin-wasm`](https://github.com/sxzz/rolldown-plugin-wasm), enabling direct `.wasm` imports with synchronous and asynchronous instantiation.
+
+## Setup
+
+### Install
+
+```bash
+pnpm add -D rolldown-plugin-wasm
+```
+
+### Configure
+
+```ts
+import { wasm } from 'rolldown-plugin-wasm'
+import { defineConfig } from 'tsdown'
+
+export default defineConfig({
+ entry: ['./src/index.ts'],
+ plugins: [wasm()],
+})
+```
+
+### TypeScript Support
+
+Add type declarations to `tsconfig.json`:
+
+```jsonc
+{
+ "compilerOptions": {
+ "types": ["rolldown-plugin-wasm/types"]
+ }
+}
+```
+
+## Importing WASM Modules
+
+### Direct Import
+
+```ts
+import { add } from './add.wasm'
+add(1, 2)
+```
+
+### Async Init
+
+Use `?init` query for async initialization:
+
+```ts
+import init from './add.wasm?init'
+const instance = await init(imports) // imports optional
+instance.exports.add(1, 2)
+```
+
+### Sync Init
+
+Use `?init&sync` query for synchronous initialization:
+
+```ts
+import initSync from './add.wasm?init&sync'
+const instance = initSync(imports) // imports optional
+instance.exports.add(1, 2)
+```
+
+## wasm-bindgen Support
+
+### Target `bundler` (Recommended)
+
+```ts
+import { add } from 'some-pkg'
+add(1, 2)
+```
+
+### Target `web` (Node.js)
+
+```ts
+import { readFile } from 'node:fs/promises'
+import init, { add } from 'some-pkg'
+import wasmUrl from 'some-pkg/add_bg.wasm?url'
+
+await init({
+ module_or_path: readFile(new URL(wasmUrl, import.meta.url)),
+})
+add(1, 2)
+```
+
+### Target `web` (Browser)
+
+```ts
+import init, { add } from 'some-pkg/add.js'
+import wasmUrl from 'some-pkg/add_bg.wasm?url'
+
+await init({ module_or_path: wasmUrl })
+add(1, 2)
+```
+
+`nodejs` and `no-modules` wasm-bindgen targets are not supported.
+
+## Plugin Options
+
+```ts
+wasm({
+ maxFileSize: 14 * 1024, // Max size for inline (default: 14KB)
+ fileName: '[hash][extname]', // Output file name pattern
+ publicPath: '', // Prefix for non-inlined file paths
+ targetEnv: 'auto', // 'auto' | 'auto-inline' | 'browser' | 'node'
+})
+```
+
+| Option | Default | Description |
+|--------|---------|-------------|
+| `maxFileSize` | `14 * 1024` | Max file size for inlining. Set to `0` to always copy. |
+| `fileName` | `'[hash][extname]'` | Pattern for emitted WASM files |
+| `publicPath` | — | Prefix for non-inlined WASM file paths |
+| `targetEnv` | `'auto'` | `'auto'` detects at runtime; `'browser'` omits Node builtins; `'node'` omits fetch |
+
+## Related Options
+
+- [Plugins](advanced-plugins.md) - Plugin system overview
+- [Platform](option-platform.md) - Target platform configuration
diff --git a/skills/tsdown/references/reference-cli.md b/skills/tsdown/references/reference-cli.md
new file mode 100644
index 0000000..35cc647
--- /dev/null
+++ b/skills/tsdown/references/reference-cli.md
@@ -0,0 +1,395 @@
+# CLI Reference
+
+Complete reference for tsdown command-line interface.
+
+## Overview
+
+All CLI flags can also be set in the config file. CLI flags override config file options.
+
+## Flag Patterns
+
+CLI flag mapping rules:
+- `--foo` sets `foo: true`
+- `--no-foo` sets `foo: false`
+- `--foo.bar` sets `foo: { bar: true }`
+- `--format esm --format cjs` sets `format: ['esm', 'cjs']`
+
+CLI flags support both camelCase and kebab-case. For example, `--outDir` and `--out-dir` are equivalent.
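+
+For example, under the rules above these invocations are equivalent:
+
+```bash
+tsdown --outDir dist --dts
+tsdown --out-dir dist --dts
+
+# Repeated flags collect into an array: format: ['esm', 'cjs']
+tsdown --format esm --format cjs
+```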
+
+## Basic Commands
+
+### Build
+
+```bash
+# Build with default config
+tsdown
+
+# Build specific files
+tsdown src/index.ts src/cli.ts
+
+# Build with watch mode
+tsdown --watch
+```
+
+## Configuration
+
+### --config, -c <file>
+
+Specify custom config file:
+
+```bash
+tsdown --config build.config.ts
+tsdown -c custom-config.js
+```
+
+### `--no-config`
+
+Disable config file loading:
+
+```bash
+tsdown --no-config src/index.ts
+```
+
+### --config-loader <loader>
+
+Choose config loader (`auto`, `native`, `unrun`):
+
+```bash
+tsdown --config-loader unrun
+```
+
+### --tsconfig <file>
+
+Specify TypeScript config file:
+
+```bash
+tsdown --tsconfig tsconfig.build.json
+```
+
+## Entry Points
+
+### `[...files]`
+
+Specify entry files as arguments:
+
+```bash
+tsdown src/index.ts src/utils.ts
+```
+
+## Output Options
+
+### --format <format>
+
+Output format (`esm`, `cjs`, `iife`, `umd`):
+
+```bash
+tsdown --format esm
+tsdown --format esm --format cjs
+```
+
+### --out-dir, -d <dir>
+
+Output directory:
+
+```bash
+tsdown --out-dir lib
+tsdown -d dist
+```
+
+### `--dts`
+
+Generate TypeScript declarations:
+
+```bash
+tsdown --dts
+```
+
+### `--clean`
+
+Clean output directory before build:
+
+```bash
+tsdown --clean
+```
+
+## Build Options
+
+### --target <target>
+
+JavaScript target version:
+
+```bash
+tsdown --target es2020
+tsdown --target node18
+tsdown --target chrome100
+tsdown --no-target # Disable transformations
+```
+
+### --platform <platform>
+
+Target platform (`node`, `browser`, `neutral`):
+
+```bash
+tsdown --platform node
+tsdown --platform browser
+```
+
+### `--minify`
+
+Enable minification:
+
+```bash
+tsdown --minify
+tsdown --no-minify
+```
+
+### `--sourcemap`
+
+Generate source maps:
+
+```bash
+tsdown --sourcemap
+tsdown --sourcemap inline
+```
+
+### `--treeshake`
+
+Enable/disable tree shaking:
+
+```bash
+tsdown --treeshake
+tsdown --no-treeshake
+```
+
+## Dependencies
+
+### --external <module>
+
+Mark module as external (not bundled):
+
+```bash
+tsdown --external react --external react-dom
+```
+
+### `--shims`
+
+Add ESM/CJS compatibility shims:
+
+```bash
+tsdown --shims
+```
+
+## Development
+
+### `--watch, -w [path]`
+
+Enable watch mode:
+
+```bash
+tsdown --watch
+tsdown -w
+tsdown --watch src # Watch specific directory
+```
+
+### --ignore-watch <path>
+
+Ignore paths in watch mode:
+
+```bash
+tsdown --watch --ignore-watch test
+```
+
+### --on-success <command>
+
+Run command after successful build:
+
+```bash
+tsdown --watch --on-success "echo Build complete!"
+```
+
+## Environment Variables
+
+### --env.* <value>
+
+Set compile-time environment variables:
+
+```bash
+tsdown --env.NODE_ENV=production --env.API_URL=https://api.example.com
+```
+
+Access as `import.meta.env.*` or `process.env.*`.
+
+### --env-file <file>
+
+Load environment variables from file:
+
+```bash
+tsdown --env-file .env.production
+```
+
+### --env-prefix <prefix>
+
+Filter environment variables by prefix (default: `TSDOWN_`):
+
+```bash
+tsdown --env-file .env --env-prefix APP_ --env-prefix TSDOWN_
+```
+
+## Assets
+
+### --copy <dir>
+
+Copy directory to output:
+
+```bash
+tsdown --copy public
+tsdown --copy assets --copy static
+```
+
+## Package Management
+
+### `--exports`
+
+Auto-generate package.json exports field:
+
+```bash
+tsdown --exports
+```
+
+### `--publint`
+
+Enable package validation:
+
+```bash
+tsdown --publint
+```
+
+### `--attw`
+
+Enable "Are the types wrong" validation:
+
+```bash
+tsdown --attw
+```
+
+### `--unused`
+
+Check for unused dependencies:
+
+```bash
+tsdown --unused
+```
+
+## Logging
+
+### --log-level <level>
+
+Set logging verbosity (`silent`, `error`, `warn`, `info`):
+
+```bash
+tsdown --log-level error
+tsdown --log-level warn
+```
+
+### `--report` / `--no-report`
+
+Enable/disable build report:
+
+```bash
+tsdown --no-report # Disable size report
+tsdown --report # Enable (default)
+```
+
+### `--debug [feat]`
+
+Show debug logs:
+
+```bash
+tsdown --debug
+tsdown --debug rolldown # Debug specific feature
+```
+
+## Integration
+
+### `--from-vite [vitest]`
+
+Extend Vite or Vitest config:
+
+```bash
+tsdown --from-vite # Use vite.config.*
+tsdown --from-vite vitest # Use vitest.config.*
+```
+
+## Common Usage Patterns
+
+### Basic Build
+
+```bash
+tsdown
+```
+
+### Library (ESM + CJS + Types)
+
+```bash
+tsdown --format esm --format cjs --dts --clean
+```
+
+### Production Build
+
+```bash
+tsdown --minify --clean --no-report
+```
+
+### Development (Watch)
+
+```bash
+tsdown --watch --sourcemap
+```
+
+### Browser Bundle (IIFE)
+
+```bash
+tsdown --format iife --platform browser --minify
+```
+
+### Node.js CLI Tool
+
+```bash
+tsdown --format esm --platform node --shims
+```
+
+### Monorepo Package
+
+```bash
+tsdown --clean --dts --exports --publint
+```
+
+### With Environment Variables
+
+```bash
+tsdown --env-file .env.production --env.BUILD_TIME=$(date +%s)
+```
+
+### Copy Assets
+
+```bash
+tsdown --copy public --copy assets --clean
+```
+
+## Tips
+
+1. **Use config file** for complex setups
+2. **CLI flags override** config file options
+3. **Chain multiple formats** for multi-target builds
+4. **Use --clean** to avoid stale files
+5. **Enable --dts** for TypeScript libraries
+6. **Use --watch** during development
+7. **Add --on-success** for post-build tasks
+8. **Use --exports** to auto-generate package.json fields
+
+## Related Documentation
+
+- [Config File](option-config-file.md) - Configuration file options
+- [Entry](option-entry.md) - Entry point configuration
+- [Output Format](option-output-format.md) - Format options
+- [Watch Mode](option-watch-mode.md) - Watch mode details
diff --git a/skills/turborepo/LICENSE.md b/skills/turborepo/LICENSE.md
new file mode 100644
index 0000000..71b5d4e
--- /dev/null
+++ b/skills/turborepo/LICENSE.md
@@ -0,0 +1,7 @@
+Copyright (c) 2026 Vercel, Inc
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/skills/turborepo/SKILL.md b/skills/turborepo/SKILL.md
new file mode 100644
index 0000000..5993c78
--- /dev/null
+++ b/skills/turborepo/SKILL.md
@@ -0,0 +1,914 @@
+---
+name: turborepo
+description: |
+ Turborepo monorepo build system guidance. Triggers on: turbo.json, task pipelines,
+ dependsOn, caching, remote cache, the "turbo" CLI, --filter, --affected, CI optimization, environment
+ variables, internal packages, monorepo structure/best practices, and boundaries.
+
+ Use when user: configures tasks/workflows/pipelines, creates packages, sets up
+ monorepo, shares code between apps, runs changed/affected packages, debugs cache,
+ or has apps/packages directories.
+metadata:
+ version: 2.8.1
+---
+
+# Turborepo Skill
+
+Build system for JavaScript/TypeScript monorepos. Turborepo caches task outputs and runs tasks in parallel based on the dependency graph.
+
+## IMPORTANT: Package Tasks, Not Root Tasks
+
+**DO NOT create Root Tasks. ALWAYS create package tasks.**
+
+When creating tasks/scripts/pipelines, you MUST:
+
+1. Add the script to each relevant package's `package.json`
+2. Register the task in root `turbo.json`
+3. Root `package.json` only delegates via `turbo run <task>`
+
+**DO NOT** put task logic in root `package.json`. This defeats Turborepo's parallelization.
+
+```json
+// DO THIS: Scripts in each package
+// apps/web/package.json
+{ "scripts": { "build": "next build", "lint": "eslint .", "test": "vitest" } }
+
+// apps/api/package.json
+{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
+
+// packages/ui/package.json
+{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
+```
+
+```json
+// turbo.json - register tasks
+{
+ "tasks": {
+ "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
+ "lint": {},
+ "test": { "dependsOn": ["build"] }
+ }
+}
+```
+
+```json
+// Root package.json - ONLY delegates, no task logic
+{
+ "scripts": {
+ "build": "turbo run build",
+ "lint": "turbo run lint",
+ "test": "turbo run test"
+ }
+}
+```
+
+```json
+// DO NOT DO THIS - defeats parallelization
+// Root package.json
+{
+ "scripts": {
+ "build": "cd apps/web && next build && cd ../api && tsc",
+ "lint": "eslint apps/ packages/",
+ "test": "vitest"
+ }
+}
+```
+
+Root Tasks (`//#taskname`) are ONLY for tasks that truly cannot exist in packages (rare).
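+
+For the rare case where a Root Task is justified (a hypothetical repo-wide formatting check), it is declared with the `//#` prefix in `turbo.json` and backed by a script in the root `package.json`:
+
+```json
+// turbo.json
+{
+  "tasks": {
+    "//#format:check": {
+      "inputs": ["**/*.{ts,tsx,md}"]
+    }
+  }
+}
+```
+
+```json
+// Root package.json
+{
+  "scripts": { "format:check": "prettier --check ." }
+}
+```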
+
+## Secondary Rule: `turbo run` vs `turbo`
+
+**Always use `turbo run` when the command is written into code:**
+
+```json
+// package.json - ALWAYS "turbo run"
+{
+ "scripts": {
+ "build": "turbo run build"
+ }
+}
+```
+
+```yaml
+# CI workflows - ALWAYS "turbo run"
+- run: turbo run build --affected
+```
+
+**The shorthand `turbo <task>` is ONLY for one-off terminal commands** typed directly by humans or agents. Never write `turbo build` into package.json, CI, or scripts.
+
+## Quick Decision Trees
+
+### "I need to configure a task"
+
+```
+Configure a task?
+├─ Define task dependencies → references/configuration/tasks.md
+├─ Lint/check-types (parallel + caching) → Use Transit Nodes pattern (see below)
+├─ Specify build outputs → references/configuration/tasks.md#outputs
+├─ Handle environment variables → references/environment/RULE.md
+├─ Set up dev/watch tasks → references/configuration/tasks.md#persistent
+├─ Package-specific config → references/configuration/RULE.md#package-configurations
+└─ Global settings (cacheDir, daemon) → references/configuration/global-options.md
+```
+
+### "My cache isn't working"
+
+```
+Cache problems?
+├─ Tasks run but outputs not restored → Missing `outputs` key
+├─ Cache misses unexpectedly → references/caching/gotchas.md
+├─ Need to debug hash inputs → Use --summarize or --dry
+├─ Want to skip cache entirely → Use --force or cache: false
+├─ Remote cache not working → references/caching/remote-cache.md
+└─ Environment causing misses → references/environment/gotchas.md
+```
+
+### "I want to run only changed packages"
+
+```
+Run only what changed?
+├─ Changed packages + dependents (RECOMMENDED) → turbo run build --affected
+├─ Custom base branch → TURBO_SCM_BASE=origin/develop turbo run build --affected
+├─ Manual git comparison → --filter=...[origin/main]
+└─ See all filter options → references/filtering/RULE.md
+```
+
+**`--affected` is the primary way to run only changed packages.** It automatically compares against the default branch and includes dependents.
+
+### "I want to filter packages"
+
+```
+Filter packages?
+├─ Only changed packages → --affected (see above)
+├─ By package name → --filter=web
+├─ By directory → --filter=./apps/*
+├─ Package + dependencies → --filter=web...
+├─ Package + dependents → --filter=...web
+└─ Complex combinations → references/filtering/patterns.md
+```
+
+### "Environment variables aren't working"
+
+```
+Environment issues?
+├─ Vars not available at runtime → Strict mode filtering (default)
+├─ Cache hits with wrong env → Var not in `env` key
+├─ .env changes not causing rebuilds → .env not in `inputs`
+├─ CI variables missing → references/environment/gotchas.md
+└─ Framework vars (NEXT_PUBLIC_*) → Auto-included via inference
+```
+
+### "I need to set up CI"
+
+```
+CI setup?
+├─ GitHub Actions → references/ci/github-actions.md
+├─ Vercel deployment → references/ci/vercel.md
+├─ Remote cache in CI → references/caching/remote-cache.md
+├─ Only build changed packages → --affected flag
+├─ Skip unnecessary builds → turbo-ignore (references/cli/commands.md)
+└─ Skip container setup when no changes → turbo-ignore
+```
+
+### "I want to watch for changes during development"
+
+```
+Watch mode?
+├─ Re-run tasks on change → turbo watch (references/watch/RULE.md)
+├─ Dev servers with dependencies → Use `with` key (references/configuration/tasks.md#with)
+├─ Restart dev server on dep change → Use `interruptible: true`
+└─ Persistent dev tasks → Use `persistent: true`
+```
+
+### "I need to create/structure a package"
+
+```
+Package creation/structure?
+├─ Create an internal package → references/best-practices/packages.md
+├─ Repository structure → references/best-practices/structure.md
+├─ Dependency management → references/best-practices/dependencies.md
+├─ Best practices overview → references/best-practices/RULE.md
+├─ JIT vs Compiled packages → references/best-practices/packages.md#compilation-strategies
+└─ Sharing code between apps → references/best-practices/RULE.md#package-types
+```
+
+### "How should I structure my monorepo?"
+
+```
+Monorepo structure?
+├─ Standard layout (apps/, packages/) → references/best-practices/RULE.md
+├─ Package types (apps vs libraries) → references/best-practices/RULE.md#package-types
+├─ Creating internal packages → references/best-practices/packages.md
+├─ TypeScript configuration → references/best-practices/structure.md#typescript-configuration
+├─ ESLint configuration → references/best-practices/structure.md#eslint-configuration
+├─ Dependency management → references/best-practices/dependencies.md
+└─ Enforce package boundaries → references/boundaries/RULE.md
+```
+
+### "I want to enforce architectural boundaries"
+
+```
+Enforce boundaries?
+├─ Check for violations → turbo boundaries
+├─ Tag packages → references/boundaries/RULE.md#tags
+├─ Restrict which packages can import others → references/boundaries/RULE.md#rule-types
+└─ Prevent cross-package file imports → references/boundaries/RULE.md
+```
+
+## Critical Anti-Patterns
+
+### Using `turbo` Shorthand in Code
+
+**Always use `turbo run` in package.json scripts and CI pipelines.** The shorthand `turbo <task>` is intended for interactive terminal use only.
+
+```json
+// WRONG - using shorthand in package.json
+{
+ "scripts": {
+ "build": "turbo build",
+ "dev": "turbo dev"
+ }
+}
+
+// CORRECT
+{
+ "scripts": {
+ "build": "turbo run build",
+ "dev": "turbo run dev"
+ }
+}
+```
+
+```yaml
+# WRONG - using shorthand in CI
+- run: turbo build --affected
+
+# CORRECT
+- run: turbo run build --affected
+```
+
+### Root Scripts Bypassing Turbo
+
+Root `package.json` scripts MUST delegate to `turbo run`, not run tasks directly.
+
+```json
+// WRONG - bypasses turbo entirely
+{
+ "scripts": {
+ "build": "bun build",
+ "dev": "bun dev"
+ }
+}
+
+// CORRECT - delegates to turbo
+{
+ "scripts": {
+ "build": "turbo run build",
+ "dev": "turbo run dev"
+ }
+}
+```
+
+### Using `&&` to Chain Turbo Tasks
+
+Don't chain turbo tasks with `&&`. Let turbo orchestrate.
+
+```json
+// WRONG - turbo task not using turbo run
+{
+ "scripts": {
+ "changeset:publish": "bun build && changeset publish"
+ }
+}
+
+// CORRECT
+{
+ "scripts": {
+ "changeset:publish": "turbo run build && changeset publish"
+ }
+}
+```
+
+### `prebuild` Scripts That Manually Build Dependencies
+
+Scripts like `prebuild` that manually build other packages bypass Turborepo's dependency graph.
+
+```json
+// WRONG - manually building dependencies
+{
+ "scripts": {
+ "prebuild": "cd ../../packages/types && bun run build && cd ../utils && bun run build",
+ "build": "next build"
+ }
+}
+```
+
+**However, the fix depends on whether workspace dependencies are declared:**
+
+1. **If dependencies ARE declared** (e.g., `"@repo/types": "workspace:*"` in package.json), remove the `prebuild` script. Turbo's `dependsOn: ["^build"]` handles this automatically.
+
+2. **If dependencies are NOT declared**, the `prebuild` exists because `^build` won't trigger without a dependency relationship. The fix is to:
+ - Add the dependency to package.json: `"@repo/types": "workspace:*"`
+ - Then remove the `prebuild` script
+
+```json
+// CORRECT - declare dependency, let turbo handle build order
+// package.json
+{
+ "dependencies": {
+ "@repo/types": "workspace:*",
+ "@repo/utils": "workspace:*"
+ },
+ "scripts": {
+ "build": "next build"
+ }
+}
+
+// turbo.json
+{
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"]
+ }
+ }
+}
+```
+
+**Key insight:** `^build` only runs build in packages listed as dependencies. No dependency declaration = no automatic build ordering.
+
+### Overly Broad `globalDependencies`
+
+`globalDependencies` affects ALL tasks in ALL packages. Be specific.
+
+```json
+// WRONG - heavy hammer, affects all hashes
+{
+ "globalDependencies": ["**/.env.*local"]
+}
+
+// BETTER - move to task-level inputs
+{
+ "globalDependencies": [".env"],
+ "tasks": {
+ "build": {
+ "inputs": ["$TURBO_DEFAULT$", ".env*"],
+ "outputs": ["dist/**"]
+ }
+ }
+}
+```
+
+### Repetitive Task Configuration
+
+Look for repeated configuration across tasks that can be collapsed. Turborepo supports shared configuration patterns.
+
+```json
+// WRONG - repetitive env and inputs across tasks
+{
+ "tasks": {
+ "build": {
+ "env": ["API_URL", "DATABASE_URL"],
+ "inputs": ["$TURBO_DEFAULT$", ".env*"]
+ },
+ "test": {
+ "env": ["API_URL", "DATABASE_URL"],
+ "inputs": ["$TURBO_DEFAULT$", ".env*"]
+ },
+ "dev": {
+ "env": ["API_URL", "DATABASE_URL"],
+ "inputs": ["$TURBO_DEFAULT$", ".env*"],
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+
+// BETTER - use globalEnv and globalDependencies for shared config
+{
+ "globalEnv": ["API_URL", "DATABASE_URL"],
+ "globalDependencies": [".env*"],
+ "tasks": {
+ "build": {},
+ "test": {},
+ "dev": {
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+```
+
+**When to use global vs task-level:**
+
+- `globalEnv` / `globalDependencies` - affects ALL tasks, use for truly shared config
+- Task-level `env` / `inputs` - use when only specific tasks need it
+
+### NOT an Anti-Pattern: Large `env` Arrays
+
+A large `env` array (even 50+ variables) is **not** a problem. It usually means the user was thorough about declaring their build's environment dependencies. Do not flag this as an issue.
+
+### Using `--parallel` Flag
+
+The `--parallel` flag bypasses Turborepo's dependency graph. If tasks need parallel execution, configure `dependsOn` correctly instead.
+
+```bash
+# WRONG - bypasses dependency graph
+turbo run lint --parallel
+
+# CORRECT - configure tasks to allow parallel execution
+# In turbo.json, set dependsOn appropriately (or use transit nodes)
+turbo run lint
+```
+
+### Package-Specific Task Overrides in Root turbo.json
+
+When multiple packages need different task configurations, use **Package Configurations** (`turbo.json` in each package) instead of cluttering root `turbo.json` with `package#task` overrides.
+
+```json
+// WRONG - root turbo.json with many package-specific overrides
+{
+ "tasks": {
+ "test": { "dependsOn": ["build"] },
+ "@repo/web#test": { "outputs": ["coverage/**"] },
+ "@repo/api#test": { "outputs": ["coverage/**"] },
+ "@repo/utils#test": { "outputs": [] },
+ "@repo/cli#test": { "outputs": [] },
+ "@repo/core#test": { "outputs": [] }
+ }
+}
+
+// CORRECT - use Package Configurations
+// Root turbo.json - base config only
+{
+ "tasks": {
+ "test": { "dependsOn": ["build"] }
+ }
+}
+
+// packages/web/turbo.json - package-specific override
+{
+ "extends": ["//"],
+ "tasks": {
+ "test": { "outputs": ["coverage/**"] }
+ }
+}
+
+// packages/api/turbo.json
+{
+ "extends": ["//"],
+ "tasks": {
+ "test": { "outputs": ["coverage/**"] }
+ }
+}
+```
+
+**Benefits of Package Configurations:**
+
+- Keeps configuration close to the code it affects
+- Root turbo.json stays clean and focused on base patterns
+- Easier to understand what's special about each package
+- Works with `$TURBO_EXTENDS$` to inherit + extend arrays
+
+**When to use `package#task` in root:**
+
+- Single package needs a unique dependency (e.g., `"deploy": { "dependsOn": ["web#build"] }`)
+- Temporary override while migrating
+
+See `references/configuration/RULE.md#package-configurations` for full details.
+
+### Using `../` to Traverse Out of Package in `inputs`
+
+Don't use relative paths like `../` to reference files outside the package. Use `$TURBO_ROOT$` instead.
+
+```json
+// WRONG - traversing out of package
+{
+ "tasks": {
+ "build": {
+ "inputs": ["$TURBO_DEFAULT$", "../shared-config.json"]
+ }
+ }
+}
+
+// CORRECT - use $TURBO_ROOT$ for repo root
+{
+ "tasks": {
+ "build": {
+ "inputs": ["$TURBO_DEFAULT$", "$TURBO_ROOT$/shared-config.json"]
+ }
+ }
+}
+```
+
+### Missing `outputs` for File-Producing Tasks
+
+**Before flagging missing `outputs`, check what the task actually produces:**
+
+1. Read the package's script (e.g., `"build": "tsc"`, `"test": "vitest"`)
+2. Determine if it writes files to disk or only outputs to stdout
+3. Only flag if the task produces files that should be cached
+
+```json
+// WRONG: build produces files but they're not cached
+{
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"]
+ }
+ }
+}
+
+// CORRECT: build outputs are cached
+{
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "outputs": ["dist/**"]
+ }
+ }
+}
+```
+
+Common outputs by framework:
+
+- Next.js: `[".next/**", "!.next/cache/**"]`
+- Vite/Rollup: `["dist/**"]`
+- tsc: `["dist/**"]` or custom `outDir`
+
+**TypeScript `--noEmit` can still produce cache files:**
+
+When `incremental: true` in tsconfig.json, `tsc --noEmit` writes `.tsbuildinfo` files even without emitting JS. Check the tsconfig before assuming no outputs:
+
+```json
+// If tsconfig has incremental: true, tsc --noEmit produces cache files
+{
+ "tasks": {
+ "typecheck": {
+ "outputs": ["node_modules/.cache/tsbuildinfo.json"] // or wherever tsBuildInfoFile points
+ }
+ }
+}
+```
+
+To determine correct outputs for TypeScript tasks:
+
+1. Check if `incremental` or `composite` is enabled in tsconfig
+2. Check `tsBuildInfoFile` for custom cache location (default: alongside `outDir` or in project root)
+3. If no incremental mode, `tsc --noEmit` produces no files
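+
+A tsconfig sketch for the incremental case (the cache file path is an example — check `tsBuildInfoFile` in the actual config):
+
+```json
+// tsconfig.json - incremental noEmit still writes a .tsbuildinfo cache file
+{
+  "compilerOptions": {
+    "noEmit": true,
+    "incremental": true,
+    "tsBuildInfoFile": "node_modules/.cache/tsbuildinfo.json"
+  }
+}
+```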
+
+### `^build` vs `build` Confusion
+
+```json
+{
+ "tasks": {
+ // ^build = run build in DEPENDENCIES first (other packages this one imports)
+ "build": {
+ "dependsOn": ["^build"]
+ },
+ // build (no ^) = run build in SAME PACKAGE first
+ "test": {
+ "dependsOn": ["build"]
+ },
+ // pkg#task = specific package's task
+ "deploy": {
+ "dependsOn": ["web#build"]
+ }
+ }
+}
+```
+
+### Environment Variables Not Hashed
+
+```json
+// WRONG: API_URL changes won't cause rebuilds
+{
+ "tasks": {
+ "build": {
+ "outputs": ["dist/**"]
+ }
+ }
+}
+
+// CORRECT: API_URL changes invalidate cache
+{
+ "tasks": {
+ "build": {
+ "outputs": ["dist/**"],
+ "env": ["API_URL", "API_KEY"]
+ }
+ }
+}
+```
+
+### `.env` Files Not in Inputs
+
+Turbo does NOT load `.env` files - your framework does. But Turbo needs to know about changes:
+
+```json
+// WRONG: .env changes don't invalidate cache
+{
+ "tasks": {
+ "build": {
+ "env": ["API_URL"]
+ }
+ }
+}
+
+// CORRECT: .env file changes invalidate cache
+{
+ "tasks": {
+ "build": {
+ "env": ["API_URL"],
+ "inputs": ["$TURBO_DEFAULT$", ".env", ".env.*"]
+ }
+ }
+}
+```
+
+### Root `.env` File in Monorepo
+
+A `.env` file at the repo root is an anti-pattern — even for small monorepos or starter templates. It creates implicit coupling between packages and makes it unclear which packages depend on which variables.
+
+```
+// WRONG - root .env affects all packages implicitly
+my-monorepo/
+├── .env             # Which packages use this?
+├── apps/
+│   ├── web/
+│   └── api/
+└── packages/
+
+// CORRECT - .env files in packages that need them
+my-monorepo/
+├── apps/
+│   ├── web/
+│   │   └── .env    # Clear: web needs DATABASE_URL
+│   └── api/
+│       └── .env    # Clear: api needs API_KEY
+└── packages/
+```
+
+**Problems with root `.env`:**
+
+- Unclear which packages consume which variables
+- All packages get all variables (even ones they don't need)
+- Cache invalidation is coarse-grained (root .env change invalidates everything)
+- Security risk: packages may accidentally access sensitive vars meant for others
+- Bad habits start small — starter templates should model correct patterns
+
+**If you must share variables**, use `globalEnv` to be explicit about what's shared, and document why.
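+
+As a sketch, assuming `LOG_LEVEL` is a variable genuinely consumed by every package (the name is a placeholder):
+
+```json
+// turbo.json - explicit about what is shared, instead of an implicit root .env
+{
+  "globalEnv": ["LOG_LEVEL"],
+  "tasks": {
+    "build": { "outputs": ["dist/**"] }
+  }
+}
+```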
+
+### Strict Mode Filtering CI Variables
+
+By default, Turborepo filters environment variables to only those in `env`/`globalEnv`. CI variables may be missing:
+
+```json
+// If CI scripts need GITHUB_TOKEN but it's not in env:
+{
+ "globalPassThroughEnv": ["GITHUB_TOKEN", "CI"],
+ "tasks": { ... }
+}
+```
+
+Or use `--env-mode=loose` (not recommended for production).
+
+### Shared Code in Apps (Should Be a Package)
+
+```
+// WRONG: Shared code inside an app
+apps/
+  web/
+    shared/       # This breaks monorepo principles!
+      utils.ts
+
+// CORRECT: Extract to a package
+packages/
+  utils/
+    src/utils.ts
+```
+
+### Accessing Files Across Package Boundaries
+
+```typescript
+// WRONG: Reaching into another package's internals
+import { Button } from "../../packages/ui/src/button";
+
+// CORRECT: Install and import properly
+import { Button } from "@repo/ui/button";
+```
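+
+The proper import works because the package declares the subpath in its `exports` field — a minimal sketch:
+
+```json
+// packages/ui/package.json - exposes @repo/ui/button as a public entry point
+{
+  "name": "@repo/ui",
+  "exports": {
+    "./button": "./src/button.tsx"
+  }
+}
+```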
+
+### Too Many Root Dependencies
+
+```json
+// WRONG: App dependencies in root
+{
+ "dependencies": {
+ "react": "^18",
+ "next": "^14"
+ }
+}
+
+// CORRECT: Only repo tools in root
+{
+ "devDependencies": {
+ "turbo": "latest"
+ }
+}
+```
+
+## Common Task Configurations
+
+### Standard Build Pipeline
+
+```json
+{
+ "$schema": "https://turborepo.dev/schema.v2.json",
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "outputs": ["dist/**", ".next/**", "!.next/cache/**"]
+ },
+ "dev": {
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+```
+
+Add a `transit` task if you have tasks that need parallel execution with cache invalidation (see below).
+
+### Dev Task with `^dev` Pattern (for `turbo watch`)
+
+A `dev` task with `dependsOn: ["^dev"]` and `persistent: false` in root turbo.json may look unusual but is **correct for `turbo watch` workflows**:
+
+```json
+// Root turbo.json
+{
+ "tasks": {
+ "dev": {
+ "dependsOn": ["^dev"],
+ "cache": false,
+ "persistent": false // Packages have one-shot dev scripts
+ }
+ }
+}
+
+// Package turbo.json (apps/web/turbo.json)
+{
+ "extends": ["//"],
+ "tasks": {
+ "dev": {
+ "persistent": true // Apps run long-running dev servers
+ }
+ }
+}
+```
+
+**Why this works:**
+
+- **Packages** (e.g., `@acme/db`, `@acme/validators`) have `"dev": "tsc"` — one-shot type generation that completes quickly
+- **Apps** override with `persistent: true` for actual dev servers (Next.js, etc.)
+- **`turbo watch`** re-runs the one-shot package `dev` scripts when source files change, keeping types in sync
+
+**Intended usage:** Run `turbo watch dev` (not `turbo run dev`). Watch mode re-executes one-shot tasks on file changes while keeping persistent tasks running.
+
+**Alternative pattern:** Use a separate task name like `prepare` or `generate` for one-shot dependency builds to make the intent clearer:
+
+```json
+{
+ "tasks": {
+ "prepare": {
+ "dependsOn": ["^prepare"],
+ "outputs": ["dist/**"]
+ },
+ "dev": {
+ "dependsOn": ["prepare"],
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+```
+
+### Transit Nodes for Parallel Tasks with Cache Invalidation
+
+Some tasks can run in parallel (don't need built output from dependencies) but must invalidate cache when dependency source code changes.
+
+**The problem with `dependsOn: ["^taskname"]`:**
+
+- Forces sequential execution (slow)
+
+**The problem with `dependsOn: []` (no dependencies):**
+
+- Allows parallel execution (fast)
+- But cache is INCORRECT - changing dependency source won't invalidate cache
+
+**Transit Nodes solve both:**
+
+```json
+{
+ "tasks": {
+ "transit": { "dependsOn": ["^transit"] },
+ "my-task": { "dependsOn": ["transit"] }
+ }
+}
+```
+
+The `transit` task creates dependency relationships without matching any actual script, so tasks run in parallel with correct cache invalidation.
+
+**How to identify tasks that need this pattern:** Look for tasks that read source files from dependencies but don't need their build outputs.
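+
+For example, `lint` typically reads dependency source files but not their build output, so it is a natural fit:
+
+```json
+// turbo.json - lint runs in parallel, yet dependency source changes still invalidate cache
+{
+  "tasks": {
+    "transit": { "dependsOn": ["^transit"] },
+    "lint": { "dependsOn": ["transit"] }
+  }
+}
+```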
+
+### With Environment Variables
+
+```json
+{
+ "globalEnv": ["NODE_ENV"],
+ "globalDependencies": [".env"],
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "outputs": ["dist/**"],
+ "env": ["API_URL", "DATABASE_URL"]
+ }
+ }
+}
+```
+
+## Reference Index
+
+### Configuration
+
+| File | Purpose |
+| ------------------------------------------------------------------------------- | -------------------------------------------------------- |
+| [configuration/RULE.md](./references/configuration/RULE.md) | turbo.json overview, Package Configurations |
+| [configuration/tasks.md](./references/configuration/tasks.md) | dependsOn, outputs, inputs, env, cache, persistent |
+| [configuration/global-options.md](./references/configuration/global-options.md) | globalEnv, globalDependencies, cacheDir, daemon, envMode |
+| [configuration/gotchas.md](./references/configuration/gotchas.md) | Common configuration mistakes |
+
+### Caching
+
+| File | Purpose |
+| --------------------------------------------------------------- | -------------------------------------------- |
+| [caching/RULE.md](./references/caching/RULE.md) | How caching works, hash inputs |
+| [caching/remote-cache.md](./references/caching/remote-cache.md) | Vercel Remote Cache, self-hosted, login/link |
+| [caching/gotchas.md](./references/caching/gotchas.md) | Debugging cache misses, --summarize, --dry |
+
+### Environment Variables
+
+| File | Purpose |
+| ------------------------------------------------------------- | ----------------------------------------- |
+| [environment/RULE.md](./references/environment/RULE.md) | env, globalEnv, passThroughEnv |
+| [environment/modes.md](./references/environment/modes.md) | Strict vs Loose mode, framework inference |
+| [environment/gotchas.md](./references/environment/gotchas.md) | .env files, CI issues |
+
+### Filtering
+
+| File | Purpose |
+| ----------------------------------------------------------- | ------------------------ |
+| [filtering/RULE.md](./references/filtering/RULE.md) | --filter syntax overview |
+| [filtering/patterns.md](./references/filtering/patterns.md) | Common filter patterns |
+
+### CI/CD
+
+| File | Purpose |
+| --------------------------------------------------------- | ------------------------------- |
+| [ci/RULE.md](./references/ci/RULE.md) | General CI principles |
+| [ci/github-actions.md](./references/ci/github-actions.md) | Complete GitHub Actions setup |
+| [ci/vercel.md](./references/ci/vercel.md) | Vercel deployment, turbo-ignore |
+| [ci/patterns.md](./references/ci/patterns.md) | --affected, caching strategies |
+
+### CLI
+
+| File | Purpose |
+| ----------------------------------------------- | --------------------------------------------- |
+| [cli/RULE.md](./references/cli/RULE.md) | turbo run basics |
+| [cli/commands.md](./references/cli/commands.md) | turbo run flags, turbo-ignore, other commands |
+
+### Best Practices
+
+| File | Purpose |
+| ----------------------------------------------------------------------------- | --------------------------------------------------------------- |
+| [best-practices/RULE.md](./references/best-practices/RULE.md) | Monorepo best practices overview |
+| [best-practices/structure.md](./references/best-practices/structure.md) | Repository structure, workspace config, TypeScript/ESLint setup |
+| [best-practices/packages.md](./references/best-practices/packages.md) | Creating internal packages, JIT vs Compiled, exports |
+| [best-practices/dependencies.md](./references/best-practices/dependencies.md) | Dependency management, installing, version sync |
+
+### Watch Mode
+
+| File | Purpose |
+| ------------------------------------------- | ----------------------------------------------- |
+| [watch/RULE.md](./references/watch/RULE.md) | turbo watch, interruptible tasks, dev workflows |
+
+### Boundaries (Experimental)
+
+| File | Purpose |
+| ----------------------------------------------------- | ----------------------------------------------------- |
+| [boundaries/RULE.md](./references/boundaries/RULE.md) | Enforce package isolation, tag-based dependency rules |
+
+## Source Documentation
+
+This skill is based on the official Turborepo documentation at:
+
+- Source: `docs/site/content/docs/` in the Turborepo repository
+- Live: https://turborepo.dev/docs
diff --git a/skills/turborepo/SYNC.md b/skills/turborepo/SYNC.md
new file mode 100644
index 0000000..bfbc228
--- /dev/null
+++ b/skills/turborepo/SYNC.md
@@ -0,0 +1,5 @@
+# Sync Info
+
+- **Source:** `vendor/turborepo/skills/turborepo`
+- **Git SHA:** `1caed6d1b018fc29b98c14e661a574f018a922f6`
+- **Synced:** 2026-01-31
diff --git a/skills/turborepo/command/turborepo.md b/skills/turborepo/command/turborepo.md
new file mode 100644
index 0000000..8323edc
--- /dev/null
+++ b/skills/turborepo/command/turborepo.md
@@ -0,0 +1,70 @@
+---
+description: Load Turborepo skill for creating workflows, tasks, and pipelines in monorepos. Use when users ask to "create a workflow", "make a task", "generate a pipeline", or set up build orchestration.
+---
+
+Load the Turborepo skill and help with monorepo task orchestration: creating workflows, configuring tasks, setting up pipelines, and optimizing builds.
+
+## Workflow
+
+### Step 1: Load turborepo skill
+
+```
+skill({ name: 'turborepo' })
+```
+
+### Step 2: Identify task type from user request
+
+Analyze $ARGUMENTS to determine:
+
+- **Topic**: configuration, caching, filtering, environment, CI, or CLI
+- **Task type**: new setup, debugging, optimization, or implementation
+
+Use decision trees in SKILL.md to select the relevant reference files.
+
+### Step 3: Read relevant reference files
+
+Based on task type, read from `references/<topic>/`:
+
+| Task | Files to Read |
+| -------------------- | ------------------------------------------------------- |
+| Configure turbo.json | `configuration/RULE.md` + `configuration/tasks.md` |
+| Debug cache issues | `caching/gotchas.md` |
+| Set up remote cache | `caching/remote-cache.md` |
+| Filter packages | `filtering/RULE.md` + `filtering/patterns.md` |
+| Environment problems | `environment/gotchas.md` + `environment/modes.md` |
+| Set up CI | `ci/RULE.md` + `ci/github-actions.md` or `ci/vercel.md` |
+| CLI usage | `cli/commands.md` |
+
+### Step 4: Execute task
+
+Apply Turborepo-specific patterns from references to complete the user's request.
+
+**CRITICAL - When creating tasks/scripts/pipelines:**
+
+1. **DO NOT create Root Tasks** - Always create package tasks
+2. Add scripts to each relevant package's `package.json` (e.g., `apps/web/package.json`, `packages/ui/package.json`)
+3. Register the task in root `turbo.json`
+4. Root `package.json` only contains `turbo run <task>` - never actual task logic
+
+**Other things to verify:**
+
+- `outputs` defined for cacheable tasks
+- `dependsOn` uses correct syntax (`^task` vs `task`)
+- Environment variables in `env` key
+- `.env` files in `inputs` if used
+- Use `turbo run` (not `turbo`) in package.json and CI
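+
+A task entry that passes all of these checks might look like this (env var names are placeholders):
+
+```json
+// turbo.json - outputs cached, ^build ordering, env hashed, .env files in inputs
+{
+  "tasks": {
+    "build": {
+      "dependsOn": ["^build"],
+      "outputs": ["dist/**"],
+      "env": ["API_URL"],
+      "inputs": ["$TURBO_DEFAULT$", ".env*"]
+    }
+  }
+}
+```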
+
+### Step 5: Summarize
+
+```
+=== Turborepo Task Complete ===
+
+Topic: <topic>
+Files referenced: <reference files>
+
+<summary of changes made>
+```
+
+
+$ARGUMENTS
+
diff --git a/skills/turborepo/references/best-practices/RULE.md b/skills/turborepo/references/best-practices/RULE.md
new file mode 100644
index 0000000..d2f5280
--- /dev/null
+++ b/skills/turborepo/references/best-practices/RULE.md
@@ -0,0 +1,241 @@
+# Monorepo Best Practices
+
+Essential patterns for structuring and maintaining a healthy Turborepo monorepo.
+
+## Repository Structure
+
+### Standard Layout
+
+```
+my-monorepo/
+├── apps/ # Application packages (deployable)
+│ ├── web/
+│ ├── docs/
+│ └── api/
+├── packages/ # Library packages (shared code)
+│ ├── ui/
+│ ├── utils/
+│ └── config-*/ # Shared configs (eslint, typescript, etc.)
+├── package.json # Root package.json (minimal deps)
+├── turbo.json # Turborepo configuration
+├── pnpm-workspace.yaml # (pnpm) or workspaces in package.json
+└── pnpm-lock.yaml # Lockfile (required)
+```
+
+### Key Principles
+
+1. **`apps/` for deployables**: Next.js sites, APIs, CLIs - things that get deployed
+2. **`packages/` for libraries**: Shared code consumed by apps or other packages
+3. **One purpose per package**: Each package should do one thing well
+4. **No nested packages**: Don't put packages inside packages
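+
+The workspace definition should mirror this layout with top-level globs only (pnpm shown; npm/yarn/bun use the `workspaces` field in root `package.json` instead):
+
+```yaml
+# pnpm-workspace.yaml - top-level globs, no nested packages
+packages:
+  - "apps/*"
+  - "packages/*"
+```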
+
+## Package Types
+
+### Application Packages (`apps/`)
+
+- **Deployable**: These are the "endpoints" of your package graph
+- **Not installed by other packages**: Apps shouldn't be dependencies of other packages
+- **No shared code**: If code needs sharing, extract to `packages/`
+
+```json
+// apps/web/package.json
+{
+ "name": "web",
+ "private": true,
+ "dependencies": {
+ "@repo/ui": "workspace:*",
+ "next": "latest"
+ }
+}
+```
+
+### Library Packages (`packages/`)
+
+- **Shared code**: Utilities, components, configs
+- **Namespaced names**: Use `@repo/` or `@yourorg/` prefix
+- **Clear exports**: Define what the package exposes
+
+```json
+// packages/ui/package.json
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": "./src/button.tsx",
+ "./card": "./src/card.tsx"
+ }
+}
+```
+
+## Package Compilation Strategies
+
+### Just-in-Time (Simplest)
+
+Export TypeScript directly; let the app's bundler compile it.
+
+```json
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": "./src/button.tsx"
+ }
+}
+```
+
+**Pros**: Zero build config, instant changes
+**Cons**: Can't cache builds, requires app bundler support
+
+### Compiled (Recommended for Libraries)
+
+Package compiles itself with `tsc` or bundler.
+
+```json
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": {
+ "types": "./src/button.tsx",
+ "default": "./dist/button.js"
+ }
+ },
+ "scripts": {
+ "build": "tsc"
+ }
+}
+```
+
+**Pros**: Cacheable by Turborepo, works everywhere
+**Cons**: More configuration
+
+## Dependency Management
+
+### Install Where Used
+
+Install dependencies in the package that uses them, not the root.
+
+```bash
+# Good: Install in the package that needs it
+pnpm add lodash --filter=@repo/utils
+
+# Avoid: Installing everything at root
+pnpm add lodash -w # Only for repo-level tools
+```
+
+### Root Dependencies
+
+Only these belong in root `package.json`:
+
+- `turbo` - The build system
+- `husky`, `lint-staged` - Git hooks
+- Repository-level tooling
+
+### Internal Dependencies
+
+Use workspace protocol for internal packages:
+
+```json
+// pnpm/bun
+{ "@repo/ui": "workspace:*" }
+
+// npm/yarn
+{ "@repo/ui": "*" }
+```
+
+## Exports Best Practices
+
+### Use `exports` Field (Not `main`)
+
+```json
+{
+ "exports": {
+ ".": "./src/index.ts",
+ "./button": "./src/button.tsx",
+ "./utils": "./src/utils.ts"
+ }
+}
+```
+
+### Avoid Barrel Files
+
+Don't create `index.ts` files that re-export everything:
+
+```typescript
+// BAD: packages/ui/src/index.ts
+export * from './button';
+export * from './card';
+export * from './modal';
+// ... imports everything even if you need one thing
+
+// GOOD: Direct exports in package.json
+{
+ "exports": {
+ "./button": "./src/button.tsx",
+ "./card": "./src/card.tsx"
+ }
+}
+```
+
+### Namespace Your Packages
+
+```json
+// Good
+{ "name": "@repo/ui" }
+{ "name": "@acme/utils" }
+
+// Avoid (conflicts with npm registry)
+{ "name": "ui" }
+{ "name": "utils" }
+```
+
+## Common Anti-Patterns
+
+### Accessing Files Across Package Boundaries
+
+```typescript
+// BAD: Reaching into another package
+import { Button } from '../../packages/ui/src/button';
+
+// GOOD: Install and import properly
+import { Button } from '@repo/ui/button';
+```
+
+### Shared Code in Apps
+
+```
+// BAD
+apps/
+  web/
+    shared/       # This should be a package!
+      utils.ts
+
+// GOOD
+packages/
+  utils/          # Proper shared package
+    src/utils.ts
+```
+
+### Too Many Root Dependencies
+
+```json
+// BAD: Root has app dependencies
+{
+ "dependencies": {
+ "react": "^18",
+ "next": "^14",
+ "lodash": "^4"
+ }
+}
+
+// GOOD: Root only has repo tools
+{
+ "devDependencies": {
+ "turbo": "latest",
+ "husky": "latest"
+ }
+}
+```
+
+## See Also
+
+- [structure.md](./structure.md) - Detailed repository structure patterns
+- [packages.md](./packages.md) - Creating and managing internal packages
+- [dependencies.md](./dependencies.md) - Dependency management strategies
diff --git a/skills/turborepo/references/best-practices/dependencies.md b/skills/turborepo/references/best-practices/dependencies.md
new file mode 100644
index 0000000..7a6fd2b
--- /dev/null
+++ b/skills/turborepo/references/best-practices/dependencies.md
@@ -0,0 +1,246 @@
+# Dependency Management
+
+Best practices for managing dependencies in a Turborepo monorepo.
+
+## Core Principle: Install Where Used
+
+Dependencies belong in the package that uses them, not the root.
+
+```bash
+# Good: Install in specific package
+pnpm add react --filter=@repo/ui
+pnpm add next --filter=web
+
+# Avoid: Installing in root
+pnpm add react -w # Only for repo-level tools!
+```
+
+## Benefits of Local Installation
+
+### 1. Clarity
+
+Each package's `package.json` lists exactly what it needs:
+
+```json
+// packages/ui/package.json
+{
+ "dependencies": {
+ "react": "^18.0.0",
+ "class-variance-authority": "^0.7.0"
+ }
+}
+```
+
+### 2. Flexibility
+
+Different packages can use different versions when needed:
+
+```json
+// packages/legacy-ui/package.json
+{ "dependencies": { "react": "^17.0.0" } }
+
+// packages/ui/package.json
+{ "dependencies": { "react": "^18.0.0" } }
+```
+
+### 3. Better Caching
+
+Installing in root changes workspace lockfile, invalidating all caches.
+
+### 4. Pruning Support
+
+`turbo prune` can remove unused dependencies for Docker images.
+
+## What Belongs in Root
+
+Only repository-level tools:
+
+```json
+// Root package.json
+{
+ "devDependencies": {
+ "turbo": "latest",
+ "husky": "^8.0.0",
+ "lint-staged": "^15.0.0"
+ }
+}
+```
+
+**NOT** application dependencies:
+
+- react, next, express
+- lodash, axios, zod
+- Testing libraries (unless truly repo-wide)
+
+## Installing Dependencies
+
+### Single Package
+
+```bash
+# pnpm
+pnpm add lodash --filter=@repo/utils
+
+# npm
+npm install lodash --workspace=@repo/utils
+
+# yarn
+yarn workspace @repo/utils add lodash
+
+# bun
+cd packages/utils && bun add lodash
+```
+
+### Multiple Packages
+
+```bash
+# pnpm
+pnpm add jest --save-dev --filter=web --filter=@repo/ui
+
+# npm
+npm install jest --save-dev --workspace=web --workspace=@repo/ui
+
+# yarn (v2+)
+yarn workspaces foreach -R --from '{web,@repo/ui}' add jest --dev
+```
+
+### Internal Packages
+
+```bash
+# pnpm
+pnpm add @repo/ui --filter=web
+
+# This updates package.json:
+{
+ "dependencies": {
+ "@repo/ui": "workspace:*"
+ }
+}
+```
+
+## Keeping Versions in Sync
+
+### Option 1: Tooling
+
+```bash
+# syncpack - Check and fix version mismatches
+npx syncpack list-mismatches
+npx syncpack fix-mismatches
+
+# manypkg - Similar functionality
+npx @manypkg/cli check
+npx @manypkg/cli fix
+
+# sherif - Rust-based, very fast
+npx sherif
+```
+
+### Option 2: Package Manager Commands
+
+```bash
+# pnpm - Update everywhere
+pnpm up --recursive typescript@latest
+
+# npm - Update in all workspaces
+npm install typescript@latest --workspaces
+```
+
+### Option 3: pnpm Catalogs (pnpm 9.5+)
+
+```yaml
+# pnpm-workspace.yaml
+packages:
+ - "apps/*"
+ - "packages/*"
+
+catalog:
+ react: ^18.2.0
+ typescript: ^5.3.0
+```
+
+```json
+// Any package.json
+{
+ "dependencies": {
+ "react": "catalog:" // Uses version from catalog
+ }
+}
+```
+
+## Internal vs External Dependencies
+
+### Internal (Workspace)
+
+```json
+// pnpm/bun
+{ "@repo/ui": "workspace:*" }
+
+// npm/yarn
+{ "@repo/ui": "*" }
+```
+
+Turborepo understands these relationships and orders builds accordingly.
+
+### External (npm Registry)
+
+```json
+{ "lodash": "^4.17.21" }
+```
+
+Standard semver versioning from npm.
+
+## Peer Dependencies
+
+For library packages that expect the consumer to provide dependencies:
+
+```json
+// packages/ui/package.json
+{
+ "peerDependencies": {
+ "react": "^18.0.0",
+ "react-dom": "^18.0.0"
+ },
+ "devDependencies": {
+ "react": "^18.0.0", // For development/testing
+ "react-dom": "^18.0.0"
+ }
+}
+```
+
+## Common Issues
+
+### "Module not found"
+
+1. Check the dependency is installed in the right package
+2. Run `pnpm install` / `npm install` to update lockfile
+3. Check exports are defined in the package
+
+### Version Conflicts
+
+Packages can use different versions - this is a feature, not a bug. But if you need consistency:
+
+1. Use tooling (syncpack, manypkg)
+2. Use pnpm catalogs
+3. Create a lint rule
+
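+For the tooling route, syncpack can enforce consistency declaratively. A minimal sketch of a `.syncpackrc` using its `versionGroups` feature (check syncpack's docs for the current schema):
+
+```json
+// .syncpackrc
+{
+  "versionGroups": [
+    {
+      "label": "React versions must match across all packages",
+      "dependencies": ["react", "react-dom"],
+      "packages": ["**"]
+    }
+  ]
+}
+```
+
+Then `npx syncpack list-mismatches` fails when any package drifts from the group.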
+### Hoisting Issues
+
+Some tools expect dependencies in specific locations. Use package manager config:
+
+```yaml
+# .npmrc (pnpm)
+public-hoist-pattern[]=*eslint*
+public-hoist-pattern[]=*prettier*
+```
+
+## Lockfile
+
+**Required** for:
+
+- Reproducible builds
+- Turborepo dependency analysis
+- Cache correctness
+
+```bash
+# Commit your lockfile!
+git add pnpm-lock.yaml # or package-lock.json, yarn.lock
+```
diff --git a/skills/turborepo/references/best-practices/packages.md b/skills/turborepo/references/best-practices/packages.md
new file mode 100644
index 0000000..29800a1
--- /dev/null
+++ b/skills/turborepo/references/best-practices/packages.md
@@ -0,0 +1,335 @@
+# Creating Internal Packages
+
+How to create and structure internal packages in your monorepo.
+
+## Package Creation Checklist
+
+1. Create directory in `packages/`
+2. Add `package.json` with name and exports
+3. Add source code in `src/`
+4. Add `tsconfig.json` if using TypeScript
+5. Install as dependency in consuming packages
+6. Run package manager install to update lockfile
+
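+The checklist can be sketched as shell commands (the `@repo/utils` name and file contents are illustrative):
+
+```bash
+# 1-3: directory, manifest with name and exports, source
+mkdir -p packages/utils/src
+
+cat > packages/utils/package.json <<'EOF'
+{
+  "name": "@repo/utils",
+  "version": "0.0.0",
+  "private": true,
+  "exports": { ".": "./src/index.ts" }
+}
+EOF
+
+echo 'export const noop = () => {};' > packages/utils/src/index.ts
+
+# Sanity-check the manifest parses and has the expected name
+node -e "console.log(require('./packages/utils/package.json').name)"
+# → @repo/utils
+```
+
+After this, add `"@repo/utils": "workspace:*"` to a consuming package and run your package manager's install to update the lockfile.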
+## Package Compilation Strategies
+
+### Just-in-Time (JIT)
+
+Export TypeScript directly. The consuming app's bundler compiles it.
+
+```json
+// packages/ui/package.json
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": "./src/button.tsx",
+ "./card": "./src/card.tsx"
+ },
+ "scripts": {
+ "lint": "eslint .",
+ "check-types": "tsc --noEmit"
+ }
+}
+```
+
+**When to use:**
+
+- Apps use modern bundlers (Turbopack, webpack, Vite)
+- You want minimal configuration
+- Build times are acceptable without caching
+
+**Limitations:**
+
+- No Turborepo cache for the package itself
+- Consumer must support TypeScript compilation
+- Can't use TypeScript `paths` (use Node.js subpath imports instead)
+
+### Compiled
+
+Package handles its own compilation.
+
+```json
+// packages/ui/package.json
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": {
+ "types": "./src/button.tsx",
+ "default": "./dist/button.js"
+ }
+ },
+ "scripts": {
+ "build": "tsc",
+ "dev": "tsc --watch"
+ }
+}
+```
+
+```json
+// packages/ui/tsconfig.json
+{
+ "extends": "@repo/typescript-config/library.json",
+ "compilerOptions": {
+ "outDir": "dist",
+ "rootDir": "src"
+ },
+ "include": ["src"],
+ "exclude": ["node_modules", "dist"]
+}
+```
+
+**When to use:**
+
+- You want Turborepo to cache builds
+- Package will be used by non-bundler tools
+- You need maximum compatibility
+
+**Remember:** Add `dist/**` to turbo.json outputs!
+
+## Defining Exports
+
+### Multiple Entrypoints
+
+```json
+{
+ "exports": {
+ ".": "./src/index.ts", // @repo/ui
+ "./button": "./src/button.tsx", // @repo/ui/button
+ "./card": "./src/card.tsx", // @repo/ui/card
+ "./hooks": "./src/hooks/index.ts" // @repo/ui/hooks
+ }
+}
+```
+
+### Conditional Exports (Compiled)
+
+```json
+{
+ "exports": {
+ "./button": {
+ "types": "./src/button.tsx",
+ "import": "./dist/button.mjs",
+ "require": "./dist/button.cjs",
+ "default": "./dist/button.js"
+ }
+ }
+}
+```
+
+## Installing Internal Packages
+
+### Add to Consuming Package
+
+```json
+// apps/web/package.json
+{
+ "dependencies": {
+ "@repo/ui": "workspace:*" // pnpm/bun
+ // "@repo/ui": "*" // npm/yarn
+ }
+}
+```
+
+### Run Install
+
+```bash
+pnpm install # Updates lockfile with new dependency
+```
+
+### Import and Use
+
+```typescript
+// apps/web/src/page.tsx
+import { Button } from '@repo/ui/button';
+
+export default function Page() {
+  return <Button>Click me</Button>;
+}
+```
+
+## One Purpose Per Package
+
+### Good Examples
+
+```
+packages/
+├── ui/ # Shared UI components
+├── utils/ # General utilities
+├── auth/ # Authentication logic
+├── database/ # Database client/schemas
+├── eslint-config/ # ESLint configuration
+├── typescript-config/ # TypeScript configuration
+└── api-client/ # Generated API client
+```
+
+### Avoid Mega-Packages
+
+```
+// BAD: One package for everything
+packages/
+└── shared/
+ ├── components/
+ ├── utils/
+ ├── hooks/
+ ├── types/
+ └── api/
+
+// GOOD: Separate by purpose
+packages/
+├── ui/ # Components
+├── utils/ # Utilities
+├── hooks/ # React hooks
+├── types/ # Shared TypeScript types
+└── api-client/ # API utilities
+```
+
+## Config Packages
+
+### TypeScript Config
+
+```json
+// packages/typescript-config/package.json
+{
+ "name": "@repo/typescript-config",
+ "exports": {
+ "./base.json": "./base.json",
+ "./nextjs.json": "./nextjs.json",
+ "./library.json": "./library.json"
+ }
+}
+```
+
+### ESLint Config
+
+```json
+// packages/eslint-config/package.json
+{
+ "name": "@repo/eslint-config",
+ "exports": {
+ "./base": "./base.js",
+ "./next": "./next.js"
+ },
+ "dependencies": {
+ "eslint": "^8.0.0",
+ "eslint-config-next": "latest"
+ }
+}
+```
+
+## Common Mistakes
+
+### Forgetting to Export
+
+```json
+// BAD: No exports defined
+{
+ "name": "@repo/ui"
+}
+
+// GOOD: Clear exports
+{
+ "name": "@repo/ui",
+ "exports": {
+ "./button": "./src/button.tsx"
+ }
+}
+```
+
+### Wrong Workspace Syntax
+
+```json
+// pnpm/bun
+{ "@repo/ui": "workspace:*" } // Correct
+
+// npm/yarn
+{ "@repo/ui": "*" } // Correct
+{ "@repo/ui": "workspace:*" } // Wrong for npm/yarn!
+```
+
+### Missing from turbo.json Outputs
+
+```json
+// Package builds to dist/, but turbo.json doesn't know
+{
+ "tasks": {
+ "build": {
+ "outputs": [".next/**"] // Missing dist/**!
+ }
+ }
+}
+
+// Correct
+{
+ "tasks": {
+ "build": {
+ "outputs": [".next/**", "dist/**"]
+ }
+ }
+}
+```
+
+## TypeScript Best Practices
+
+### Use Node.js Subpath Imports (Not `paths`)
+
+TypeScript `compilerOptions.paths` breaks with JIT packages. Use Node.js subpath imports instead (TypeScript 5.4+).
+
+**JIT Package:**
+
+```json
+// packages/ui/package.json
+{
+ "imports": {
+ "#*": "./src/*"
+ }
+}
+```
+
+```typescript
+// packages/ui/button.tsx
+import { MY_STRING } from "#utils.ts"; // Uses .ts extension
+```
+
+**Compiled Package:**
+
+```json
+// packages/ui/package.json
+{
+ "imports": {
+ "#*": "./dist/*"
+ }
+}
+```
+
+```typescript
+// packages/ui/button.tsx
+import { MY_STRING } from "#utils.js"; // Uses .js extension
+```
+
+### Use `tsc` for Internal Packages
+
+For internal packages, prefer `tsc` over bundlers. Bundlers can mangle code before it reaches your app's bundler, causing hard-to-debug issues.
+
+### Enable Go-to-Definition
+
+For Compiled Packages, enable declaration maps:
+
+```json
+// tsconfig.json
+{
+ "compilerOptions": {
+ "declaration": true,
+ "declarationMap": true
+ }
+}
+```
+
+This creates `.d.ts` and `.d.ts.map` files for IDE navigation.
+
+### No Root tsconfig.json Needed
+
+Each package should have its own `tsconfig.json`. A root one causes all tasks to miss cache when changed. Only use root `tsconfig.json` for non-package scripts.
+
+### Avoid TypeScript Project References
+
+They add complexity and another caching layer. Turborepo handles dependencies better.
diff --git a/skills/turborepo/references/best-practices/structure.md b/skills/turborepo/references/best-practices/structure.md
new file mode 100644
index 0000000..8e31de3
--- /dev/null
+++ b/skills/turborepo/references/best-practices/structure.md
@@ -0,0 +1,269 @@
+# Repository Structure
+
+Detailed guidance on structuring a Turborepo monorepo.
+
+## Workspace Configuration
+
+### pnpm (Recommended)
+
+```yaml
+# pnpm-workspace.yaml
+packages:
+ - "apps/*"
+ - "packages/*"
+```
+
+### npm/yarn/bun
+
+```json
+// package.json
+{
+ "workspaces": ["apps/*", "packages/*"]
+}
+```
+
+## Root package.json
+
+```json
+{
+ "name": "my-monorepo",
+ "private": true,
+ "packageManager": "pnpm@9.0.0",
+ "scripts": {
+ "build": "turbo run build",
+ "dev": "turbo run dev",
+ "lint": "turbo run lint",
+ "test": "turbo run test"
+ },
+ "devDependencies": {
+ "turbo": "latest"
+ }
+}
+```
+
+Key points:
+
+- `private: true` - Prevents accidental publishing
+- `packageManager` - Enforces consistent package manager version
+- **Scripts only delegate to `turbo run`** - No actual build logic here!
+- Minimal devDependencies (just turbo and repo tools)
+
+## Always Prefer Package Tasks
+
+**Always use package tasks. Only use Root Tasks if you cannot succeed with package tasks.**
+
+```json
+// packages/web/package.json
+{
+ "scripts": {
+ "build": "next build",
+ "lint": "eslint .",
+ "test": "vitest",
+ "typecheck": "tsc --noEmit"
+ }
+}
+
+// packages/api/package.json
+{
+ "scripts": {
+ "build": "tsc",
+ "lint": "eslint .",
+ "test": "vitest",
+ "typecheck": "tsc --noEmit"
+ }
+}
+```
+
+Package tasks enable Turborepo to:
+
+1. **Parallelize** - Run `web#lint` and `api#lint` simultaneously
+2. **Cache individually** - Each package's task output is cached separately
+3. **Filter precisely** - Run `turbo run test --filter=web` for just one package
+
+**Root Tasks are a fallback** for tasks that truly cannot run per-package:
+
+```json
+// AVOID unless necessary - sequential, not parallelized, can't filter
+{
+ "scripts": {
+ "lint": "eslint apps/web && eslint apps/api && eslint packages/ui"
+ }
+}
+```
+
+## Root turbo.json
+
+```json
+{
+ "$schema": "https://turborepo.dev/schema.v2.json",
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "outputs": ["dist/**", ".next/**", "!.next/cache/**"]
+ },
+ "lint": {},
+ "test": {
+ "dependsOn": ["build"]
+ },
+ "dev": {
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+```
+
+## Directory Organization
+
+### Grouping Packages
+
+You can group packages by adding more workspace paths:
+
+```yaml
+# pnpm-workspace.yaml
+packages:
+ - "apps/*"
+ - "packages/*"
+ - "packages/config/*" # Grouped configs
+ - "packages/features/*" # Feature packages
+```
+
+This allows:
+
+```
+packages/
+├── ui/
+├── utils/
+├── config/
+│ ├── eslint/
+│ ├── typescript/
+│ └── tailwind/
+└── features/
+ ├── auth/
+ └── payments/
+```
+
+### What NOT to Do
+
+```yaml
+# BAD: Nested wildcards cause ambiguous behavior
+packages:
+ - "packages/**" # Don't do this!
+```
+
+## Package Anatomy
+
+### Minimum Required Files
+
+```
+packages/ui/
+├── package.json # Required: Makes it a package
+├── src/ # Source code
+│ └── button.tsx
+└── tsconfig.json # TypeScript config (if using TS)
+```
+
+### package.json Requirements
+
+```json
+{
+ "name": "@repo/ui", // Unique, namespaced name
+ "version": "0.0.0", // Version (can be 0.0.0 for internal)
+ "private": true, // Prevents accidental publishing
+ "exports": { // Entry points
+ "./button": "./src/button.tsx"
+ }
+}
+```
+
+## TypeScript Configuration
+
+### Shared Base Config
+
+Create a shared TypeScript config package:
+
+```
+packages/
+└── typescript-config/
+ ├── package.json
+ ├── base.json
+ ├── nextjs.json
+ └── library.json
+```
+
+```json
+// packages/typescript-config/base.json
+{
+ "compilerOptions": {
+ "strict": true,
+ "esModuleInterop": true,
+ "skipLibCheck": true,
+ "moduleResolution": "bundler",
+ "module": "ESNext",
+ "target": "ES2022"
+ }
+}
+```
+
+### Extending in Packages
+
+```json
+// packages/ui/tsconfig.json
+{
+ "extends": "@repo/typescript-config/library.json",
+ "compilerOptions": {
+ "outDir": "dist",
+ "rootDir": "src"
+ },
+ "include": ["src"],
+ "exclude": ["node_modules", "dist"]
+}
+```
+
+### No Root tsconfig.json
+
+You likely don't need a `tsconfig.json` in the workspace root. Each package should have its own config extending from the shared config package.
+
+## ESLint Configuration
+
+### Shared Config Package
+
+```
+packages/
+└── eslint-config/
+ ├── package.json
+ ├── base.js
+ ├── next.js
+ └── library.js
+```
+
+```json
+// packages/eslint-config/package.json
+{
+ "name": "@repo/eslint-config",
+ "exports": {
+ "./base": "./base.js",
+ "./next": "./next.js",
+ "./library": "./library.js"
+ }
+}
+```
+
+### Using in Packages
+
+```js
+// apps/web/.eslintrc.js
+module.exports = {
+ extends: ["@repo/eslint-config/next"],
+};
+```
+
+## Lockfile
+
+A lockfile is **required** for:
+
+- Reproducible builds
+- Turborepo to understand package dependencies
+- Cache correctness
+
+Without a lockfile, you'll see unpredictable behavior.
diff --git a/skills/turborepo/references/boundaries/RULE.md b/skills/turborepo/references/boundaries/RULE.md
new file mode 100644
index 0000000..3deb0a4
--- /dev/null
+++ b/skills/turborepo/references/boundaries/RULE.md
@@ -0,0 +1,126 @@
+# Boundaries
+
+**Experimental feature** - See [RFC](https://github.com/vercel/turborepo/discussions/9435)
+
+Full docs: https://turborepo.dev/docs/reference/boundaries
+
+Boundaries enforce package isolation by detecting:
+
+1. Imports of files outside the package's directory
+2. Imports of packages not declared in `package.json` dependencies
+
+## Usage
+
+```bash
+turbo boundaries
+```
+
+Run this to check for workspace violations across your monorepo.
+
+## Tags
+
+Tags allow you to create rules for which packages can depend on each other.
+
+### Adding Tags to a Package
+
+```json
+// packages/ui/turbo.json
+{
+ "tags": ["internal"]
+}
+```
+
+### Configuring Tag Rules
+
+Rules go in root `turbo.json`:
+
+```json
+// turbo.json
+{
+ "boundaries": {
+ "tags": {
+ "public": {
+ "dependencies": {
+ "deny": ["internal"]
+ }
+ }
+ }
+ }
+}
+```
+
+This prevents `public`-tagged packages from importing `internal`-tagged packages.
+
+### Rule Types
+
+**Allow-list approach** (only allow specific tags):
+
+```json
+{
+ "boundaries": {
+ "tags": {
+ "public": {
+ "dependencies": {
+ "allow": ["public"]
+ }
+ }
+ }
+ }
+}
+```
+
+**Deny-list approach** (block specific tags):
+
+```json
+{
+ "boundaries": {
+ "tags": {
+ "public": {
+ "dependencies": {
+ "deny": ["internal"]
+ }
+ }
+ }
+ }
+}
+```
+
+**Restrict dependents** (who can import this package):
+
+```json
+{
+ "boundaries": {
+ "tags": {
+ "private": {
+ "dependents": {
+ "deny": ["public"]
+ }
+ }
+ }
+ }
+}
+```
+
+### Using Package Names
+
+Package names work in place of tags:
+
+```json
+{
+ "boundaries": {
+ "tags": {
+ "private": {
+ "dependents": {
+ "deny": ["@repo/my-pkg"]
+ }
+ }
+ }
+ }
+}
+```
+
+## Key Points
+
+- Rules apply transitively (dependencies of dependencies)
+- Helps enforce architectural boundaries at scale
+- Catches violations before runtime/build errors
diff --git a/skills/turborepo/references/caching/RULE.md b/skills/turborepo/references/caching/RULE.md
new file mode 100644
index 0000000..fe6388e
--- /dev/null
+++ b/skills/turborepo/references/caching/RULE.md
@@ -0,0 +1,107 @@
+# How Turborepo Caching Works
+
+Turborepo's core principle: **never do the same work twice**.
+
+## The Cache Equation
+
+```
+fingerprint(inputs) → stored outputs
+```
+
+If inputs haven't changed, restore outputs from cache instead of re-running the task.
+
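+The equation can be made concrete with a toy sketch. This illustrative shell script (hypothetical file names, not turbo's real cache format) hashes the task's input and reuses the stored output when the hash matches:
+
+```bash
+# Toy version of fingerprint(inputs) → stored outputs
+mkdir -p .cache
+echo 'export const x = 1' > src.ts         # the task's only input
+
+build() {
+  key=$(sha256sum src.ts | cut -d' ' -f1)  # fingerprint the inputs
+  if [ -f ".cache/$key" ]; then
+    cp ".cache/$key" out.js                # hit: restore the stored output
+    echo "cache hit"
+  else
+    tr 'a-z' 'A-Z' < src.ts > out.js       # stand-in for the real build step
+    cp out.js ".cache/$key"                # store output under the fingerprint
+    echo "cache miss"
+  fi
+}
+
+build  # first run: prints "cache miss"
+build  # same inputs again: prints "cache hit"
+```
+
+Editing `src.ts` changes the fingerprint, so the next run misses — exactly the invalidation behavior described below.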
+## What Determines the Cache Key
+
+### Global Hash Inputs
+
+These affect ALL tasks in the repo:
+
+- `package-lock.json` / `yarn.lock` / `pnpm-lock.yaml`
+- Files listed in `globalDependencies`
+- Environment variables in `globalEnv`
+- `turbo.json` configuration
+
+```json
+{
+ "globalDependencies": [".env", "tsconfig.base.json"],
+ "globalEnv": ["CI", "NODE_ENV"]
+}
+```
+
+### Task Hash Inputs
+
+These affect specific tasks:
+
+- All files in the package (unless filtered by `inputs`)
+- `package.json` contents
+- Environment variables in task's `env` key
+- Task configuration (command, outputs, dependencies)
+- Hashes of dependent tasks (`dependsOn`)
+
+```json
+{
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "inputs": ["src/**", "package.json", "tsconfig.json"],
+ "env": ["API_URL"]
+ }
+ }
+}
+```
+
+## What Gets Cached
+
+1. **File outputs** - files/directories specified in `outputs`
+2. **Task logs** - stdout/stderr for replay on cache hit
+
+```json
+{
+ "tasks": {
+ "build": {
+ "outputs": ["dist/**", ".next/**"]
+ }
+ }
+}
+```
+
+## Local Cache Location
+
+```
+.turbo/cache/
+├── <hash>.tar.zst  # compressed outputs
+├── <hash>.tar.zst
+└── ...
+```
+
+Add `.turbo` to `.gitignore`.
+
+## Cache Restoration
+
+On cache hit, Turborepo:
+
+1. Extracts archived outputs to their original locations
+2. Replays the logged stdout/stderr
+3. Reports the task as cached (shows `FULL TURBO` in output)
+
+## Example Flow
+
+```bash
+# First run - executes build, caches result
+turbo build
+# → packages/ui: cache miss, executing...
+# → packages/web: cache miss, executing...
+
+# Second run - same inputs, restores from cache
+turbo build
+# → packages/ui: cache hit, replaying output
+# → packages/web: cache hit, replaying output
+# → FULL TURBO
+```
+
+## Key Points
+
+- Cache is content-addressed (based on input hash, not timestamps)
+- An omitted `outputs` key or an empty `outputs` array caches only the task's logs, not files
+- Cache is invalidated when ANY input changes
diff --git a/skills/turborepo/references/caching/gotchas.md b/skills/turborepo/references/caching/gotchas.md
new file mode 100644
index 0000000..17d4499
--- /dev/null
+++ b/skills/turborepo/references/caching/gotchas.md
@@ -0,0 +1,169 @@
+# Debugging Cache Issues
+
+## Diagnostic Tools
+
+### `--summarize`
+
+Generates a JSON file with all hash inputs. Compare two runs to find differences.
+
+```bash
+turbo build --summarize
+# Creates .turbo/runs/<run-id>.json
+```
+
+The summary includes:
+
+- Global hash and its inputs
+- Per-task hashes and their inputs
+- Environment variables that affected the hash
+
+**Comparing runs:**
+
+```bash
+# Run twice, compare the summaries
+diff .turbo/runs/<first-run>.json .turbo/runs/<second-run>.json
+```
+
+### `--dry` / `--dry=json`
+
+See what would run without executing anything:
+
+```bash
+turbo build --dry
+turbo build --dry=json # machine-readable output
+```
+
+Shows cache status for each task without running them.
+
+### `--force`
+
+Skip reading cache, re-execute all tasks:
+
+```bash
+turbo build --force
+```
+
+Useful to verify tasks actually work (not just cached results).
+
+## Unexpected Cache Misses
+
+**Symptom:** Task runs when you expected a cache hit.
+
+### Environment Variable Changed
+
+Check if an env var in the `env` key changed:
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["API_URL", "NODE_ENV"]
+ }
+ }
+}
+```
+
+Different `API_URL` between runs = cache miss.
+
+### .env File Changed
+
+`.env` files aren't tracked by default. Add to `inputs`:
+
+```json
+{
+ "tasks": {
+ "build": {
+ "inputs": ["$TURBO_DEFAULT$", ".env", ".env.local"]
+ }
+ }
+}
+```
+
+Or use `globalDependencies` for repo-wide env files:
+
+```json
+{
+ "globalDependencies": [".env"]
+}
+```
+
+### Lockfile Changed
+
+Installing/updating packages changes the global hash.
+
+### Source Files Changed
+
+Any file in the package (or in `inputs`) triggers a miss.
+
+### turbo.json Changed
+
+Config changes invalidate the global hash.
+
+## Incorrect Cache Hits
+
+**Symptom:** Cached output is stale/wrong.
+
+### Missing Environment Variable
+
+Task uses an env var not listed in `env`:
+
+```javascript
+// build.js
+const apiUrl = process.env.API_URL; // not tracked!
+```
+
+Fix: add to task config:
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["API_URL"]
+ }
+ }
+}
+```
+
+### Missing File in Inputs
+
+Task reads a file outside default inputs:
+
+```json
+{
+ "tasks": {
+ "build": {
+ "inputs": [
+ "$TURBO_DEFAULT$",
+ "../../shared-config.json" // file outside package
+ ]
+ }
+ }
+}
+```
+
+## Useful Flags
+
+```bash
+# Only show output for cache misses
+turbo build --output-logs=new-only
+
+# Show output for everything (debugging)
+turbo build --output-logs=full
+
+# See why tasks are running
+turbo build --verbosity=2
+```
+
+## Quick Checklist
+
+Cache miss when expected hit:
+
+1. Run with `--summarize`, compare with previous run
+2. Check env vars with `--dry=json`
+3. Look for lockfile/config changes in git
+
+Cache hit when expected miss:
+
+1. Verify env var is in `env` array
+2. Verify file is in `inputs` array
+3. Check if file is outside package directory
diff --git a/skills/turborepo/references/caching/remote-cache.md b/skills/turborepo/references/caching/remote-cache.md
new file mode 100644
index 0000000..da76458
--- /dev/null
+++ b/skills/turborepo/references/caching/remote-cache.md
@@ -0,0 +1,127 @@
+# Remote Caching
+
+Share cache artifacts across your team and CI pipelines.
+
+## Benefits
+
+- Team members get cache hits from each other's work
+- CI gets cache hits from local development (and vice versa)
+- Dramatically faster CI runs after first build
+- No more "works on my machine" rebuilds
+
+## Vercel Remote Cache
+
+Free, zero-config when deploying on Vercel. For local dev and other CI:
+
+### Local Development Setup
+
+```bash
+# Authenticate with Vercel
+npx turbo login
+
+# Link repo to your Vercel team
+npx turbo link
+```
+
+This creates `.turbo/config.json` with your team info (gitignored by default).
+
+### CI Setup
+
+Set these environment variables:
+
+```bash
+TURBO_TOKEN=<your-vercel-access-token>
+TURBO_TEAM=<your-team-slug>
+```
+
+Get your token from Vercel dashboard → Settings → Tokens.
+
+**GitHub Actions example:**
+
+```yaml
+- name: Build
+ run: npx turbo build
+ env:
+ TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
+ TURBO_TEAM: ${{ vars.TURBO_TEAM }}
+```
+
+## Configuration in turbo.json
+
+```json
+{
+ "remoteCache": {
+ "enabled": true,
+ "signature": false
+ }
+}
+```
+
+Options:
+
+- `enabled`: toggle remote cache (default: true when authenticated)
+- `signature`: require artifact signing (default: false)
+
+## Artifact Signing
+
+Verify cache artifacts haven't been tampered with:
+
+```bash
+# Set a secret key (use same key across all environments)
+export TURBO_REMOTE_CACHE_SIGNATURE_KEY="your-secret-key"
+```
+
+Enable in config:
+
+```json
+{
+ "remoteCache": {
+ "signature": true
+ }
+}
+```
+
+Signed artifacts can only be restored if the signature matches.
+
+## Self-Hosted Options
+
+Community implementations for running your own cache server:
+
+- **turbo-remote-cache** (Node.js) - supports S3, GCS, Azure
+- **turborepo-remote-cache** (Go) - lightweight, S3-compatible
+- **ducktape** (Rust) - high-performance option
+
+Configure with environment variables:
+
+```bash
+TURBO_API=https://your-cache-server.com
+TURBO_TOKEN=your-auth-token
+TURBO_TEAM=your-team
+```
+
+## Cache Behavior Control
+
+```bash
+# Control caching for a single run
+turbo build --remote-cache-read-only # read remote cache but don't write to it
+turbo build --no-cache # don't write task results to the cache
+
+# Environment variable alternative
+TURBO_REMOTE_ONLY=true # only use remote, skip local
+```
+
+## Debugging Remote Cache
+
+```bash
+# Verbose output shows cache operations
+turbo build --verbosity=2
+
+# Check if remote cache is configured
+turbo config
+```
+
+Look for:
+
+- "Remote caching enabled" in output
+- Upload/download messages during runs
+- "cache hit, replaying output" with remote cache indicator
diff --git a/skills/turborepo/references/ci/RULE.md b/skills/turborepo/references/ci/RULE.md
new file mode 100644
index 0000000..0e21933
--- /dev/null
+++ b/skills/turborepo/references/ci/RULE.md
@@ -0,0 +1,79 @@
+# CI/CD with Turborepo
+
+General principles for running Turborepo in continuous integration environments.
+
+## Core Principles
+
+### Always Use `turbo run` in CI
+
+**Never use the `turbo <task>` shorthand in CI or scripts.** Always use `turbo run`:
+
+```bash
+# CORRECT - Always use in CI, package.json, scripts
+turbo run build test lint
+
+# WRONG - Shorthand is only for one-off terminal commands
+turbo build test lint
+```
+
+The shorthand `turbo <task>` is only for one-off invocations typed directly into a terminal by humans or agents. Anywhere the command is written into code (CI, package.json, scripts), use `turbo run`.
+
+### Enable Remote Caching
+
+Remote caching dramatically speeds up CI by sharing cached artifacts across runs.
+
+Required environment variables:
+
+```bash
+TURBO_TOKEN=your_vercel_token
+TURBO_TEAM=your_team_slug
+```
+
+### Use --affected for PR Builds
+
+The `--affected` flag only runs tasks for packages changed since the base branch:
+
+```bash
+turbo run build test --affected
+```
+
+This requires Git history to compute what changed.
+
+## Git History Requirements
+
+### Fetch Depth
+
+`--affected` needs access to the merge base. Shallow clones break this.
+
+```yaml
+# GitHub Actions
+- uses: actions/checkout@v4
+ with:
+ fetch-depth: 2 # Minimum for --affected
+ # Use 0 for full history if merge base is far
+```
+
+### Why Shallow Clones Break --affected
+
+Turborepo compares the current HEAD to the merge base with `main`. If that commit isn't fetched, `--affected` falls back to running everything.
+
+For PRs with many commits, consider:
+
+```yaml
+fetch-depth: 0 # Full history
+```
+
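+You can see the requirement directly with `git merge-base`, which finds the commit `--affected` diffs against (the throwaway repo below is purely illustrative):
+
+```bash
+# Build a tiny repo with a feature branch one commit ahead of main
+git init -q -b main demo && cd demo
+git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m base
+git checkout -q -b feature
+git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m change
+
+# The commit --affected compares against; errors if it was never fetched
+git merge-base HEAD main
+```
+
+In a shallow CI clone, that merge-base commit may simply not exist locally, which is why deepening the fetch fixes `--affected`.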
+## Environment Variables Reference
+
+| Variable | Purpose |
+| ------------------- | ------------------------------------ |
+| `TURBO_TOKEN` | Vercel access token for remote cache |
+| `TURBO_TEAM` | Your Vercel team slug |
+| `TURBO_REMOTE_ONLY` | Skip local cache, use remote only |
+| `TURBO_LOG_ORDER` | Set to `grouped` for cleaner CI logs |
+
+## See Also
+
+- [github-actions.md](./github-actions.md) - GitHub Actions setup
+- [vercel.md](./vercel.md) - Vercel deployment
+- [patterns.md](./patterns.md) - CI optimization patterns
diff --git a/skills/turborepo/references/ci/github-actions.md b/skills/turborepo/references/ci/github-actions.md
new file mode 100644
index 0000000..7e5d4cc
--- /dev/null
+++ b/skills/turborepo/references/ci/github-actions.md
@@ -0,0 +1,162 @@
+# GitHub Actions
+
+Complete setup guide for Turborepo with GitHub Actions.
+
+## Basic Workflow Structure
+
+```yaml
+name: CI
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 2
+
+ - uses: actions/setup-node@v4
+ with:
+ node-version: 20
+
+ - name: Install dependencies
+ run: npm ci
+
+ - name: Build and Test
+ run: turbo run build test lint
+```
+
+## Package Manager Setup
+
+### pnpm
+
+```yaml
+- uses: pnpm/action-setup@v3
+ with:
+ version: 9
+
+- uses: actions/setup-node@v4
+ with:
+ node-version: 20
+ cache: 'pnpm'
+
+- run: pnpm install --frozen-lockfile
+```
+
+### Yarn
+
+```yaml
+- uses: actions/setup-node@v4
+ with:
+ node-version: 20
+ cache: 'yarn'
+
+- run: yarn install --frozen-lockfile
+```
+
+### Bun
+
+```yaml
+- uses: oven-sh/setup-bun@v1
+ with:
+ bun-version: latest
+
+- run: bun install --frozen-lockfile
+```
+
+## Remote Cache Setup
+
+### 1. Create Vercel Access Token
+
+1. Go to [Vercel Dashboard](https://vercel.com/account/tokens)
+2. Create a new token with appropriate scope
+3. Copy the token value
+
+### 2. Add Secrets and Variables
+
+In your GitHub repository settings:
+
+**Secrets** (Settings > Secrets and variables > Actions > Secrets):
+
+- `TURBO_TOKEN`: Your Vercel access token
+
+**Variables** (Settings > Secrets and variables > Actions > Variables):
+
+- `TURBO_TEAM`: Your Vercel team slug
+
+### 3. Add to Workflow
+
+```yaml
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ env:
+ TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
+ TURBO_TEAM: ${{ vars.TURBO_TEAM }}
+```
+
+## Alternative: actions/cache
+
+If you can't use remote cache, cache Turborepo's local cache directory:
+
+```yaml
+- uses: actions/cache@v4
+ with:
+ path: .turbo
+ key: turbo-${{ runner.os }}-${{ hashFiles('**/turbo.json', '**/package-lock.json') }}
+ restore-keys: |
+ turbo-${{ runner.os }}-
+```
+
+Note: This is less effective than remote cache since it's per-branch.
+
+## Complete Example
+
+```yaml
+name: CI
+
+on:
+ push:
+ branches: [main]
+ pull_request:
+ branches: [main]
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ env:
+ TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
+ TURBO_TEAM: ${{ vars.TURBO_TEAM }}
+
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 2
+
+ - uses: pnpm/action-setup@v3
+ with:
+ version: 9
+
+ - uses: actions/setup-node@v4
+ with:
+ node-version: 20
+ cache: 'pnpm'
+
+ - name: Install dependencies
+ run: pnpm install --frozen-lockfile
+
+ - name: Build
+ run: turbo run build --affected
+
+ - name: Test
+ run: turbo run test --affected
+
+ - name: Lint
+ run: turbo run lint --affected
+```
diff --git a/skills/turborepo/references/ci/patterns.md b/skills/turborepo/references/ci/patterns.md
new file mode 100644
index 0000000..447509a
--- /dev/null
+++ b/skills/turborepo/references/ci/patterns.md
@@ -0,0 +1,145 @@
+# CI Optimization Patterns
+
+Strategies for efficient CI/CD with Turborepo.
+
+## PR vs Main Branch Builds
+
+### PR Builds: Only Affected
+
+Test only what changed in the PR:
+
+```yaml
+- name: Test (PR)
+ if: github.event_name == 'pull_request'
+ run: turbo run build test --affected
+```
+
+### Main Branch: Full Build
+
+Ensure complete validation on merge:
+
+```yaml
+- name: Test (Main)
+ if: github.ref == 'refs/heads/main'
+ run: turbo run build test
+```
+
+## Custom Git Ranges with --filter
+
+For advanced scenarios, use `--filter` with git refs:
+
+```bash
+# Changes since specific commit
+turbo run test --filter="...[abc123]"
+
+# Changes between refs
+turbo run test --filter="...[main...HEAD]"
+
+# Changes in last 3 commits
+turbo run test --filter="...[HEAD~3]"
+```
+
+## Caching Strategies
+
+### Remote Cache (Recommended)
+
+Best performance - shared across all CI runs and developers:
+
+```yaml
+env:
+ TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
+ TURBO_TEAM: ${{ vars.TURBO_TEAM }}
+```
+
+### actions/cache Fallback
+
+When remote cache isn't available:
+
+```yaml
+- uses: actions/cache@v4
+  with:
+    path: .turbo
+    key: turbo-${{ runner.os }}-${{ github.ref }}-${{ github.sha }}
+    restore-keys: |
+      turbo-${{ runner.os }}-${{ github.ref }}-
+      turbo-${{ runner.os }}-
+```
+
+Limitations:
+
+- Cache is branch-scoped
+- PRs restore from base branch cache
+- Less efficient than remote cache
+
+## Matrix Builds
+
+Test across Node versions:
+
+```yaml
+strategy:
+  matrix:
+    node: [18, 20, 22]
+
+steps:
+  - uses: actions/setup-node@v4
+    with:
+      node-version: ${{ matrix.node }}
+
+  - run: turbo run test
+```
+
+## Parallelizing Across Jobs
+
+Split tasks into separate jobs:
+
+```yaml
+jobs:
+  lint:
+    runs-on: ubuntu-latest
+    steps:
+      - run: turbo run lint --affected
+
+  test:
+    runs-on: ubuntu-latest
+    steps:
+      - run: turbo run test --affected
+
+  build:
+    runs-on: ubuntu-latest
+    needs: [lint, test]
+    steps:
+      - run: turbo run build
+```
+
+### Cache Considerations
+
+When parallelizing:
+
+- Each job has separate cache writes
+- Remote cache handles this automatically
+- With actions/cache, use unique keys per job to avoid conflicts
+
+```yaml
+- uses: actions/cache@v4
+  with:
+    path: .turbo
+    key: turbo-${{ runner.os }}-${{ github.job }}-${{ github.sha }}
+```
+
+## Conditional Tasks
+
+Skip expensive tasks on draft PRs:
+
+```yaml
+- name: E2E Tests
+  if: github.event.pull_request.draft == false
+  run: turbo run test:e2e --affected
+```
+
+Or require label for full test:
+
+```yaml
+- name: Full Test Suite
+  if: contains(github.event.pull_request.labels.*.name, 'full-test')
+  run: turbo run test
+```
diff --git a/skills/turborepo/references/ci/vercel.md b/skills/turborepo/references/ci/vercel.md
new file mode 100644
index 0000000..f21d41a
--- /dev/null
+++ b/skills/turborepo/references/ci/vercel.md
@@ -0,0 +1,103 @@
+# Vercel Deployment
+
+Turborepo integrates seamlessly with Vercel for monorepo deployments.
+
+## Remote Cache
+
+Remote caching is **automatically enabled** when deploying to Vercel. No configuration needed - Vercel detects Turborepo and enables caching.
+
+This means:
+
+- No `TURBO_TOKEN` or `TURBO_TEAM` setup required on Vercel
+- Cache is shared across all deployments
+- Preview and production builds benefit from cache
+
+## turbo-ignore
+
+Skip unnecessary builds when a package hasn't changed using `turbo-ignore`.
+
+### Installation
+
+```bash
+npx turbo-ignore
+```
+
+Or add it as a dev dependency in your project:
+
+```bash
+pnpm add -D turbo-ignore
+```
+
+### Setup in Vercel
+
+1. Go to your project in Vercel Dashboard
+2. Navigate to Settings > Git > Ignored Build Step
+3. Select "Custom" and enter:
+
+```bash
+npx turbo-ignore
+```
+
+### How It Works
+
+`turbo-ignore` checks if the current package (or its dependencies) changed since the last successful deployment:
+
+1. Compares current commit to last deployed commit
+2. Uses Turborepo's dependency graph
+3. Returns exit code 0 (skip) if no changes
+4. Returns exit code 1 (build) if changes detected
+
+### Options
+
+```bash
+# Check specific package
+npx turbo-ignore web
+
+# Use specific comparison ref
+npx turbo-ignore --fallback=HEAD~1
+
+# Verbose output
+npx turbo-ignore --verbose
+```
+
+## Environment Variables
+
+Set environment variables in Vercel Dashboard:
+
+1. Go to Project Settings > Environment Variables
+2. Add variables for each environment (Production, Preview, Development)
+
+Common variables:
+
+- `DATABASE_URL`
+- `API_KEY`
+- Package-specific config
+
+## Monorepo Root Directory
+
+For monorepos, set the root directory in Vercel:
+
+1. Project Settings > General > Root Directory
+2. Set to the package path (e.g., `apps/web`)
+
+Vercel automatically:
+
+- Installs dependencies from monorepo root
+- Runs build from the package directory
+- Detects framework settings
+
+## Build Command
+
+Vercel auto-detects `turbo run build` when `turbo.json` exists at root.
+
+Override if needed:
+
+```bash
+turbo run build --filter=web
+```
+
+Or for production-only optimizations:
+
+```bash
+turbo run build --filter=web --env-mode=strict
+```
diff --git a/skills/turborepo/references/cli/RULE.md b/skills/turborepo/references/cli/RULE.md
new file mode 100644
index 0000000..63f6f34
--- /dev/null
+++ b/skills/turborepo/references/cli/RULE.md
@@ -0,0 +1,100 @@
+# turbo run
+
+The primary command for executing tasks across your monorepo.
+
+## Basic Usage
+
+```bash
+# Full form (use in CI, package.json, scripts)
+turbo run
+
+# Shorthand (only for one-off terminal invocations)
+turbo
+```
+
+## When to Use `turbo run` vs `turbo`
+
+**Always use `turbo run` when the command is written into code:**
+
+- `package.json` scripts
+- CI/CD workflows (GitHub Actions, etc.)
+- Shell scripts
+- Documentation
+- Any static/committed configuration
+
+**Only use `turbo` (shorthand) for:**
+
+- One-off commands typed directly in terminal
+- Ad-hoc invocations by humans or agents
+
+```json
+// package.json - ALWAYS use "turbo run"
+{
+  "scripts": {
+    "build": "turbo run build",
+    "dev": "turbo run dev",
+    "lint": "turbo run lint",
+    "test": "turbo run test"
+  }
+}
+```
+
+```yaml
+# CI workflow - ALWAYS use "turbo run"
+- run: turbo run build --affected
+- run: turbo run test --affected
+```
+
+```bash
+# Terminal one-off - shorthand OK
+turbo build --filter=web
+```
+
+## Running Tasks
+
+Tasks must be defined in `turbo.json` before running.
+
+```bash
+# Single task
+turbo build
+
+# Multiple tasks
+turbo run build lint test
+
+# See available tasks (run without arguments)
+turbo run
+```
+
+## Passing Arguments to Scripts
+
+Use `--` to pass arguments through to the underlying package scripts:
+
+```bash
+turbo run build -- --sourcemap
+turbo test -- --watch
+turbo lint -- --fix
+```
+
+Everything after `--` goes directly to the task's script.
+
+## Package Selection
+
+By default, turbo runs tasks in all packages. Use `--filter` to narrow scope:
+
+```bash
+turbo build --filter=web
+turbo test --filter=./apps/*
+```
+
+See `filtering/` for complete filter syntax.
+
+## Quick Reference
+
+| Goal | Command |
+| ------------------- | -------------------------- |
+| Build everything | `turbo build` |
+| Build one package | `turbo build --filter=web` |
+| Multiple tasks | `turbo build lint test` |
+| Pass args to script | `turbo build -- --arg` |
+| Preview run | `turbo build --dry` |
+| Force rebuild | `turbo build --force` |
diff --git a/skills/turborepo/references/cli/commands.md b/skills/turborepo/references/cli/commands.md
new file mode 100644
index 0000000..c1eb6b2
--- /dev/null
+++ b/skills/turborepo/references/cli/commands.md
@@ -0,0 +1,297 @@
+# turbo run Flags Reference
+
+Full docs: https://turborepo.dev/docs/reference/run
+
+## Package Selection
+
+### `--filter` / `-F`
+
+Select specific packages to run tasks in.
+
+```bash
+turbo build --filter=web
+turbo build -F=@repo/ui -F=@repo/utils
+turbo test --filter=./apps/*
+```
+
+See `filtering/` for complete syntax (globs, dependencies, git ranges).
+
+### Task Identifier Syntax (v2.2.4+)
+
+Run specific package tasks directly:
+
+```bash
+turbo run web#build # Build web package
+turbo run web#build docs#lint # Multiple specific tasks
+```
+
+### `--affected`
+
+Run only in packages changed since the base branch.
+
+```bash
+turbo build --affected
+turbo test --affected --filter=./apps/* # combine with filter
+```
+
+**How it works:**
+
+- Default: compares `main...HEAD`
+- In GitHub Actions: auto-detects `GITHUB_BASE_REF`
+- Override base: `TURBO_SCM_BASE=development turbo build --affected`
+- Override head: `TURBO_SCM_HEAD=your-branch turbo build --affected`
+
+**Requires git history** - shallow clones may fall back to running all tasks.
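+
+In GitHub Actions, the default checkout depth of 1 leaves nothing to diff against; a checkout sketch for `--affected` (depth values are assumptions - tune for your history):
+
+```yaml
+- uses: actions/checkout@v4
+  with:
+    fetch-depth: 0 # full history; a fixed depth like 2 may suffice for simple PR diffs
+```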
+
+## Execution Control
+
+### `--dry` / `--dry=json`
+
+Preview what would run without executing.
+
+```bash
+turbo build --dry # human-readable
+turbo build --dry=json # machine-readable
+```
+
+### `--force`
+
+Ignore all cached artifacts, re-run everything.
+
+```bash
+turbo build --force
+```
+
+### `--concurrency`
+
+Limit parallel task execution.
+
+```bash
+turbo build --concurrency=4 # max 4 tasks
+turbo build --concurrency=50% # 50% of CPU cores
+```
+
+### `--continue`
+
+Keep running other tasks when one fails.
+
+```bash
+turbo build test --continue
+```
+
+### `--only`
+
+Run only the specified task, skip its dependencies.
+
+```bash
+turbo build --only # skip running dependsOn tasks
+```
+
+### `--parallel` (Discouraged)
+
+Ignores task graph dependencies, runs all tasks simultaneously. **Avoid using this flag**—if tasks need to run in parallel, configure `dependsOn` correctly instead. Using `--parallel` bypasses Turborepo's dependency graph, which can cause race conditions and incorrect builds.
+
+## Cache Control
+
+### `--cache`
+
+Fine-grained cache behavior control.
+
+```bash
+# Default: read/write both local and remote
+turbo build --cache=local:rw,remote:rw
+
+# Read-only local, no remote
+turbo build --cache=local:r,remote:
+
+# Disable local, read-only remote
+turbo build --cache=local:,remote:r
+
+# Disable all caching
+turbo build --cache=local:,remote:
+```
+
+## Output & Debugging
+
+### `--graph`
+
+Generate task graph visualization.
+
+```bash
+turbo build --graph # opens in browser
+turbo build --graph=graph.svg # SVG file
+turbo build --graph=graph.png # PNG file
+turbo build --graph=graph.json # JSON data
+turbo build --graph=graph.mermaid # Mermaid diagram
+```
+
+### `--summarize`
+
+Generate JSON run summary for debugging.
+
+```bash
+turbo build --summarize
+# creates .turbo/runs/.json
+```
+
+### `--output-logs`
+
+Control log output verbosity.
+
+```bash
+turbo build --output-logs=full # all logs (default)
+turbo build --output-logs=new-only # only cache misses
+turbo build --output-logs=errors-only # only failures
+turbo build --output-logs=none # silent
+```
+
+### `--profile`
+
+Generate Chrome tracing profile for performance analysis.
+
+```bash
+turbo build --profile=profile.json
+# open chrome://tracing and load the file
+```
+
+### `--verbosity` / `-v`
+
+Control turbo's own log level.
+
+```bash
+turbo build -v # verbose
+turbo build -vv # more verbose
+turbo build -vvv # maximum verbosity
+```
+
+## Environment
+
+### `--env-mode`
+
+Control environment variable handling.
+
+```bash
+turbo build --env-mode=strict # only declared env vars (default)
+turbo build --env-mode=loose # include all env vars in hash
+```
+
+## UI
+
+### `--ui`
+
+Select output interface.
+
+```bash
+turbo build --ui=tui # interactive terminal UI (default in TTY)
+turbo build --ui=stream # streaming logs (default in CI)
+```
+
+---
+
+# turbo-ignore
+
+Full docs: https://turborepo.dev/docs/reference/turbo-ignore
+
+Skip CI work when nothing relevant changed. Useful for skipping container setup.
+
+## Basic Usage
+
+```bash
+# Check if build is needed for current package (uses Automatic Package Scoping)
+npx turbo-ignore
+
+# Check specific package
+npx turbo-ignore web
+
+# Check specific task
+npx turbo-ignore --task=test
+```
+
+## Exit Codes
+
+- `0`: No changes detected - skip CI work
+- `1`: Changes detected - proceed with CI
+
+## CI Integration Example
+
+```yaml
+# GitHub Actions
+- name: Check for changes
+  id: turbo-ignore
+  run: npx turbo-ignore web
+  continue-on-error: true
+
+- name: Build
+  if: steps.turbo-ignore.outcome == 'failure' # changes detected
+  run: pnpm build
+```
+
+## Comparison Depth
+
+Default: compares to parent commit (`HEAD^1`).
+
+```bash
+# Compare to specific commit
+npx turbo-ignore --fallback=abc123
+
+# Compare to branch
+npx turbo-ignore --fallback=main
+```
+
+---
+
+# Other Commands
+
+## turbo boundaries
+
+Check workspace violations (experimental).
+
+```bash
+turbo boundaries
+```
+
+See `references/boundaries/` for configuration.
+
+## turbo watch
+
+Re-run tasks on file changes.
+
+```bash
+turbo watch build test
+```
+
+See `references/watch/` for details.
+
+## turbo prune
+
+Create sparse checkout for Docker.
+
+```bash
+turbo prune web --docker
+```
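+
+With `--docker`, the pruned output lands in `out/json` (package manifests and lockfile only) and `out/full` (pruned source), letting Docker cache the install layer separately. A minimal Dockerfile sketch (base image, pnpm usage, and the `web` target are assumptions):
+
+```dockerfile
+FROM node:20-alpine
+WORKDIR /app
+
+# Manifests + lockfile only - this layer stays cached until dependencies change
+COPY out/json/ .
+RUN corepack enable && pnpm install --frozen-lockfile
+
+# Pruned source code
+COPY out/full/ .
+RUN pnpm turbo run build --filter=web
+```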
+
+## turbo link / unlink
+
+Connect/disconnect Remote Cache.
+
+```bash
+turbo link # connect to Vercel Remote Cache
+turbo unlink # disconnect
+```
+
+## turbo login / logout
+
+Authenticate with Remote Cache provider.
+
+```bash
+turbo login # authenticate
+turbo logout # log out
+```
+
+## turbo generate
+
+Scaffold new packages.
+
+```bash
+turbo generate
+```
diff --git a/skills/turborepo/references/configuration/RULE.md b/skills/turborepo/references/configuration/RULE.md
new file mode 100644
index 0000000..1edb07d
--- /dev/null
+++ b/skills/turborepo/references/configuration/RULE.md
@@ -0,0 +1,211 @@
+# turbo.json Configuration Overview
+
+Configuration reference for Turborepo. Full docs: https://turborepo.dev/docs/reference/configuration
+
+## File Location
+
+Root `turbo.json` lives at repo root, sibling to root `package.json`:
+
+```
+my-monorepo/
+├── turbo.json        # Root configuration
+├── package.json
+└── packages/
+    └── web/
+        ├── turbo.json   # Package Configuration (optional)
+        └── package.json
+```
+
+## Always Prefer Package Tasks Over Root Tasks
+
+**Always use package tasks. Only use Root Tasks if you cannot succeed with package tasks.**
+
+Package tasks enable parallelization, individual caching, and filtering. Define scripts in each package's `package.json`:
+
+```json
+// packages/web/package.json
+{
+  "scripts": {
+    "build": "next build",
+    "lint": "eslint .",
+    "test": "vitest",
+    "typecheck": "tsc --noEmit"
+  }
+}
+
+// packages/api/package.json
+{
+  "scripts": {
+    "build": "tsc",
+    "lint": "eslint .",
+    "test": "vitest",
+    "typecheck": "tsc --noEmit"
+  }
+}
+```
+
+```json
+// Root package.json - delegates to turbo
+{
+  "scripts": {
+    "build": "turbo run build",
+    "lint": "turbo run lint",
+    "test": "turbo run test",
+    "typecheck": "turbo run typecheck"
+  }
+}
+```
+
+When you run `turbo run lint`, Turborepo finds all packages with a `lint` script and runs them **in parallel**.
+
+**Root Tasks are a fallback**, not the default. Only use them for tasks that truly cannot run per-package (e.g., repo-level CI scripts, workspace-wide config generation).
+
+```json
+// AVOID: Task logic in root defeats parallelization
+{
+  "scripts": {
+    "lint": "eslint apps/web && eslint apps/api && eslint packages/ui"
+  }
+}
+```
+
+## Basic Structure
+
+```json
+{
+  "$schema": "https://turborepo.dev/schema.v2.json",
+  "globalEnv": ["CI"],
+  "globalDependencies": ["tsconfig.json"],
+  "tasks": {
+    "build": {
+      "dependsOn": ["^build"],
+      "outputs": ["dist/**"]
+    },
+    "dev": {
+      "cache": false,
+      "persistent": true
+    }
+  }
+}
+```
+
+The `$schema` key enables IDE autocompletion and validation.
+
+## Configuration Sections
+
+**Global options** - Settings affecting all tasks:
+
+- `globalEnv`, `globalDependencies`, `globalPassThroughEnv`
+- `cacheDir`, `daemon`, `envMode`, `ui`, `remoteCache`
+
+**Task definitions** - Per-task settings in `tasks` object:
+
+- `dependsOn`, `outputs`, `inputs`, `env`
+- `cache`, `persistent`, `interactive`, `outputLogs`
+
+## Package Configurations
+
+Use `turbo.json` in individual packages to override root settings:
+
+```json
+// packages/web/turbo.json
+{
+  "extends": ["//"],
+  "tasks": {
+    "build": {
+      "outputs": [".next/**", "!.next/cache/**"]
+    }
+  }
+}
+```
+
+The `"extends": ["//"]` field is required - it references the root configuration.
+
+**When to use Package Configurations:**
+
+- Framework-specific outputs (Next.js, Vite, etc.)
+- Package-specific env vars
+- Different caching rules for specific packages
+- Keeping framework config close to the framework code
+
+### Extending from Other Packages
+
+You can extend from config packages instead of just root:
+
+```json
+// packages/web/turbo.json
+{
+  "extends": ["//", "@repo/turbo-config"]
+}
+```
+
+### Adding to Inherited Arrays with `$TURBO_EXTENDS$`
+
+By default, array fields in Package Configurations **replace** root values. Use `$TURBO_EXTENDS$` to **append** instead:
+
+```json
+// Root turbo.json
+{
+  "tasks": {
+    "build": {
+      "outputs": ["dist/**"]
+    }
+  }
+}
+```
+
+```json
+// packages/web/turbo.json
+{
+  "extends": ["//"],
+  "tasks": {
+    "build": {
+      // Inherits "dist/**" from root, adds ".next/**"
+      "outputs": ["$TURBO_EXTENDS$", ".next/**", "!.next/cache/**"]
+    }
+  }
+}
+```
+
+Without `$TURBO_EXTENDS$`, outputs would only be `[".next/**", "!.next/cache/**"]`.
+
+**Works with:**
+
+- `dependsOn`
+- `env`
+- `inputs`
+- `outputs`
+- `passThroughEnv`
+- `with`
+
+### Excluding Tasks from Packages
+
+Use `extends: false` to exclude a task from a package:
+
+```json
+// packages/ui/turbo.json
+{
+  "extends": ["//"],
+  "tasks": {
+    "e2e": {
+      "extends": false // UI package doesn't have e2e tests
+    }
+  }
+}
+```
+
+## `turbo.jsonc` for Comments
+
+Use `turbo.jsonc` extension to add comments with IDE support:
+
+```jsonc
+// turbo.jsonc
+{
+  "tasks": {
+    "build": {
+      // Next.js outputs
+      "outputs": [".next/**", "!.next/cache/**"]
+    }
+  }
+}
+```
diff --git a/skills/turborepo/references/configuration/global-options.md b/skills/turborepo/references/configuration/global-options.md
new file mode 100644
index 0000000..b2d7a8d
--- /dev/null
+++ b/skills/turborepo/references/configuration/global-options.md
@@ -0,0 +1,191 @@
+# Global Options Reference
+
+Options that affect all tasks. Full docs: https://turborepo.dev/docs/reference/configuration
+
+## globalEnv
+
+Environment variables affecting all task hashes.
+
+```json
+{
+  "globalEnv": ["CI", "NODE_ENV", "VERCEL_*"]
+}
+```
+
+Use for variables that should invalidate all caches when changed.
+
+## globalDependencies
+
+Files that affect all task hashes.
+
+```json
+{
+  "globalDependencies": ["tsconfig.json", ".env", "pnpm-lock.yaml"]
+}
+```
+
+The lockfile is included by default; add shared configs here.
+
+## globalPassThroughEnv
+
+Variables available to tasks but not included in hash.
+
+```json
+{
+  "globalPassThroughEnv": ["AWS_SECRET_KEY", "GITHUB_TOKEN"]
+}
+```
+
+Use for credentials that shouldn't affect cache keys.
+
+## cacheDir
+
+Custom cache location. Default: `node_modules/.cache/turbo`.
+
+```json
+{
+  "cacheDir": ".turbo/cache"
+}
+```
+
+## daemon
+
+Background process for faster subsequent runs. Default: `true`.
+
+```json
+{
+  "daemon": false
+}
+```
+
+Disable in CI or when debugging.
+
+## envMode
+
+How unspecified env vars are handled. Default: `"strict"`.
+
+```json
+{
+  "envMode": "strict" // Only specified vars available
+  // or
+  "envMode": "loose" // All vars pass through
+}
+```
+
+Strict mode catches missing env declarations.
+
+## ui
+
+Terminal UI mode. Default: `"stream"`.
+
+```json
+{
+  "ui": "tui" // Interactive terminal UI
+  // or
+  "ui": "stream" // Traditional streaming logs
+}
+```
+
+TUI provides better UX for parallel tasks.
+
+## remoteCache
+
+Configure remote caching.
+
+```json
+{
+  "remoteCache": {
+    "enabled": true,
+    "signature": true,
+    "timeout": 30,
+    "uploadTimeout": 60
+  }
+}
+```
+
+| Option | Default | Description |
+| --------------- | ---------------------- | ------------------------------------------------------ |
+| `enabled` | `true` | Enable/disable remote caching |
+| `signature` | `false` | Sign artifacts with `TURBO_REMOTE_CACHE_SIGNATURE_KEY` |
+| `preflight` | `false` | Send OPTIONS request before cache requests |
+| `timeout` | `30` | Timeout in seconds for cache operations |
+| `uploadTimeout` | `60` | Timeout in seconds for uploads |
+| `apiUrl` | `"https://vercel.com"` | Remote cache API endpoint |
+| `loginUrl` | `"https://vercel.com"` | Login endpoint |
+| `teamId` | - | Team ID (must start with `team_`) |
+| `teamSlug` | - | Team slug for querystring |
+
+See https://turborepo.dev/docs/core-concepts/remote-caching for setup.
+
+## concurrency
+
+Default: `"10"`
+
+Limit parallel task execution.
+
+```json
+{
+  "concurrency": "4" // Max 4 tasks at once
+  // or
+  "concurrency": "50%" // 50% of available CPUs
+}
+```
+
+## futureFlags
+
+Enable experimental features that will become default in future versions.
+
+```json
+{
+  "futureFlags": {
+    "errorsOnlyShowHash": true
+  }
+}
+```
+
+### `errorsOnlyShowHash`
+
+When using `outputLogs: "errors-only"`, show task hashes on start/completion:
+
+- Cache miss: `cache miss, executing (only logging errors)`
+- Cache hit: `cache hit, replaying logs (no errors)`
+
+## noUpdateNotifier
+
+Disable update notifications when new turbo versions are available.
+
+```json
+{
+  "noUpdateNotifier": true
+}
+```
+
+## dangerouslyDisablePackageManagerCheck
+
+Bypass the `packageManager` field requirement. Use for incremental migration.
+
+```json
+{
+  "dangerouslyDisablePackageManagerCheck": true
+}
+```
+
+**Warning**: Unstable lockfiles can cause unpredictable behavior.
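+
+The check it bypasses is the `packageManager` field in the root `package.json`; declaring that field is the preferred fix (version shown is illustrative):
+
+```json
+{
+  "packageManager": "pnpm@9.0.0"
+}
+```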
+
+## Git Worktree Cache Sharing
+
+When working in Git worktrees, Turborepo automatically shares local cache between the main worktree and linked worktrees.
+
+**How it works:**
+
+- Detects worktree configuration
+- Redirects cache to main worktree's `.turbo/cache`
+- Works alongside Remote Cache
+
+**Benefits:**
+
+- Cache hits across branches
+- Reduced disk usage
+- Faster branch switching
+
+**Disabled by**: Setting explicit `cacheDir` in turbo.json.
diff --git a/skills/turborepo/references/configuration/gotchas.md b/skills/turborepo/references/configuration/gotchas.md
new file mode 100644
index 0000000..225bd39
--- /dev/null
+++ b/skills/turborepo/references/configuration/gotchas.md
@@ -0,0 +1,348 @@
+# Configuration Gotchas
+
+Common mistakes and how to fix them.
+
+## #1 Root Scripts Not Using `turbo run`
+
+Root `package.json` scripts for turbo tasks MUST use `turbo run`, not direct commands.
+
+```json
+// WRONG - bypasses turbo, no parallelization or caching
+{
+  "scripts": {
+    "build": "bun build",
+    "dev": "bun dev"
+  }
+}
+
+// CORRECT - delegates to turbo
+{
+  "scripts": {
+    "build": "turbo run build",
+    "dev": "turbo run dev"
+  }
+}
+```
+
+**Why this matters:** Running `bun build` or `npm run build` at root bypasses Turborepo entirely - no parallelization, no caching, no dependency graph awareness.
+
+## #2 Using `&&` to Chain Turbo Tasks
+
+Don't use `&&` to chain tasks that turbo should orchestrate.
+
+```json
+// WRONG - changeset:publish chains turbo task with non-turbo command
+{
+  "scripts": {
+    "changeset:publish": "bun build && changeset publish"
+  }
+}
+
+// CORRECT - use turbo run, let turbo handle dependencies
+{
+  "scripts": {
+    "changeset:publish": "turbo run build && changeset publish"
+  }
+}
+```
+
+If the second command (`changeset publish`) depends on build outputs, the turbo task should run through turbo to get caching and parallelization benefits.
+
+## #3 Overly Broad globalDependencies
+
+`globalDependencies` affects hash for ALL tasks in ALL packages. Be specific.
+
+```json
+// WRONG - affects all hashes
+{
+  "globalDependencies": ["**/.env.*local"]
+}
+
+// CORRECT - move to specific tasks that need it
+{
+  "globalDependencies": [".env"],
+  "tasks": {
+    "build": {
+      "inputs": ["$TURBO_DEFAULT$", ".env*"],
+      "outputs": ["dist/**"]
+    }
+  }
+}
+```
+
+**Why this matters:** `**/.env.*local` matches .env files in ALL packages, causing unnecessary cache invalidation. Instead:
+
+- Use `globalDependencies` only for truly global files (root `.env`)
+- Use task-level `inputs` for package-specific .env files with `$TURBO_DEFAULT$` to preserve default behavior
+
+## #4 Repetitive Task Configuration
+
+Look for repeated configuration across tasks that can be collapsed.
+
+```json
+// WRONG - repetitive env and inputs across tasks
+{
+  "tasks": {
+    "build": {
+      "env": ["API_URL", "DATABASE_URL"],
+      "inputs": ["$TURBO_DEFAULT$", ".env*"]
+    },
+    "test": {
+      "env": ["API_URL", "DATABASE_URL"],
+      "inputs": ["$TURBO_DEFAULT$", ".env*"]
+    }
+  }
+}
+
+// BETTER - use globalEnv and globalDependencies
+{
+  "globalEnv": ["API_URL", "DATABASE_URL"],
+  "globalDependencies": [".env*"],
+  "tasks": {
+    "build": {},
+    "test": {}
+  }
+}
+```
+
+**When to use global vs task-level:**
+
+- `globalEnv` / `globalDependencies` - affects ALL tasks, use for truly shared config
+- Task-level `env` / `inputs` - use when only specific tasks need it
+
+## #5 Using `../` to Traverse Out of Package in `inputs`
+
+Don't use relative paths like `../` to reference files outside the package. Use `$TURBO_ROOT$` instead.
+
+```json
+// WRONG - traversing out of package
+{
+  "tasks": {
+    "build": {
+      "inputs": ["$TURBO_DEFAULT$", "../shared-config.json"]
+    }
+  }
+}
+
+// CORRECT - use $TURBO_ROOT$ for repo root
+{
+  "tasks": {
+    "build": {
+      "inputs": ["$TURBO_DEFAULT$", "$TURBO_ROOT$/shared-config.json"]
+    }
+  }
+}
+```
+
+## #6 MOST COMMON MISTAKE: Creating Root Tasks
+
+**DO NOT create Root Tasks. ALWAYS create package tasks.**
+
+When you need to create a task (build, lint, test, typecheck, etc.):
+
+1. Add the script to **each relevant package's** `package.json`
+2. Register the task in root `turbo.json`
+3. Root `package.json` only contains `turbo run <task>`
+
+```json
+// WRONG - DO NOT DO THIS
+// Root package.json with task logic
+{
+  "scripts": {
+    "build": "cd apps/web && next build && cd ../api && tsc",
+    "lint": "eslint apps/ packages/",
+    "test": "vitest"
+  }
+}
+
+// CORRECT - DO THIS
+// apps/web/package.json
+{ "scripts": { "build": "next build", "lint": "eslint .", "test": "vitest" } }
+
+// apps/api/package.json
+{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
+
+// packages/ui/package.json
+{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
+
+// Root package.json - ONLY delegates
+{ "scripts": { "build": "turbo run build", "lint": "turbo run lint", "test": "turbo run test" } }
+
+// turbo.json - register tasks
+{
+  "tasks": {
+    "build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
+    "lint": {},
+    "test": {}
+  }
+}
+```
+
+**Why this matters:**
+
+- Package tasks run in **parallel** across all packages
+- Each package's output is cached **individually**
+- You can **filter** to specific packages: `turbo run test --filter=web`
+
+Root Tasks (`//#taskname`) defeat all these benefits. Only use them for tasks that truly cannot exist in any package (extremely rare).
+
+## #7 Tasks That Need Parallel Execution + Cache Invalidation
+
+Some tasks can run in parallel (don't need built output from dependencies) but must still invalidate cache when dependency source code changes. Using `dependsOn: ["^taskname"]` forces sequential execution. Using no dependencies breaks cache invalidation.
+
+**Use Transit Nodes for these tasks:**
+
+```json
+// WRONG - forces sequential execution (SLOW)
+"my-task": {
+  "dependsOn": ["^my-task"]
+}
+
+// ALSO WRONG - no dependency awareness (INCORRECT CACHING)
+"my-task": {}
+
+// CORRECT - use Transit Nodes for parallel + correct caching
+{
+  "tasks": {
+    "transit": { "dependsOn": ["^transit"] },
+    "my-task": { "dependsOn": ["transit"] }
+  }
+}
+```
+
+**Why Transit Nodes work:**
+
+- `transit` creates dependency relationships without matching any actual script
+- Tasks that depend on `transit` gain dependency awareness
+- Since `transit` completes instantly (no script), tasks run in parallel
+- Cache correctly invalidates when dependency source code changes
+
+**How to identify tasks that need this pattern:** Look for tasks that read source files from dependencies but don't need their build outputs.
+
+## Missing outputs for File-Producing Tasks
+
+**Before flagging missing `outputs`, check what the task actually produces:**
+
+1. Read the package's script (e.g., `"build": "tsc"`, `"test": "vitest"`)
+2. Determine if it writes files to disk or only outputs to stdout
+3. Only flag if the task produces files that should be cached
+
+```json
+// WRONG - build produces files but they're not cached
+"build": {
+  "dependsOn": ["^build"]
+}
+
+// CORRECT - outputs are cached
+"build": {
+  "dependsOn": ["^build"],
+  "outputs": ["dist/**"]
+}
+```
+
+No `outputs` key is fine for stdout-only tasks. For file-producing tasks, missing `outputs` means Turbo has nothing to cache.
+
+## Forgetting ^ in dependsOn
+
+```json
+// WRONG - looks for "build" in SAME package (infinite loop or missing)
+"build": {
+  "dependsOn": ["build"]
+}
+
+// CORRECT - runs dependencies' build first
+"build": {
+  "dependsOn": ["^build"]
+}
+```
+
+The `^` means "in dependency packages", not "in this package".
+
+## Missing persistent on Dev Tasks
+
+```json
+// WRONG - dependent tasks hang waiting for dev to "finish"
+"dev": {
+  "cache": false
+}
+
+// CORRECT
+"dev": {
+  "cache": false,
+  "persistent": true
+}
+```
+
+## Package Config Missing extends
+
+```json
+// WRONG - packages/web/turbo.json
+{
+  "tasks": {
+    "build": { "outputs": [".next/**"] }
+  }
+}
+
+// CORRECT
+{
+  "extends": ["//"],
+  "tasks": {
+    "build": { "outputs": [".next/**"] }
+  }
+}
+```
+
+Without `"extends": ["//"]`, Package Configurations are invalid.
+
+## Root Tasks Need Special Syntax
+
+To run a task defined only in root `package.json`:
+
+```bash
+# WRONG
+turbo run format
+
+# CORRECT
+turbo run //#format
+```
+
+And in dependsOn:
+
+```json
+"build": {
+  "dependsOn": ["//#codegen"] // Root package's codegen
+}
+```
+
+## Overwriting Default Inputs
+
+```json
+// WRONG - only watches test files, ignores source changes
+"test": {
+  "inputs": ["tests/**"]
+}
+
+// CORRECT - extends defaults, adds test files
+"test": {
+  "inputs": ["$TURBO_DEFAULT$", "tests/**"]
+}
+```
+
+Without `$TURBO_DEFAULT$`, you replace all default file watching.
+
+## Caching Tasks with Side Effects
+
+```json
+// WRONG - deploy might be skipped on cache hit
+"deploy": {
+  "dependsOn": ["build"]
+}
+
+// CORRECT
+"deploy": {
+  "dependsOn": ["build"],
+  "cache": false
+}
+```
+
+Always disable cache for deploy, publish, or mutation tasks.
diff --git a/skills/turborepo/references/configuration/tasks.md b/skills/turborepo/references/configuration/tasks.md
new file mode 100644
index 0000000..a529b51
--- /dev/null
+++ b/skills/turborepo/references/configuration/tasks.md
@@ -0,0 +1,285 @@
+# Task Configuration Reference
+
+Full docs: https://turborepo.dev/docs/reference/configuration#tasks
+
+## dependsOn
+
+Controls task execution order.
+
+```json
+{
+  "tasks": {
+    "build": {
+      "dependsOn": [
+        "^build", // Dependencies' build tasks first
+        "codegen", // Same package's codegen task first
+        "shared#build" // Specific package's build task
+      ]
+    }
+  }
+}
+```
+
+| Syntax | Meaning |
+| ---------- | ------------------------------------ |
+| `^task` | Run `task` in all dependencies first |
+| `task` | Run `task` in same package first |
+| `pkg#task` | Run specific package's task first |
+
+The `^` prefix is crucial - without it, you're referencing the same package.
+
+### Transit Nodes for Parallel Tasks
+
+For tasks like `lint` and `check-types` that can run in parallel but need dependency-aware caching:
+
+```json
+{
+  "tasks": {
+    "transit": { "dependsOn": ["^transit"] },
+    "lint": { "dependsOn": ["transit"] },
+    "check-types": { "dependsOn": ["transit"] }
+  }
+}
+```
+
+**DO NOT use `dependsOn: ["^lint"]`** - this forces sequential execution.
+**DO NOT use `dependsOn: []`** - this breaks cache invalidation.
+
+The `transit` task creates dependency relationships without running anything (no matching script), so tasks run in parallel with correct caching.
+
+## outputs
+
+Glob patterns for files to cache. **If omitted, nothing is cached.**
+
+```json
+{
+  "tasks": {
+    "build": {
+      "outputs": ["dist/**", "build/**"]
+    }
+  }
+}
+```
+
+**Framework examples:**
+
+```json
+// Next.js
+"outputs": [".next/**", "!.next/cache/**"]
+
+// Vite
+"outputs": ["dist/**"]
+
+// TypeScript (tsc)
+"outputs": ["dist/**", "*.tsbuildinfo"]
+
+// No file outputs (lint, typecheck)
+"outputs": []
+```
+
+Use `!` prefix to exclude patterns from caching.
+
+## inputs
+
+Files considered when calculating task hash. Defaults to all tracked files in package.
+
+```json
+{
+  "tasks": {
+    "test": {
+      "inputs": ["src/**", "tests/**", "vitest.config.ts"]
+    }
+  }
+}
+```
+
+**Special values:**
+
+| Value | Meaning |
+| --------------------- | --------------------------------------- |
+| `$TURBO_DEFAULT$` | Include default inputs, then add/remove |
+| `$TURBO_ROOT$/` | Reference files from repo root |
+
+```json
+{
+ "tasks": {
+ "build": {
+ "inputs": [
+ "$TURBO_DEFAULT$",
+ "!README.md",
+ "$TURBO_ROOT$/tsconfig.base.json"
+ ]
+ }
+ }
+}
+```
+
+## env
+
+Environment variables to include in task hash.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": [
+ "API_URL",
+ "NEXT_PUBLIC_*", // Wildcard matching
+ "!DEBUG" // Exclude from hash
+ ]
+ }
+ }
+}
+```
+
+Variables listed here affect cache hits - changing the value invalidates cache.
+
+## cache
+
+Enable/disable caching for a task. Default: `true`.
+
+```json
+{
+ "tasks": {
+ "dev": { "cache": false },
+ "deploy": { "cache": false }
+ }
+}
+```
+
+Disable for: dev servers, deploy commands, tasks with side effects.
+
+## persistent
+
+Mark long-running tasks that don't exit. Default: `false`.
+
+```json
+{
+ "tasks": {
+ "dev": {
+ "cache": false,
+ "persistent": true
+ }
+ }
+}
+```
+
+Required for dev servers - without it, dependent tasks wait forever.
+
+## interactive
+
+Allow task to receive stdin input. Default: `false`.
+
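Better Auth implements this check internally; as an illustration only, the ±1-period acceptance window can be modeled with a standalone RFC 6238 sketch using Node's `crypto` module — this is not Better Auth's code, just a minimal model of the behavior:

```typescript
import { createHmac } from "node:crypto";

// HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated.
function hotp(secret: Buffer, counter: number, digits = 6): string {
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(counter));
  const mac = createHmac("sha1", secret).update(buf).digest();
  const offset = mac[mac.length - 1] & 0x0f;
  const code =
    ((mac[offset] & 0x7f) << 24) |
    (mac[offset + 1] << 16) |
    (mac[offset + 2] << 8) |
    mac[offset + 3];
  return (code % 10 ** digits).toString().padStart(digits, "0");
}

// TOTP verification accepting the previous, current, and next period.
function verifyWithWindow(
  secret: Buffer,
  code: string,
  nowSeconds: number,
  period = 30
): boolean {
  const counter = Math.floor(nowSeconds / period);
  return [-1, 0, 1].some((delta) => hotp(secret, counter + delta) === code);
}
```

A code generated in the previous 30-second period still verifies, which is what absorbs small clock drift between the server and the user's phone.
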
+```json
+{
+ "tasks": {
+ "login": {
+ "cache": false,
+ "interactive": true
+ }
+ }
+}
+```
+
+## outputLogs
+
+Control when logs are shown. Options: `full`, `hash-only`, `new-only`, `errors-only`, `none`.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "outputLogs": "new-only" // Only show logs on cache miss
+ }
+ }
+}
+```
+
+## with
+
+Run tasks alongside this task. For long-running tasks that need runtime dependencies.
+
+```json
+{
+ "tasks": {
+ "dev": {
+ "with": ["api#dev"],
+ "persistent": true,
+ "cache": false
+ }
+ }
+}
+```
+
+Unlike `dependsOn`, `with` runs tasks concurrently (not sequentially). Use for dev servers that need other services running.
+
+## interruptible
+
+Allow `turbo watch` to restart the task on changes. Default: `false`.
+
+```json
+{
+ "tasks": {
+ "dev": {
+ "persistent": true,
+ "interruptible": true,
+ "cache": false
+ }
+ }
+}
+```
+
+Use for dev servers that don't automatically detect dependency changes.
+
+## description
+
+Human-readable description of the task.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "description": "Compiles the application for production deployment"
+ }
+ }
+}
+```
+
+For documentation only - doesn't affect execution or caching.
+
+## passThroughEnv
+
+Environment variables available at runtime but NOT included in cache hash.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "passThroughEnv": ["AWS_SECRET_KEY", "GITHUB_TOKEN"]
+ }
+ }
+}
+```
+
+**Warning**: Changes to these vars won't cause cache misses. Use `env` if changes should invalidate cache.
+
+## extends (Package Configuration only)
+
+Control task inheritance in Package Configurations.
+
+```json
+// packages/ui/turbo.json
+{
+ "extends": ["//"],
+ "tasks": {
+ "lint": {
+ "extends": false // Exclude from this package
+ }
+ }
+}
+```
+
+| Value | Behavior |
+| ---------------- | -------------------------------------------------------------- |
+| `true` (default) | Inherit from root turbo.json |
+| `false` | Exclude task from package, or define fresh without inheritance |
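+
+When only one key changes, the rest still inherit from the root task — a sketch with a hypothetical package path:
+
+```json
+// packages/docs/turbo.json
+{
+  "extends": ["//"],
+  "tasks": {
+    "build": {
+      "outputs": ["out/**"] // Only this key is overridden; dependsOn etc. still inherit
+    }
+  }
+}
+```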
diff --git a/skills/turborepo/references/environment/RULE.md b/skills/turborepo/references/environment/RULE.md
new file mode 100644
index 0000000..790b01b
--- /dev/null
+++ b/skills/turborepo/references/environment/RULE.md
@@ -0,0 +1,96 @@
+# Environment Variables in Turborepo
+
+Turborepo provides fine-grained control over which environment variables affect task hashing and runtime availability.
+
+## Configuration Keys
+
+### `env` - Task-Specific Variables
+
+Variables that affect a specific task's hash. When these change, only that task rebuilds.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["DATABASE_URL", "API_KEY"]
+ }
+ }
+}
+```
+
+### `globalEnv` - Variables Affecting All Tasks
+
+Variables that affect EVERY task's hash. When these change, all tasks rebuild.
+
+```json
+{
+ "globalEnv": ["CI", "NODE_ENV"]
+}
+```
+
+### `passThroughEnv` - Runtime-Only Variables (Not Hashed)
+
+Variables available at runtime but NOT included in hash. **Use with caution** - changes won't trigger rebuilds.
+
+```json
+{
+ "tasks": {
+ "deploy": {
+ "passThroughEnv": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]
+ }
+ }
+}
+```
+
+### `globalPassThroughEnv` - Global Runtime Variables
+
+Same as `passThroughEnv` but for all tasks.
+
+```json
+{
+ "globalPassThroughEnv": ["GITHUB_TOKEN"]
+}
+```
+
+## Wildcards and Negation
+
+### Wildcards
+
+Match multiple variables with `*`:
+
+```json
+{
+ "env": ["MY_API_*", "FEATURE_FLAG_*"]
+}
+```
+
+This matches `MY_API_URL`, `MY_API_KEY`, `FEATURE_FLAG_DARK_MODE`, etc.
+
+### Negation
+
+Exclude variables (useful with framework inference):
+
+```json
+{
+ "env": ["!NEXT_PUBLIC_ANALYTICS_ID"]
+}
+```
+
+## Complete Example
+
+```json
+{
+ "$schema": "https://turborepo.dev/schema.v2.json",
+ "globalEnv": ["CI", "NODE_ENV"],
+ "globalPassThroughEnv": ["GITHUB_TOKEN", "NPM_TOKEN"],
+ "tasks": {
+ "build": {
+ "env": ["DATABASE_URL", "API_*"],
+ "passThroughEnv": ["SENTRY_AUTH_TOKEN"]
+ },
+ "test": {
+ "env": ["TEST_DATABASE_URL"]
+ }
+ }
+}
+```
diff --git a/skills/turborepo/references/environment/gotchas.md b/skills/turborepo/references/environment/gotchas.md
new file mode 100644
index 0000000..eff77a4
--- /dev/null
+++ b/skills/turborepo/references/environment/gotchas.md
@@ -0,0 +1,145 @@
+# Environment Variable Gotchas
+
+Common mistakes and how to fix them.
+
+## .env Files Must Be in `inputs`
+
+Turbo does NOT read `.env` files. Your framework (Next.js, Vite, etc.) or `dotenv` loads them. But Turbo needs to know when they change.
+
+**Wrong:**
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["DATABASE_URL"]
+ }
+ }
+}
+```
+
+**Right:**
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["DATABASE_URL"],
+ "inputs": ["$TURBO_DEFAULT$", ".env", ".env.local", ".env.production"]
+ }
+ }
+}
+```
+
+## Strict Mode Filters CI Variables
+
+In strict mode, CI provider variables (GITHUB_TOKEN, GITLAB_CI, etc.) are filtered unless explicitly listed.
+
+**Symptom:** Task fails with "authentication required" or "permission denied" in CI.
+
+**Solution:**
+
+```json
+{
+ "globalPassThroughEnv": ["GITHUB_TOKEN", "GITLAB_CI", "CI"]
+}
+```
+
+## passThroughEnv Doesn't Affect Hash
+
+Variables in `passThroughEnv` are available at runtime but changes WON'T trigger rebuilds.
+
+**Dangerous example:**
+
+```json
+{
+ "tasks": {
+ "build": {
+ "passThroughEnv": ["API_URL"]
+ }
+ }
+}
+```
+
+If `API_URL` changes from staging to production, Turbo may serve a cached build pointing to the wrong API.
+
+**Use passThroughEnv only for:**
+
+- Auth tokens that don't affect output (SENTRY_AUTH_TOKEN)
+- CI metadata (GITHUB_RUN_ID)
+- Variables consumed after build (deploy credentials)
+
+## Runtime-Created Variables Are Invisible
+
+Turbo captures env vars at startup. Variables created during execution aren't seen.
+
+**Won't work:**
+
+```bash
+# In package.json scripts
+"build": "export API_URL=$COMPUTED_VALUE && next build"
+```
+
+**Solution:** Set vars before invoking turbo:
+
+```bash
+API_URL=$COMPUTED_VALUE turbo run build
+```
+
+## Different .env Files for Different Environments
+
+If you use `.env.development` and `.env.production`, both should be in inputs.
+
+```json
+{
+ "tasks": {
+ "build": {
+ "inputs": [
+ "$TURBO_DEFAULT$",
+ ".env",
+ ".env.local",
+ ".env.development",
+ ".env.development.local",
+ ".env.production",
+ ".env.production.local"
+ ]
+ }
+ }
+}
+```
+
+## Complete Next.js Example
+
+```json
+{
+ "$schema": "https://turborepo.dev/schema.v2.json",
+ "globalEnv": ["CI", "NODE_ENV", "VERCEL"],
+ "globalPassThroughEnv": ["GITHUB_TOKEN", "VERCEL_URL"],
+ "tasks": {
+ "build": {
+ "dependsOn": ["^build"],
+ "env": [
+ "DATABASE_URL",
+ "NEXT_PUBLIC_*",
+ "!NEXT_PUBLIC_ANALYTICS_ID"
+ ],
+ "passThroughEnv": ["SENTRY_AUTH_TOKEN"],
+ "inputs": [
+ "$TURBO_DEFAULT$",
+ ".env",
+ ".env.local",
+ ".env.production",
+ ".env.production.local"
+ ],
+ "outputs": [".next/**", "!.next/cache/**"]
+ }
+ }
+}
+```
+
+This config:
+
+- Hashes `DATABASE_URL` and `NEXT_PUBLIC_*` vars (except the analytics ID)
+- Passes through SENTRY_AUTH_TOKEN without hashing
+- Includes all .env file variants in the hash
+- Makes CI tokens available globally
diff --git a/skills/turborepo/references/environment/modes.md b/skills/turborepo/references/environment/modes.md
new file mode 100644
index 0000000..2e65533
--- /dev/null
+++ b/skills/turborepo/references/environment/modes.md
@@ -0,0 +1,101 @@
+# Environment Modes
+
+Turborepo supports different modes for handling environment variables during task execution.
+
+## Strict Mode (Default)
+
+Only explicitly configured variables are available to tasks.
+
+**Behavior:**
+
+- Tasks only see vars listed in `env`, `globalEnv`, `passThroughEnv`, or `globalPassThroughEnv`
+- Unlisted vars are filtered out
+- Tasks fail if they require unlisted variables
+
+**Benefits:**
+
+- Guarantees cache correctness
+- Prevents accidental dependencies on system vars
+- Reproducible builds across machines
+
+```bash
+# Explicit (though it's the default)
+turbo run build --env-mode=strict
+```
+
+## Loose Mode
+
+All system environment variables are available to tasks.
+
+```bash
+turbo run build --env-mode=loose
+```
+
+**Behavior:**
+
+- Every system env var is passed through
+- Only vars in `env`/`globalEnv` affect the hash
+- Other vars are available but NOT hashed
+
+**Risks:**
+
+- Cache may restore incorrect results if unhashed vars changed
+- "Works on my machine" bugs
+- CI vs local environment mismatches
+
+**Use case:** Migrating legacy projects or debugging strict mode issues.
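+
+The mode can also be set persistently in `turbo.json` via the top-level `envMode` key (v2 schema), so every invocation uses loose mode without the CLI flag:
+
+```json
+{
+  "$schema": "https://turborepo.dev/schema.v2.json",
+  "envMode": "loose"
+}
+```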
+
+## Framework Inference (Automatic)
+
+Turborepo automatically detects frameworks and includes their conventional env vars.
+
+### Inferred Variables by Framework
+
+| Framework | Pattern |
+| ---------------- | ------------------- |
+| Next.js | `NEXT_PUBLIC_*` |
+| Vite | `VITE_*` |
+| Create React App | `REACT_APP_*` |
+| Gatsby | `GATSBY_*` |
+| Nuxt | `NUXT_*`, `NITRO_*` |
+| Expo | `EXPO_PUBLIC_*` |
+| Astro | `PUBLIC_*` |
+| SvelteKit | `PUBLIC_*` |
+| Remix | `REMIX_*` |
+| Redwood | `REDWOOD_ENV_*` |
+| Sanity | `SANITY_STUDIO_*` |
+| Solid | `VITE_*` |
+
+### Disabling Framework Inference
+
+Globally via CLI:
+
+```bash
+turbo run build --framework-inference=false
+```
+
+Or exclude specific patterns in config:
+
+```json
+{
+ "tasks": {
+ "build": {
+ "env": ["!NEXT_PUBLIC_*"]
+ }
+ }
+}
+```
+
+### Why Disable?
+
+- You want explicit control over all env vars
+- Framework vars shouldn't bust the cache (e.g., analytics IDs)
+- Debugging unexpected cache misses
+
+## Checking Environment Mode
+
+Use `--dry` to see which vars affect each task:
+
+```bash
+turbo run build --dry=json | jq '.tasks[].environmentVariables'
+```
diff --git a/skills/turborepo/references/filtering/RULE.md b/skills/turborepo/references/filtering/RULE.md
new file mode 100644
index 0000000..04e19cc
--- /dev/null
+++ b/skills/turborepo/references/filtering/RULE.md
@@ -0,0 +1,148 @@
+# Turborepo Filter Syntax Reference
+
+## Running Only Changed Packages: `--affected`
+
+**The primary way to run only changed packages is `--affected`:**
+
+```bash
+# Run build/test/lint only in changed packages and their dependents
+turbo run build test lint --affected
+```
+
+This compares your current branch to the default branch (usually `main` or `master`) and runs tasks in:
+
+1. Packages with file changes
+2. Packages that depend on changed packages (dependents)
+
+### Why Include Dependents?
+
+If you change `@repo/ui`, packages that import `@repo/ui` (like `apps/web`) need to re-run their tasks to verify they still work with the changes.
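+
+Turborepo derives these dependent relationships from workspace dependencies declared in `package.json` — a minimal sketch with assumed package names:
+
+```json
+// apps/web/package.json
+{
+  "name": "web",
+  "dependencies": {
+    "@repo/ui": "*"
+  }
+}
+```
+
+With this in place, a change in `packages/ui` marks `web` as affected too.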
+
+### Customizing --affected
+
+```bash
+# Use a different base branch (default: your repo's default branch)
+TURBO_SCM_BASE=origin/develop turbo run build --affected
+
+# Use a different head (default: the current working tree)
+TURBO_SCM_HEAD=HEAD~5 turbo run build --affected
+```
+
+### Common CI Pattern
+
+```yaml
+# .github/workflows/ci.yml
+- run: turbo run build test lint --affected
+```
+
+This is the most efficient CI setup - only run tasks for what actually changed.
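+
+Note that `--affected` needs enough git history to reach the comparison base; GitHub Actions checks out a shallow clone by default, so fetch full history first (a sketch, not a complete workflow):
+
+```yaml
+# .github/workflows/ci.yml
+jobs:
+  ci:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0 # full history so the base branch is reachable
+      - run: npm ci
+      - run: npx turbo run build test lint --affected
+```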
+
+---
+
+## Manual Git Comparison with --filter
+
+For more control, use `--filter` with git comparison syntax:
+
+```bash
+# Changed packages + dependents (same as --affected)
+turbo run build --filter=...[origin/main]
+
+# Only changed packages (no dependents)
+turbo run build --filter=[origin/main]
+
+# Changed packages + dependencies (packages they import)
+turbo run build --filter=[origin/main]...
+
+# Changed since last commit
+turbo run build --filter=...[HEAD^1]
+
+# Changed between two commits
+turbo run build --filter=[a1b2c3d...e4f5a6b]
+```
+
+### Comparison Syntax
+
+| Syntax | Meaning |
+| ------------- | ------------------------------------- |
+| `[ref]` | Packages changed since `ref` |
+| `...[ref]` | Changed packages + their dependents |
+| `[ref]...` | Changed packages + their dependencies |
+| `...[ref]...` | Dependencies, changed, AND dependents |
+
+---
+
+## Other Filter Types
+
+Filters select which packages to include in a `turbo run` invocation.
+
+### Basic Syntax
+
+```bash
+turbo run build --filter=<pattern>
+turbo run build -F <pattern>
+```
+
+Multiple filters combine as a union (packages matching ANY filter run).
+
+### By Package Name
+
+```bash
+--filter=web # exact match
+--filter=@acme/* # scope glob
+--filter=*-app # name glob
+```
+
+### By Directory
+
+```bash
+--filter=./apps/* # all packages in apps/
+--filter=./packages/ui # specific directory
+```
+
+### By Dependencies/Dependents
+
+| Syntax | Meaning |
+| ----------- | -------------------------------------- |
+| `pkg...` | Package AND all its dependencies |
+| `...pkg` | Package AND all its dependents |
+| `...pkg...` | Dependencies, package, AND dependents |
+| `pkg^...` | Only dependencies (exclude pkg itself) |
+| `...^pkg` | Only dependents (exclude pkg itself) |
+
+### Negation
+
+Exclude packages with `!`:
+
+```bash
+--filter=!web # exclude web
+--filter=./apps/* --filter=!admin # apps except admin
+```
+
+### Task Identifiers
+
+Run a specific task in a specific package:
+
+```bash
+turbo run web#build # only web's build task
+turbo run web#build api#test # web build + api test
+```
+
+### Combining Filters
+
+Multiple `--filter` flags create a union:
+
+```bash
+turbo run build --filter=web --filter=api # runs in both
+```
+
+---
+
+## Quick Reference: Changed Packages
+
+| Goal | Command |
+| ---------------------------------- | ----------------------------------------------------------- |
+| Changed + dependents (recommended) | `turbo run build --affected` |
+| Custom base branch | `TURBO_SCM_BASE=origin/develop turbo run build --affected` |
+| Only changed (no dependents) | `turbo run build --filter=[origin/main]` |
+| Changed + dependencies | `turbo run build --filter=[origin/main]...` |
+| Since last commit | `turbo run build --filter=...[HEAD^1]` |
diff --git a/skills/turborepo/references/filtering/patterns.md b/skills/turborepo/references/filtering/patterns.md
new file mode 100644
index 0000000..17b9f1c
--- /dev/null
+++ b/skills/turborepo/references/filtering/patterns.md
@@ -0,0 +1,152 @@
+# Common Filter Patterns
+
+Practical examples for typical monorepo scenarios.
+
+## Single Package
+
+Run task in one package:
+
+```bash
+turbo run build --filter=web
+turbo run test --filter=@acme/api
+```
+
+## Package with Dependencies
+
+Build a package and everything it depends on:
+
+```bash
+turbo run build --filter=web...
+```
+
+Useful for: ensuring all dependencies are built before the target.
+
+## Package Dependents
+
+Run in all packages that depend on a library:
+
+```bash
+turbo run test --filter=...ui
+```
+
+Useful for: testing consumers after changing a shared package.
+
+## Dependents Only (Exclude Target)
+
+Test packages that depend on ui, but not ui itself:
+
+```bash
+turbo run test --filter=...^ui
+```
+
+## Changed Packages
+
+Run only in packages with file changes since last commit:
+
+```bash
+turbo run lint --filter=[HEAD^1]
+```
+
+Since a specific branch point:
+
+```bash
+turbo run lint --filter=[main...HEAD]
+```
+
+## Changed + Dependents (PR Builds)
+
+Run in changed packages AND packages that depend on them:
+
+```bash
+turbo run build test --filter=...[HEAD^1]
+```
+
+Or use the shortcut:
+
+```bash
+turbo run build test --affected
+```
+
+## Directory-Based
+
+Run in all apps:
+
+```bash
+turbo run build --filter=./apps/*
+```
+
+Run in specific directories:
+
+```bash
+turbo run build --filter=./apps/web --filter=./apps/api
+```
+
+## Scope-Based
+
+Run in all packages under a scope:
+
+```bash
+turbo run build --filter=@acme/*
+```
+
+## Exclusions
+
+Run in all apps except admin:
+
+```bash
+turbo run build --filter=./apps/* --filter=!admin
+```
+
+Run everywhere except specific packages:
+
+```bash
+turbo run lint --filter=!legacy-app --filter=!deprecated-pkg
+```
+
+## Complex Combinations
+
+Apps that changed, plus their dependents:
+
+```bash
+turbo run build --filter=...{./apps/*}[HEAD^1]
+```
+
+All packages except docs, but only if changed:
+
+```bash
+turbo run build --filter=[main...HEAD] --filter=!docs
+```
+
+## Debugging Filters
+
+Use `--dry` to see what would run without executing:
+
+```bash
+turbo run build --filter=web... --dry
+```
+
+Use `--dry=json` for machine-readable output:
+
+```bash
+turbo run build --filter=...[HEAD^1] --dry=json
+```
+
+## CI/CD Patterns
+
+PR validation (most common):
+
+```bash
+turbo run build test lint --affected
+```
+
+Deploy only changed apps:
+
+```bash
+turbo run deploy --filter={./apps/*}[main...HEAD]
+```
+
+Full rebuild of specific app and deps:
+
+```bash
+turbo run build --filter=production-app...
+```
diff --git a/skills/turborepo/references/watch/RULE.md b/skills/turborepo/references/watch/RULE.md
new file mode 100644
index 0000000..44bcf13
--- /dev/null
+++ b/skills/turborepo/references/watch/RULE.md
@@ -0,0 +1,99 @@
+# turbo watch
+
+Full docs: https://turborepo.dev/docs/reference/watch
+
+Re-run tasks automatically when code changes. Dependency-aware.
+
+```bash
+turbo watch [tasks]
+```
+
+## Basic Usage
+
+```bash
+# Watch and re-run build task when code changes
+turbo watch build
+
+# Watch multiple tasks
+turbo watch build test lint
+```
+
+Tasks re-run in order configured in `turbo.json` when source files change.
+
+## With Persistent Tasks
+
+Persistent tasks (`"persistent": true`) won't exit, so they can't be depended on. They work the same in `turbo watch` as `turbo run`.
+
+### Dependency-Aware Persistent Tasks
+
+If your tool has built-in watching (like `next dev`), use its watcher:
+
+```json
+{
+ "tasks": {
+ "dev": {
+ "persistent": true,
+ "cache": false
+ }
+ }
+}
+```
+
+### Non-Dependency-Aware Tools
+
+For tools that don't detect dependency changes, use `interruptible`:
+
+```json
+{
+ "tasks": {
+ "dev": {
+ "persistent": true,
+ "interruptible": true,
+ "cache": false
+ }
+ }
+}
+```
+
+`turbo watch` will restart interruptible tasks when dependencies change.
+
+## Limitations
+
+### Caching
+
+Caching is experimental with watch mode:
+
+```bash
+turbo watch your-tasks --experimental-write-cache
+```
+
+### Task Outputs in Source Control
+
+If tasks write files tracked by git, watch mode may loop infinitely. Watch mode uses file hashes to guard against this, but it's not foolproof.
+
+**Recommendation**: Remove task outputs from git.
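+
+In practice that means ignoring the output directories your tasks declare — a sketch for common setups:
+
+```gitignore
+dist/
+build/
+.next/
+*.tsbuildinfo
+```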
+
+## vs turbo run
+
+| Feature | `turbo run` | `turbo watch` |
+| ----------------- | ----------- | ------------- |
+| Runs once | Yes | No |
+| Re-runs on change | No | Yes |
+| Caching | Full | Experimental |
+| Use case | CI, one-off | Development |
+
+## Common Patterns
+
+### Development Workflow
+
+```bash
+# Run dev servers and watch for build changes
+turbo watch dev build
+```
+
+### Type Checking During Development
+
+```bash
+# Watch and re-run type checks
+turbo watch check-types
+```
diff --git a/skills/two-factor-authentication-best-practices/SKILL.md b/skills/two-factor-authentication-best-practices/SKILL.md
new file mode 100644
index 0000000..d44f9a3
--- /dev/null
+++ b/skills/two-factor-authentication-best-practices/SKILL.md
@@ -0,0 +1,417 @@
+---
+name: two-factor-authentication-best-practices
+description: This skill provides guidance and enforcement rules for implementing secure two-factor authentication (2FA) using Better Auth's twoFactor plugin.
+---
+
+## Setting Up Two-Factor Authentication
+
+When adding 2FA to your application, configure the `twoFactor` plugin with your app name as the issuer. This name appears in authenticator apps when users scan the QR code.
+
+```ts
+import { betterAuth } from "better-auth";
+import { twoFactor } from "better-auth/plugins";
+
+export const auth = betterAuth({
+ appName: "My App", // Used as the default issuer for TOTP
+ plugins: [
+ twoFactor({
+ issuer: "My App", // Optional: override the app name for 2FA specifically
+ }),
+ ],
+});
+```
+
+**Note**: After adding the plugin, run `npx @better-auth/cli migrate` to add the required database fields and tables.
+
+### Client-Side Setup
+
+Add the client plugin and configure the redirect behavior for 2FA verification:
+
+```ts
+import { createAuthClient } from "better-auth/client";
+import { twoFactorClient } from "better-auth/client/plugins";
+
+export const authClient = createAuthClient({
+ plugins: [
+ twoFactorClient({
+ onTwoFactorRedirect() {
+ window.location.href = "/2fa"; // Redirect to your 2FA verification page
+ },
+ }),
+ ],
+});
+```
+
+## Enabling 2FA for Users
+
+When a user enables 2FA, require their password for verification. The enable endpoint returns a TOTP URI for QR code generation and backup codes for account recovery.
+
+```ts
+const enable2FA = async (password: string) => {
+ const { data, error } = await authClient.twoFactor.enable({
+ password,
+ });
+
+ if (data) {
+ // data.totpURI - Use this to generate a QR code
+ // data.backupCodes - Display these to the user for safekeeping
+ }
+};
+```
+
+**Important**: The `twoFactorEnabled` flag on the user is not set to `true` until the user successfully verifies their first TOTP code. This ensures users have properly configured their authenticator app before 2FA is fully active.
+
+### Skipping Initial Verification
+
+If you want to enable 2FA immediately without requiring verification, set `skipVerificationOnEnable`:
+
+```ts
+twoFactor({
+ skipVerificationOnEnable: true, // Not recommended for most use cases
+});
+```
+
+**Note**: This is generally not recommended as it doesn't confirm the user has successfully set up their authenticator app.
+
+## TOTP (Authenticator App)
+
+TOTP generates time-based codes using an authenticator app (Google Authenticator, Authy, etc.). Codes are valid for 30 seconds by default.
+
+### Displaying the QR Code
+
+Use the TOTP URI to generate a QR code for users to scan:
+
+```tsx
+import QRCode from "react-qr-code";
+
+const TotpSetup = ({ totpURI }: { totpURI: string }) => {
+  return <QRCode value={totpURI} />;
+};
+```
+
+### Verifying TOTP Codes
+
+Better Auth accepts codes from one period before and one after the current time, accommodating minor clock differences between devices:
+
+```ts
+const verifyTotp = async (code: string) => {
+ const { data, error } = await authClient.twoFactor.verifyTotp({
+ code,
+ trustDevice: true, // Optional: remember this device for 30 days
+ });
+};
+```
+
+### TOTP Configuration Options
+
+```ts
+twoFactor({
+ totpOptions: {
+ digits: 6, // 6 or 8 digits (default: 6)
+ period: 30, // Code validity period in seconds (default: 30)
+ },
+});
+```
+
+## OTP (Email/SMS)
+
+OTP sends a one-time code to the user's email or phone. You must implement the `sendOTP` function to deliver codes.
+
+### Configuring OTP Delivery
+
+```ts
+import { betterAuth } from "better-auth";
+import { twoFactor } from "better-auth/plugins";
+import { sendEmail } from "./email";
+
+export const auth = betterAuth({
+ plugins: [
+ twoFactor({
+ otpOptions: {
+ sendOTP: async ({ user, otp }, ctx) => {
+ await sendEmail({
+ to: user.email,
+ subject: "Your verification code",
+ text: `Your code is: ${otp}`,
+ });
+ },
+ period: 5, // Code validity in minutes (default: 3)
+ digits: 6, // Number of digits (default: 6)
+ allowedAttempts: 5, // Max verification attempts (default: 5)
+ },
+ }),
+ ],
+});
+```
+
+### Sending and Verifying OTP
+
+```ts
+// Request an OTP to be sent
+const sendOtp = async () => {
+ const { data, error } = await authClient.twoFactor.sendOtp();
+};
+
+// Verify the OTP code
+const verifyOtp = async (code: string) => {
+ const { data, error } = await authClient.twoFactor.verifyOtp({
+ code,
+ trustDevice: true,
+ });
+};
+```
+
+### OTP Storage Security
+
+Configure how OTP codes are stored in the database:
+
+```ts
+twoFactor({
+ otpOptions: {
+ storeOTP: "encrypted", // Options: "plain", "encrypted", "hashed"
+ },
+});
+```
+
+For custom encryption:
+
+```ts
+twoFactor({
+ otpOptions: {
+ storeOTP: {
+ encrypt: async (token) => myEncrypt(token),
+ decrypt: async (token) => myDecrypt(token),
+ },
+ },
+});
+```
+
+## Backup Codes
+
+Backup codes provide account recovery when users lose access to their authenticator app or phone. They are generated automatically when 2FA is enabled.
+
+### Displaying Backup Codes
+
+Always show backup codes to users when they enable 2FA:
+
+```tsx
+const BackupCodes = ({ codes }: { codes: string[] }) => {
+  return (
+    <div>
+      <p>Save these codes in a secure location:</p>
+      <ul>
+        {codes.map((code, i) => (
+          <li key={i}>{code}</li>
+        ))}
+      </ul>
+    </div>
+  );
+};
+```
+
+### Regenerating Backup Codes
+
+When users need new codes, regenerate them (this invalidates all previous codes):
+
+```ts
+const regenerateBackupCodes = async (password: string) => {
+ const { data, error } = await authClient.twoFactor.generateBackupCodes({
+ password,
+ });
+ // data.backupCodes contains the new codes
+};
+```
+
+### Using Backup Codes for Recovery
+
+```ts
+const verifyBackupCode = async (code: string) => {
+ const { data, error } = await authClient.twoFactor.verifyBackupCode({
+ code,
+ trustDevice: true,
+ });
+};
+```
+
+**Note**: Each backup code can only be used once and is removed from the database after successful verification.
+
+### Backup Code Configuration
+
+```ts
+twoFactor({
+ backupCodeOptions: {
+ amount: 10, // Number of codes to generate (default: 10)
+ length: 10, // Length of each code (default: 10)
+ storeBackupCodes: "encrypted", // Options: "plain", "encrypted"
+ },
+});
+```
+
+## Handling 2FA During Sign-In
+
+When a user with 2FA enabled signs in, the response includes `twoFactorRedirect: true`:
+
+```ts
+const signIn = async (email: string, password: string) => {
+ const { data, error } = await authClient.signIn.email(
+ {
+ email,
+ password,
+ },
+ {
+ onSuccess(context) {
+ if (context.data.twoFactorRedirect) {
+ // Redirect to 2FA verification page
+ window.location.href = "/2fa";
+ }
+ },
+ }
+ );
+};
+```
+
+### Server-Side 2FA Detection
+
+When using `auth.api.signInEmail` on the server, check for 2FA redirect:
+
+```ts
+const response = await auth.api.signInEmail({
+ body: {
+ email: "user@example.com",
+ password: "password",
+ },
+});
+
+if ("twoFactorRedirect" in response) {
+ // Handle 2FA verification
+}
+```
+
+## Trusted Devices
+
+Trusted devices allow users to skip 2FA verification on subsequent sign-ins for a configurable period.
+
+### Enabling Trust on Verification
+
+Pass `trustDevice: true` when verifying 2FA:
+
+```ts
+await authClient.twoFactor.verifyTotp({
+ code: "123456",
+ trustDevice: true,
+});
+```
+
+### Configuring Trust Duration
+
+```ts
+twoFactor({
+ trustDeviceMaxAge: 30 * 24 * 60 * 60, // 30 days in seconds (default)
+});
+```
+
+**Note**: The trust period refreshes on each successful sign-in within the trust window.
+
+## Security Considerations
+
+### Session Management
+
+During the 2FA flow:
+
+1. User signs in with credentials
+2. Session cookie is removed (not yet authenticated)
+3. A temporary two-factor cookie is set (default: 10-minute expiration)
+4. User verifies via TOTP, OTP, or backup code
+5. Session cookie is created upon successful verification
+
+Configure the two-factor cookie expiration:
+
+```ts
+twoFactor({
+ twoFactorCookieMaxAge: 600, // 10 minutes in seconds (default)
+});
+```
+
+### Rate Limiting
+
+Better Auth applies built-in rate limiting to all 2FA endpoints (3 requests per 10 seconds). For OTP verification, additional attempt limiting is applied:
+
+```ts
+twoFactor({
+ otpOptions: {
+ allowedAttempts: 5, // Max attempts per OTP code (default: 5)
+ },
+});
+```
+
+### Encryption at Rest
+
+- TOTP secrets are encrypted using symmetric encryption with your auth secret
+- Backup codes are stored encrypted by default
+- OTP codes can be configured for plain, encrypted, or hashed storage
+
+### Constant-Time Comparison
+
+Better Auth uses constant-time comparison for OTP verification to prevent timing attacks.
+
+### Credential Account Requirement
+
+Two-factor authentication can only be enabled for credential (email/password) accounts. For social accounts, it's assumed the provider already handles 2FA.
+
+## Disabling 2FA
+
+Allow users to disable 2FA with password confirmation:
+
+```ts
+const disable2FA = async (password: string) => {
+ const { data, error } = await authClient.twoFactor.disable({
+ password,
+ });
+};
+```
+
+**Note**: When 2FA is disabled, trusted device records are revoked.
+
+## Complete Configuration Example
+
+```ts
+import { betterAuth } from "better-auth";
+import { twoFactor } from "better-auth/plugins";
+import { sendEmail } from "./email";
+
+export const auth = betterAuth({
+ appName: "My App",
+ plugins: [
+ twoFactor({
+ // TOTP settings
+ issuer: "My App",
+ totpOptions: {
+ digits: 6,
+ period: 30,
+ },
+ // OTP settings
+ otpOptions: {
+ sendOTP: async ({ user, otp }) => {
+ await sendEmail({
+ to: user.email,
+ subject: "Your verification code",
+ text: `Your code is: ${otp}`,
+ });
+ },
+ period: 5,
+ allowedAttempts: 5,
+ storeOTP: "encrypted",
+ },
+ // Backup code settings
+ backupCodeOptions: {
+ amount: 10,
+ length: 10,
+ storeBackupCodes: "encrypted",
+ },
+ // Session settings
+ twoFactorCookieMaxAge: 600, // 10 minutes
+ trustDeviceMaxAge: 30 * 24 * 60 * 60, // 30 days
+ }),
+ ],
+});
+```
diff --git a/skills/two-factor-authentication-best-practices/two-factor-authentication-best-practices b/skills/two-factor-authentication-best-practices/two-factor-authentication-best-practices
new file mode 120000
index 0000000..412d52f
--- /dev/null
+++ b/skills/two-factor-authentication-best-practices/two-factor-authentication-best-practices
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/two-factor-authentication-best-practices/
\ No newline at end of file
diff --git a/skills/ui-animation/ui-animation b/skills/ui-animation/ui-animation
new file mode 120000
index 0000000..d8f9b64
--- /dev/null
+++ b/skills/ui-animation/ui-animation
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/ui-animation/
\ No newline at end of file
diff --git a/skills/unocss/GENERATION.md b/skills/unocss/GENERATION.md
new file mode 100644
index 0000000..dff7d60
--- /dev/null
+++ b/skills/unocss/GENERATION.md
@@ -0,0 +1,5 @@
+# Generation Info
+
+- **Source:** `sources/unocss`
+- **Git SHA:** `2f7f267d0cc0c43d44357208aabb35b049359a08`
+- **Generated:** 2026-01-28
diff --git a/skills/unocss/SKILL.md b/skills/unocss/SKILL.md
new file mode 100644
index 0000000..d87365f
--- /dev/null
+++ b/skills/unocss/SKILL.md
@@ -0,0 +1,64 @@
+---
+name: unocss
+description: UnoCSS instant atomic CSS engine, superset of Tailwind CSS. Use when configuring UnoCSS, writing utility rules, shortcuts, or working with presets like Wind, Icons, Attributify.
+metadata:
+ author: Anthony Fu
+ version: "2026.1.28"
+ source: Generated from https://github.com/unocss/unocss, scripts located at https://github.com/antfu/skills
+---
+
+UnoCSS is an instant atomic CSS engine designed to be flexible and extensible. The core is un-opinionated - all CSS utilities are provided via presets. It's a superset of Tailwind CSS, so you can reuse your Tailwind knowledge for basic syntax usage.
+
+**Important:** Before writing UnoCSS code, agents should check for `uno.config.*` or `unocss.config.*` files in the project root to understand what presets, rules, and shortcuts are available. If the project setup is unclear, avoid using attributify mode and other advanced features - stick to basic `class` usage.
+
+> This skill is based on UnoCSS 66.x, generated on 2026-01-28.
+
+## Core
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Configuration | Config file setup and all configuration options | [core-config](references/core-config.md) |
+| Rules | Static and dynamic rules for generating CSS utilities | [core-rules](references/core-rules.md) |
+| Shortcuts | Combine multiple rules into single shorthands | [core-shortcuts](references/core-shortcuts.md) |
+| Theme | Theming system for colors, breakpoints, and design tokens | [core-theme](references/core-theme.md) |
+| Variants | Apply variations like hover:, dark:, responsive to rules | [core-variants](references/core-variants.md) |
+| Extracting | How UnoCSS extracts utilities from source code | [core-extracting](references/core-extracting.md) |
+| Safelist & Blocklist | Force include or exclude specific utilities | [core-safelist](references/core-safelist.md) |
+| Layers & Preflights | CSS layer ordering and raw CSS injection | [core-layers](references/core-layers.md) |
+
+## Presets
+
+### Main Presets
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Preset Wind3 | Tailwind CSS v3 / Windi CSS compatible preset (most common) | [preset-wind3](references/preset-wind3.md) |
+| Preset Wind4 | Tailwind CSS v4 compatible preset with modern CSS features | [preset-wind4](references/preset-wind4.md) |
+| Preset Mini | Minimal preset with essential utilities for custom builds | [preset-mini](references/preset-mini.md) |
+
+### Feature Presets
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Preset Icons | Pure CSS icons using Iconify with any icon set | [preset-icons](references/preset-icons.md) |
+| Preset Attributify | Group utilities in HTML attributes instead of class | [preset-attributify](references/preset-attributify.md) |
+| Preset Typography | Prose classes for typographic defaults | [preset-typography](references/preset-typography.md) |
+| Preset Web Fonts | Easy Google Fonts and other web fonts integration | [preset-web-fonts](references/preset-web-fonts.md) |
+| Preset Tagify | Use utilities as HTML tag names | [preset-tagify](references/preset-tagify.md) |
+| Preset Rem to Px | Convert rem units to px for utilities | [preset-rem-to-px](references/preset-rem-to-px.md) |
+
+## Transformers
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Variant Group | Shorthand for grouping utilities with common prefixes | [transformer-variant-group](references/transformer-variant-group.md) |
+| Directives | CSS directives: @apply, @screen, theme(), icon() | [transformer-directives](references/transformer-directives.md) |
+| Compile Class | Compile multiple classes into one hashed class | [transformer-compile-class](references/transformer-compile-class.md) |
+| Attributify JSX | Support valueless attributify in JSX/TSX | [transformer-attributify-jsx](references/transformer-attributify-jsx.md) |
+
+## Integrations
+
+| Topic | Description | Reference |
+|-------|-------------|-----------|
+| Vite Integration | Setting up UnoCSS with Vite and framework-specific tips | [integrations-vite](references/integrations-vite.md) |
+| Nuxt Integration | UnoCSS module for Nuxt applications | [integrations-nuxt](references/integrations-nuxt.md) |
diff --git a/skills/unocss/references/core-config.md b/skills/unocss/references/core-config.md
new file mode 100644
index 0000000..7336e4a
--- /dev/null
+++ b/skills/unocss/references/core-config.md
@@ -0,0 +1,187 @@
+---
+name: unocss-configuration
+description: Config file setup and all configuration options for UnoCSS
+---
+
+# UnoCSS Configuration
+
+UnoCSS is configured via a dedicated config file in your project root.
+
+## Config File
+
+**Recommended:** Use a dedicated `uno.config.ts` file for best IDE support and HMR.
+
+```ts
+// uno.config.ts
+import {
+ defineConfig,
+ presetAttributify,
+ presetIcons,
+ presetTypography,
+ presetWebFonts,
+ presetWind3,
+ transformerDirectives,
+ transformerVariantGroup
+} from 'unocss'
+
+export default defineConfig({
+ shortcuts: [
+ // ...
+ ],
+ theme: {
+ colors: {
+ // ...
+ }
+ },
+ presets: [
+ presetWind3(),
+ presetAttributify(),
+ presetIcons(),
+ presetTypography(),
+ presetWebFonts({
+ fonts: {
+ // ...
+ },
+ }),
+ ],
+ transformers: [
+ transformerDirectives(),
+ transformerVariantGroup(),
+ ],
+})
+```
+
+UnoCSS automatically looks for `uno.config.{js,ts,mjs,mts}` or `unocss.config.{js,ts,mjs,mts}` in the project root.
+
+## Key Configuration Options
+
+### rules
+Define CSS utility rules. Later entries have higher priority.
+
+```ts
+rules: [
+ ['m-1', { margin: '0.25rem' }],
+ [/^m-(\d+)$/, ([, d]) => ({ margin: `${d / 4}rem` })],
+]
+```
+
+### shortcuts
+Combine multiple rules into a single shorthand.
+
+```ts
+shortcuts: {
+ 'btn': 'py-2 px-4 font-semibold rounded-lg shadow-md',
+}
+```
+
+### theme
+Theme object for design tokens shared between rules.
+
+```ts
+theme: {
+ colors: {
+ brand: '#942192',
+ },
+ breakpoints: {
+ sm: '640px',
+ md: '768px',
+ },
+}
+```
+
+### presets
+Predefined configurations bundling rules, variants, and themes.
+
+```ts
+presets: [
+ presetWind3(),
+ presetIcons(),
+]
+```
+
+### transformers
+Transform source code to support special syntax.
+
+```ts
+transformers: [
+ transformerDirectives(),
+ transformerVariantGroup(),
+]
+```
+
+### variants
+Preprocess selectors with ability to rewrite CSS output.
+
+### extractors
+Handle source files and extract utility class names.
+
+### preflights
+Inject raw CSS globally.
+
+### layers
+Control the order of CSS layers. Default is `0`.
+
+```ts
+layers: {
+ 'components': -1,
+ 'default': 1,
+ 'utilities': 2,
+}
+```
+
+### safelist
+Utilities that are always included in output.
+
+```ts
+safelist: ['p-1', 'p-2', 'p-3']
+```
+
+### blocklist
+Utilities that are always excluded.
+
+```ts
+blocklist: ['p-1', /^p-[2-4]$/]
+```
+
+### content
+Configure where to extract utilities from.
+
+```ts
+content: {
+ pipeline: {
+ include: [/\.(vue|svelte|tsx|html)($|\?)/],
+ },
+ filesystem: ['src/**/*.php'],
+}
+```
+
+### separators
+Variant separator characters. Default: `[':', '-']`
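+
+For example, a sketch restricting the separator to `:` only, so `hover-m-2` would no longer be parsed as a `hover:` variant (assumes preset-wind3 is installed):
+
+```ts
+// uno.config.ts
+import { defineConfig, presetWind3 } from 'unocss'
+
+export default defineConfig({
+  presets: [presetWind3()],
+  // only `hover:m-2` is recognized, not `hover-m-2`
+  separators: [':'],
+})
+```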
+
+### outputToCssLayers
+Output UnoCSS layers as CSS Cascade Layers.
+
+```ts
+outputToCssLayers: true
+```
+
+## Specifying Config File Location
+
+```ts
+// vite.config.ts
+import UnoCSS from 'unocss/vite'
+import { defineConfig } from 'vite'
+
+export default defineConfig({
+ plugins: [
+ UnoCSS({
+ configFile: '../my-uno.config.ts',
+ }),
+ ],
+})
+```
+
+
diff --git a/skills/unocss/references/core-extracting.md b/skills/unocss/references/core-extracting.md
new file mode 100644
index 0000000..c3b5b21
--- /dev/null
+++ b/skills/unocss/references/core-extracting.md
@@ -0,0 +1,137 @@
+---
+name: unocss-extracting
+description: How UnoCSS extracts utilities from source code
+---
+
+# Extracting
+
+UnoCSS searches for utility usages in your codebase and generates CSS on-demand.
+
+## Content Sources
+
+### Pipeline Extraction (Vite/Webpack)
+
+Most efficient - extracts from build tool pipeline.
+
+**Default file types:** `.jsx`, `.tsx`, `.vue`, `.md`, `.html`, `.svelte`, `.astro`, `.marko`
+
+**Not included by default:** `.js`, `.ts`
+
+```ts
+export default defineConfig({
+ content: {
+ pipeline: {
+ include: [
+ /\.(vue|svelte|[jt]sx|mdx?|astro|html)($|\?)/,
+ 'src/**/*.{js,ts}', // Add js/ts
+ ],
+ },
+ },
+})
+```
+
+### Filesystem Extraction
+
+For files not in build pipeline:
+
+```ts
+export default defineConfig({
+ content: {
+ filesystem: [
+ 'src/**/*.php',
+ 'public/*.html',
+ ],
+ },
+})
+```
+
+### Inline Text Extraction
+
+```ts
+export default defineConfig({
+ content: {
+ inline: [
+ '<div class="p-4">Some text</div>',
+ async () => (await fetch('https://example.com')).text(),
+ ],
+ },
+})
+```
+
+## Magic Comments
+
+### @unocss-include
+
+Force scan a file:
+
+```ts
+// @unocss-include
+export const classes = {
+ active: 'bg-primary text-white',
+}
+```
+
+### @unocss-ignore
+
+Skip entire file:
+
+```ts
+// @unocss-ignore
+```
+
+### @unocss-skip-start / @unocss-skip-end
+
+Skip specific blocks:
+
+```html
+<p class="text-green">Extracted</p>
+<!-- @unocss-skip-start -->
+<p class="text-red">NOT extracted</p>
+<!-- @unocss-skip-end -->
+```
+
+## Limitations
+
+UnoCSS works at **build time** - dynamic classes don't work:
+
+```html
+<!-- NOT working: the class name is constructed at runtime -->
+<div class="p-${size}"></div>
+```
+
+### Solutions
+
+**1. Safelist** - Pre-generate known values:
+
+```ts
+safelist: ['p-1', 'p-2', 'p-3', 'p-4']
+```
+
+**2. Static mapping** - List combinations statically:
+
+```ts
+const colors = {
+ red: 'text-red border-red',
+ blue: 'text-blue border-blue',
+}
+```
+
+**3. Runtime** - Use `@unocss/runtime` for true runtime generation.
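+
+A minimal bootstrap sketch for option 3 (assumes `@unocss/runtime` and a preset are installed; check the package docs for the exact options):
+
+```ts
+// main.ts
+import initUnocssRuntime from '@unocss/runtime'
+import presetWind3 from '@unocss/preset-wind3'
+
+// Generates CSS in the browser by observing the DOM
+initUnocssRuntime({
+  defaults: {
+    presets: [presetWind3()],
+  },
+})
+```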
+
+## Custom Extractors
+
+```ts
+extractors: [
+ {
+ name: 'my-extractor',
+ extract({ code }) {
+ return code.match(/class:[\w-]+/g) || []
+ },
+ },
+]
+```
+
+
diff --git a/skills/unocss/references/core-layers.md b/skills/unocss/references/core-layers.md
new file mode 100644
index 0000000..ae297df
--- /dev/null
+++ b/skills/unocss/references/core-layers.md
@@ -0,0 +1,104 @@
+---
+name: unocss-layers-preflights
+description: CSS layer ordering and raw CSS injection
+---
+
+# Layers and Preflights
+
+Control CSS output order and inject global CSS.
+
+## Layers
+
+Set layer on rules:
+
+```ts
+rules: [
+ [/^m-(\d)$/, ([, d]) => ({ margin: `${d / 4}rem` }), { layer: 'utilities' }],
+ ['btn', { padding: '4px' }], // default layer
+]
+```
+
+### Layer Ordering
+
+```ts
+layers: {
+ 'components': -1,
+ 'default': 1,
+ 'utilities': 2,
+}
+```
+
+### Import Layers Separately
+
+```ts
+import 'uno:components.css'
+import 'uno.css'
+import './my-custom.css'
+import 'uno:utilities.css'
+```
+
+### CSS Cascade Layers
+
+```ts
+outputToCssLayers: true
+
+// Or with custom names
+outputToCssLayers: {
+ cssLayerName: (layer) => {
+ if (layer === 'default') return 'utilities'
+ if (layer === 'shortcuts') return 'utilities.shortcuts'
+ }
+}
+```
+
+## Layer Variants
+
+Use the `uno-layer-<name>:` variant to assign a utility to a layer on the fly:
+
+```html
+<p class="uno-layer-my-layer:text-xl">text</p>
+```
+
+## Preflights
+
+Inject raw CSS globally:
+
+```ts
+preflights: [
+ {
+ getCSS: ({ theme }) => `
+ * {
+ color: ${theme.colors.gray?.[700] ?? '#333'};
+ margin: 0;
+ }
+ `,
+ },
+]
+```
+
+With layer:
+
+```ts
+preflights: [
+ {
+ layer: 'base',
+ getCSS: () => `html { font-family: system-ui; }`,
+ },
+]
+```
+
+## preset-wind4 Layers
+
+| Layer | Description | Order |
+|-------|-------------|-------|
+| `properties` | CSS @property rules | -200 |
+| `theme` | Theme CSS variables | -150 |
+| `base` | Reset styles | -100 |
+
+
diff --git a/skills/unocss/references/core-rules.md b/skills/unocss/references/core-rules.md
new file mode 100644
index 0000000..310aa99
--- /dev/null
+++ b/skills/unocss/references/core-rules.md
@@ -0,0 +1,166 @@
+---
+name: unocss-rules
+description: Static and dynamic rules for generating CSS utilities in UnoCSS
+---
+
+# UnoCSS Rules
+
+Rules define utility classes and the CSS they generate. UnoCSS has many built-in rules via presets and allows custom rules.
+
+## Static Rules
+
+Simple mapping from class name to CSS properties:
+
+```ts
+rules: [
+ ['m-1', { margin: '0.25rem' }],
+ ['font-bold', { 'font-weight': 700 }],
+]
+```
+
+Usage: `<div class="m-1">` generates `.m-1 { margin: 0.25rem; }`
+
+**Note:** Use CSS property syntax with hyphens (e.g., `font-weight` not `fontWeight`). Quote properties with hyphens.
+
+## Dynamic Rules
+
+Use RegExp matcher with function body for flexible utilities:
+
+```ts
+rules: [
+ // Match m-1, m-2, m-100, etc.
+ [/^m-(\d+)$/, ([, d]) => ({ margin: `${d / 4}rem` })],
+
+ // Access theme and context
+ [/^p-(\d+)$/, (match, ctx) => ({ padding: `${match[1] / 4}rem` })],
+]
+```
+
+The function receives:
+1. RegExp match result (destructure to get captured groups)
+2. Context object with `theme`, `symbols`, etc.
+
+## CSS Fallback Values
+
+Return 2D array for CSS property fallbacks (browser compatibility):
+
+```ts
+rules: [
+ [/^h-(\d+)dvh$/, ([_, d]) => [
+ ['height', `${d}vh`],
+ ['height', `${d}dvh`],
+ ]],
+]
+```
+
+Generates: `.h-100dvh { height: 100vh; height: 100dvh; }`
+
+## Special Symbols
+
+Control CSS output with symbols from `@unocss/core`:
+
+```ts
+import { symbols } from '@unocss/core'
+
+rules: [
+ ['grid', {
+ [symbols.parent]: '@supports (display: grid)',
+ display: 'grid',
+ }],
+]
+```
+
+### Available Symbols
+
+| Symbol | Description |
+|--------|-------------|
+| `symbols.parent` | Parent wrapper (e.g., `@supports`, `@media`) |
+| `symbols.selector` | Function to modify the selector |
+| `symbols.layer` | Set the UnoCSS layer |
+| `symbols.variants` | Array of variant handlers |
+| `symbols.shortcutsNoMerge` | Disable merging in shortcuts |
+| `symbols.noMerge` | Disable rule merging |
+| `symbols.sort` | Override sorting order |
+| `symbols.body` | Full control of CSS body |
+
+## Multi-Selector Rules
+
+Use generator functions to yield multiple CSS rules:
+
+```ts
+rules: [
+ [/^button-(.*)$/, function* ([, color], { symbols }) {
+ yield { background: color }
+ yield {
+ [symbols.selector]: selector => `${selector}:hover`,
+ background: `color-mix(in srgb, ${color} 90%, black)`
+ }
+ }],
+]
+```
+
+Generates both `.button-red { background: red; }` and `.button-red:hover { ... }`
+
+## Fully Controlled Rules
+
+Return a string for complete CSS control (advanced):
+
+```ts
+import { defineConfig, toEscapedSelector as e } from 'unocss'
+
+rules: [
+ [/^custom-(.+)$/, ([, name], { rawSelector, theme }) => {
+ const selector = e(rawSelector)
+ return `
+${selector} { font-size: ${theme.fontSize.sm}; }
+${selector}::after { content: 'after'; }
+@media (min-width: ${theme.breakpoints.sm}) {
+ ${selector} { font-size: ${theme.fontSize.lg}; }
+}
+`
+ }],
+]
+```
+
+**Warning:** Fully controlled rules don't work with variants like `hover:`.
+
+## Symbols.body for Variant Support
+
+Use `symbols.body` to keep variant support with custom CSS:
+
+```ts
+rules: [
+ ['custom-red', {
+ [symbols.body]: `
+ font-size: 1rem;
+ &::after { content: 'after'; }
+ & > .bar { color: red; }
+ `,
+ [symbols.selector]: selector => `:is(${selector})`,
+ }]
+]
+```
+
+## Rule Ordering
+
+Later rules have higher priority. Dynamic rules output is sorted alphabetically within the group.
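+
+A sketch of the override pattern this enables — a later static rule wins over an earlier dynamic one for the same class:
+
+```ts
+rules: [
+  [/^m-(\d+)$/, ([, d]) => ({ margin: `${d / 4}rem` })],
+  // Later entry: overrides the regex result for `m-1` specifically
+  ['m-1', { margin: '0.3rem' }],
+]
+```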
+
+## Rule Merging
+
+UnoCSS merges rules with identical CSS bodies:
+
+```html
+<div class="m-2 hover:m2"></div>
+```
+
+Generates:
+```css
+.hover\:m2:hover, .m-2 { margin: 0.5rem; }
+```
+
+Use `symbols.noMerge` to disable.
+
+
diff --git a/skills/unocss/references/core-safelist.md b/skills/unocss/references/core-safelist.md
new file mode 100644
index 0000000..d2f0bc8
--- /dev/null
+++ b/skills/unocss/references/core-safelist.md
@@ -0,0 +1,105 @@
+---
+name: unocss-safelist-blocklist
+description: Force include or exclude specific utilities
+---
+
+# Safelist and Blocklist
+
+Control which utilities are always included or excluded.
+
+## Safelist
+
+Utilities always included, regardless of detection:
+
+```ts
+export default defineConfig({
+ safelist: [
+ 'p-1', 'p-2', 'p-3',
+ // Dynamic generation
+ ...Array.from({ length: 4 }, (_, i) => `p-${i + 1}`),
+ ],
+})
+```
+
+### Function Form
+
+```ts
+safelist: [
+ 'p-1',
+ () => ['m-1', 'm-2'],
+ (context) => {
+ const colors = Object.keys(context.theme.colors || {})
+ return colors.map(c => `bg-${c}-500`)
+ },
+]
+```
+
+### Common Use Cases
+
+```ts
+safelist: [
+ // Dynamic colors from CMS
+ () => ['primary', 'secondary'].flatMap(c => [
+ `bg-${c}`, `text-${c}`, `border-${c}`,
+ ]),
+
+ // Component variants
+ () => {
+ const variants = ['primary', 'danger']
+ const sizes = ['sm', 'md', 'lg']
+ return variants.flatMap(v => sizes.map(s => `btn-${v}-${s}`))
+ },
+]
+```
+
+## Blocklist
+
+Utilities never generated:
+
+```ts
+blocklist: [
+ 'p-1', // Exact match
+ /^p-[2-4]$/, // Regex
+]
+```
+
+### With Messages
+
+```ts
+blocklist: [
+ ['bg-red-500', { message: 'Use bg-red-600 instead' }],
+ [/^text-xs$/, { message: 'Use text-sm for accessibility' }],
+]
+```
+
+## Safelist vs Blocklist
+
+| Feature | Safelist | Blocklist |
+|---------|----------|-----------|
+| Purpose | Always include | Always exclude |
+| Strings | ✅ | ✅ |
+| Regex | ❌ | ✅ |
+| Functions | ✅ | ❌ |
+
+**Note:** Blocklist wins if utility is in both.
+
+## Best Practice
+
+Prefer static mappings over safelist:
+
+```ts
+// Better: UnoCSS extracts automatically
+const sizes = {
+ sm: 'text-sm p-2',
+ md: 'text-base p-4',
+}
+
+// Avoid: Large safelist
+safelist: ['text-sm', 'text-base', 'p-2', 'p-4']
+```
+
+
diff --git a/skills/unocss/references/core-shortcuts.md b/skills/unocss/references/core-shortcuts.md
new file mode 100644
index 0000000..9d53ea5
--- /dev/null
+++ b/skills/unocss/references/core-shortcuts.md
@@ -0,0 +1,89 @@
+---
+name: unocss-shortcuts
+description: Combine multiple utility rules into single shorthand classes
+---
+
+# UnoCSS Shortcuts
+
+Shortcuts combine multiple rules into a single shorthand, inspired by Windi CSS.
+
+## Static Shortcuts
+
+Define as an object mapping shortcut names to utility combinations:
+
+```ts
+shortcuts: {
+ // Multiple utilities combined
+ 'btn': 'py-2 px-4 font-semibold rounded-lg shadow-md',
+ 'btn-green': 'text-white bg-green-500 hover:bg-green-700',
+ // Single utility alias
+ 'red': 'text-red-100',
+}
+```
+
+Usage:
+```html
+<button class="btn btn-green">Click me</button>
+```
+
+## Dynamic Shortcuts
+
+Use RegExp matcher with function, similar to dynamic rules:
+
+```ts
+shortcuts: [
+ // Static shortcuts can be in array too
+ {
+ btn: 'py-2 px-4 font-semibold rounded-lg shadow-md',
+ },
+ // Dynamic shortcut
+ [/^btn-(.*)$/, ([, c]) => `bg-${c}-400 text-${c}-100 py-2 px-4 rounded-lg`],
+]
+```
+
+Now `btn-green` and `btn-red` generate:
+
+```css
+.btn-green {
+ padding: 0.5rem 1rem;
+ --un-bg-opacity: 1;
+ background-color: rgb(74 222 128 / var(--un-bg-opacity));
+ border-radius: 0.5rem;
+ --un-text-opacity: 1;
+ color: rgb(220 252 231 / var(--un-text-opacity));
+}
+```
+
+## Accessing Theme in Shortcuts
+
+Dynamic shortcuts receive context with theme access:
+
+```ts
+shortcuts: [
+ [/^badge-(.*)$/, ([, c], { theme }) => {
+ if (Object.keys(theme.colors).includes(c))
+ return `bg-${c}4:10 text-${c}5 rounded`
+ }],
+]
+```
+
+## Shortcuts Layer
+
+Shortcuts are output to the `shortcuts` layer by default. Configure with:
+
+```ts
+shortcutsLayer: 'my-shortcuts-layer'
+```
+
+## Key Points
+
+- Later shortcuts have higher priority
+- Shortcuts can reference other shortcuts
+- Dynamic shortcuts work like dynamic rules
+- Shortcuts are expanded at build time, not runtime
+- All variants work with shortcuts (`hover:btn`, `dark:btn`, etc.)
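+
+A sketch of the second point — shortcuts referencing other shortcuts:
+
+```ts
+shortcuts: {
+  'btn': 'py-2 px-4 rounded',
+  // `btn` here expands to the shortcut above
+  'btn-primary': 'btn bg-blue-500 text-white',
+}
+```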
+
+
diff --git a/skills/unocss/references/core-theme.md b/skills/unocss/references/core-theme.md
new file mode 100644
index 0000000..ba28098
--- /dev/null
+++ b/skills/unocss/references/core-theme.md
@@ -0,0 +1,172 @@
+---
+name: unocss-theme
+description: Theming system for colors, breakpoints, and design tokens
+---
+
+# UnoCSS Theme
+
+UnoCSS supports theming similar to Tailwind CSS / Windi CSS. The `theme` property is deep-merged with the default theme.
+
+## Basic Usage
+
+```ts
+theme: {
+ colors: {
+ veryCool: '#0000ff', // class="text-very-cool"
+ brand: {
+ primary: 'hsl(var(--hue, 217) 78% 51%)', // class="bg-brand-primary"
+ DEFAULT: '#942192' // class="bg-brand"
+ },
+ },
+}
+```
+
+## Using Theme in Rules
+
+Access theme values in dynamic rules:
+
+```ts
+rules: [
+ [/^text-(.*)$/, ([, c], { theme }) => {
+ if (theme.colors[c])
+ return { color: theme.colors[c] }
+ }],
+]
+```
+
+## Using Theme in Variants
+
+```ts
+variants: [
+ {
+ name: 'variant-name',
+ match(matcher, { theme }) {
+ // Access theme.breakpoints, theme.colors, etc.
+ },
+ },
+]
+```
+
+## Using Theme in Shortcuts
+
+```ts
+shortcuts: [
+ [/^badge-(.*)$/, ([, c], { theme }) => {
+ if (Object.keys(theme.colors).includes(c))
+ return `bg-${c}4:10 text-${c}5 rounded`
+ }],
+]
+```
+
+## Breakpoints
+
+**Warning:** Custom `breakpoints` object **overrides** the default, not merges.
+
+```ts
+theme: {
+ breakpoints: {
+ sm: '320px',
+ md: '640px',
+ },
+}
+```
+
+Only `sm:` and `md:` variants will be available.
+
+### Inherit Default Breakpoints
+
+Use `extendTheme` to merge with defaults:
+
+```ts
+extendTheme: (theme) => {
+ return {
+ ...theme,
+ breakpoints: {
+ ...theme.breakpoints,
+ sm: '320px',
+ md: '640px',
+ },
+ }
+}
+```
+
+**Note:** `verticalBreakpoints` works the same for vertical layout.
+
+### Breakpoint Sorting
+
+Breakpoints are sorted by size. Use consistent units to avoid errors:
+
+```ts
+theme: {
+ breakpoints: {
+ sm: '320px',
+ // Don't mix units - convert rem to px
+ // md: '40rem', // Bad
+ md: `${40 * 16}px`, // Good
+ lg: '960px',
+ },
+}
+```
+
+## ExtendTheme
+
+`extendTheme` lets you modify the merged theme object:
+
+### Mutate Theme
+
+```ts
+extendTheme: (theme) => {
+ theme.colors.veryCool = '#0000ff'
+ theme.colors.brand = {
+ primary: 'hsl(var(--hue, 217) 78% 51%)',
+ }
+}
+```
+
+### Replace Theme
+
+Return a new object to completely replace:
+
+```ts
+extendTheme: (theme) => {
+ return {
+ ...theme,
+ colors: {
+ ...theme.colors,
+ veryCool: '#0000ff',
+ },
+ }
+}
+```
+
+## Theme Differences in Presets
+
+### preset-wind3 vs preset-wind4
+
+| preset-wind3 | preset-wind4 |
+|--------------|--------------|
+| `fontFamily` | `font` |
+| `fontSize` | `text.fontSize` |
+| `lineHeight` | `text.lineHeight` or `leading` |
+| `letterSpacing` | `text.letterSpacing` or `tracking` |
+| `borderRadius` | `radius` |
+| `easing` | `ease` |
+| `breakpoints` | `breakpoint` |
+| `boxShadow` | `shadow` |
+| `transitionProperty` | `property` |
+
+## Common Theme Keys
+
+- `colors` - Color palette
+- `breakpoints` - Responsive breakpoints
+- `fontFamily` - Font stacks
+- `fontSize` - Text sizes
+- `spacing` - Spacing scale
+- `borderRadius` - Border radius values
+- `boxShadow` - Shadow definitions
+- `animation` - Animation keyframes and timing
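+
+A sketch combining several of these keys (the values are illustrative, not defaults):
+
+```ts
+theme: {
+  colors: { brand: '#942192' },
+  fontFamily: { sans: 'Inter, ui-sans-serif, system-ui' },
+  spacing: { gutter: '1.25rem' },
+  boxShadow: { card: '0 2px 8px rgb(0 0 0 / 0.1)' },
+}
+```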
+
+
diff --git a/skills/unocss/references/core-variants.md b/skills/unocss/references/core-variants.md
new file mode 100644
index 0000000..b6a2923
--- /dev/null
+++ b/skills/unocss/references/core-variants.md
@@ -0,0 +1,175 @@
+---
+name: unocss-variants
+description: Apply variations like hover:, dark:, responsive to rules
+---
+
+# UnoCSS Variants
+
+Variants apply modifications to utility rules, like `hover:`, `dark:`, or responsive prefixes.
+
+## How Variants Work
+
+When matching `hover:m-2`:
+
+1. `hover:m-2` is extracted from source
+2. Sent to all variants for matching
+3. `hover:` variant matches and returns `m-2`
+4. Result `m-2` continues to next variants
+5. Finally matches the rule `.m-2 { margin: 0.5rem; }`
+6. Variant transformation applied: `.hover\:m-2:hover { margin: 0.5rem; }`
+
+## Creating Custom Variants
+
+```ts
+variants: [
+ // hover: variant
+ (matcher) => {
+ if (!matcher.startsWith('hover:'))
+ return matcher
+ return {
+ // Remove prefix, pass to next variants/rules
+ matcher: matcher.slice(6),
+ // Modify the selector
+ selector: s => `${s}:hover`,
+ }
+ },
+],
+rules: [
+ [/^m-(\d)$/, ([, d]) => ({ margin: `${d / 4}rem` })],
+]
+```
+
+## Variant Return Object
+
+- `matcher` - The processed class name to pass forward
+- `selector` - Function to customize the CSS selector
+- `parent` - Wrapper like `@media`, `@supports`
+- `layer` - Specify output layer
+- `sort` - Control ordering
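+
+A sketch using `matcher` together with `parent` (a hypothetical `landscape:` variant, not built in):
+
+```ts
+variants: [
+  (matcher) => {
+    if (!matcher.startsWith('landscape:'))
+      return matcher
+    return {
+      // strip the prefix so the rest matches rules normally
+      matcher: matcher.slice('landscape:'.length),
+      // wrap the generated CSS in a media query
+      parent: '@media (orientation: landscape)',
+    }
+  },
+]
+```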
+
+## Built-in Variants (preset-wind3)
+
+### Pseudo-classes
+- `hover:`, `focus:`, `active:`, `visited:`
+- `first:`, `last:`, `odd:`, `even:`
+- `disabled:`, `checked:`, `required:`
+- `focus-within:`, `focus-visible:`
+
+### Pseudo-elements
+- `before:`, `after:`
+- `placeholder:`, `selection:`
+- `marker:`, `file:`
+
+### Responsive
+- `sm:`, `md:`, `lg:`, `xl:`, `2xl:`
+- `lt-sm:` (less than sm)
+- `at-lg:` (at lg only)
+
+### Dark Mode
+- `dark:` - Class-based dark mode (default)
+- `@dark:` - Media query dark mode
+
+### Group/Peer
+- `group-hover:`, `group-focus:`
+- `peer-checked:`, `peer-focus:`
+
+### Container Queries
+- `@container`, `@sm:`, `@md:`
+
+### Print
+- `print:`
+
+### Supports
+- `supports-[display:grid]:`
+
+### Aria
+- `aria-checked:`, `aria-disabled:`
+
+### Data Attributes
+- `data-[state=open]:`
+
+## Dark Mode Configuration
+
+### Class-based (default)
+```ts
+presetWind3({
+ dark: 'class'
+})
+```
+
+```html
+<!-- with class="dark" on an ancestor (e.g. <html>) -->
+<div class="dark:bg-gray-800"></div>
+```
+
+Generates: `.dark .dark\:bg-gray-800 { ... }`
+
+### Media Query
+```ts
+presetWind3({
+ dark: 'media'
+})
+```
+
+Generates: `@media (prefers-color-scheme: dark) { ... }`
+
+### Opt-in Media Query
+Use `@dark:` regardless of config:
+
+```html
+<div class="@dark:bg-gray-800"></div>
+```
+
+### Custom Selectors
+```ts
+presetWind3({
+ dark: {
+ light: '.light-mode',
+ dark: '.dark-mode',
+ }
+})
+```
+
+## CSS @layer Variant
+
+Native CSS `@layer` support:
+
+```html
+<div class="layer-foo:p-4"></div>
+```
+
+Generates:
+```css
+@layer foo {
+ .layer-foo\:p-4 { padding: 1rem; }
+}
+```
+
+## Breakpoint Differences from Windi CSS
+
+| Windi CSS | UnoCSS |
+|-----------|--------|
+| `<xl:p-1` | `lt-xl:p-1` |
+
+## Media Hover (Experimental)
+
+Addresses sticky hover on touch devices:
+
+```html
+<div class="@hover-text-red">Hover me</div>
+```
+
+Generates:
+```css
+@media (hover: hover) and (pointer: fine) {
+ .\@hover-text-red:hover { color: rgb(248 113 113); }
+}
+```
+
+
diff --git a/skills/unocss/references/integrations-nuxt.md b/skills/unocss/references/integrations-nuxt.md
new file mode 100644
index 0000000..af53921
--- /dev/null
+++ b/skills/unocss/references/integrations-nuxt.md
@@ -0,0 +1,199 @@
+---
+name: unocss-nuxt-integration
+description: UnoCSS module for Nuxt applications
+---
+
+# UnoCSS Nuxt Integration
+
+The official Nuxt module for UnoCSS.
+
+## Installation
+
+```bash
+pnpm add -D unocss @unocss/nuxt
+```
+
+Add to Nuxt config:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: [
+ '@unocss/nuxt',
+ ],
+})
+```
+
+Create config file:
+
+```ts
+// uno.config.ts
+import { defineConfig, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ ],
+})
+```
+
+**Note:** The `uno.css` entry is automatically injected by the module.
+
+## Support Status
+
+| Build Tool | Nuxt 2 | Nuxt Bridge | Nuxt 3 |
+|------------|--------|-------------|--------|
+| Webpack Dev | ✅ | ✅ | 🚧 |
+| Webpack Build | ✅ | ✅ | ✅ |
+| Vite Dev | - | ✅ | ✅ |
+| Vite Build | - | ✅ | ✅ |
+
+## Configuration
+
+### Using uno.config.ts (Recommended)
+
+Use a dedicated config file for best IDE support:
+
+```ts
+// uno.config.ts
+import { defineConfig, presetWind3, presetIcons } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetIcons(),
+ ],
+ shortcuts: {
+ 'btn': 'py-2 px-4 font-semibold rounded-lg',
+ },
+})
+```
+
+### Nuxt Layers Support
+
+Enable automatic config merging from Nuxt layers:
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ unocss: {
+ nuxtLayers: true,
+ },
+})
+```
+
+Then in your root config:
+
+```ts
+// uno.config.ts
+import config from './.nuxt/uno.config.mjs'
+
+export default config
+```
+
+Or extend the merged config:
+
+```ts
+// uno.config.ts
+import { mergeConfigs } from '@unocss/core'
+import config from './.nuxt/uno.config.mjs'
+
+export default mergeConfigs([config, {
+ // Your overrides
+ shortcuts: {
+ 'custom': 'text-red-500',
+ },
+}])
+```
+
+## Common Setup Example
+
+```ts
+// nuxt.config.ts
+export default defineNuxtConfig({
+ modules: [
+ '@unocss/nuxt',
+ ],
+})
+```
+
+```ts
+// uno.config.ts
+import {
+ defineConfig,
+ presetAttributify,
+ presetIcons,
+ presetTypography,
+ presetWebFonts,
+ presetWind3,
+ transformerDirectives,
+ transformerVariantGroup,
+} from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetAttributify(),
+ presetIcons({
+ scale: 1.2,
+ }),
+ presetTypography(),
+ presetWebFonts({
+ fonts: {
+ sans: 'DM Sans',
+ mono: 'DM Mono',
+ },
+ }),
+ ],
+ transformers: [
+ transformerDirectives(),
+ transformerVariantGroup(),
+ ],
+ shortcuts: [
+ ['btn', 'px-4 py-1 rounded inline-block bg-teal-600 text-white cursor-pointer hover:bg-teal-700 disabled:cursor-default disabled:bg-gray-600 disabled:opacity-50'],
+ ],
+})
+```
+
+## Usage in Components
+
+```vue
+<template>
+  <div class="p-4">
+    <h1 class="text-2xl font-bold">
+      Hello UnoCSS!
+    </h1>
+    <button class="btn">
+      Click me
+    </button>
+  </div>
+</template>
+```
+
+With attributify mode:
+
+```vue
+<template>
+  <div p-4>
+    <h1 text-2xl font-bold>
+      Hello UnoCSS!
+    </h1>
+  </div>
+</template>
+```
+
+## Inspector
+
+In development, visit `/_nuxt/__unocss` to access the UnoCSS inspector.
+
+## Key Differences from Vite
+
+- No need to import `virtual:uno.css` - automatically injected
+- Config file discovery works the same
+- All Vite plugin features available
+- Nuxt layers config merging available
+
+
diff --git a/skills/unocss/references/integrations-vite.md b/skills/unocss/references/integrations-vite.md
new file mode 100644
index 0000000..d3f51a4
--- /dev/null
+++ b/skills/unocss/references/integrations-vite.md
@@ -0,0 +1,283 @@
+---
+name: unocss-vite-integration
+description: Setting up UnoCSS with Vite and framework-specific tips
+---
+
+# UnoCSS Vite Integration
+
+The Vite plugin is the most common way to use UnoCSS.
+
+## Installation
+
+```bash
+pnpm add -D unocss
+```
+
+```ts
+// vite.config.ts
+import UnoCSS from 'unocss/vite'
+import { defineConfig } from 'vite'
+
+export default defineConfig({
+ plugins: [
+ UnoCSS(),
+ ],
+})
+```
+
+Create config file:
+
+```ts
+// uno.config.ts
+import { defineConfig, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ ],
+})
+```
+
+Add to entry:
+
+```ts
+// main.ts
+import 'virtual:uno.css'
+```
+
+## Modes
+
+### global (default)
+
+Standard mode - generates global CSS injected via `uno.css` import.
+
+```ts
+import 'virtual:uno.css'
+```
+
+### vue-scoped
+
+Injects generated CSS into each Vue SFC's `<style scoped>` block for isolation:
+
+```vue
+<style scoped>
+...
+</style>
+```
+
+### per-module (experimental)
+
+Generates CSS per module with optional scoping.
+
+### dist-chunk (experimental)
+
+Generates CSS per chunk on build, useful for multi-page apps (MPA).
+
+## DevTools Support
+
+Edit classes directly in browser DevTools:
+
+```ts
+import 'virtual:uno.css'
+import 'virtual:unocss-devtools'
+```
+
+**Warning:** Uses MutationObserver to detect changes. Dynamic classes from scripts will also be included.
+
+## Framework-Specific Setup
+
+### React
+
+```ts
+// vite.config.ts
+import React from '@vitejs/plugin-react'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ UnoCSS(), // Must be before React when using attributify
+ React(),
+ ],
+}
+```
+
+**Note:** Remove `tsc` from build script if using `@unocss/preset-attributify`.
+
+### Vue
+
+Works out of the box with `@vitejs/plugin-vue`.
+
+### Svelte
+
+```ts
+import { svelte } from '@sveltejs/vite-plugin-svelte'
+import extractorSvelte from '@unocss/extractor-svelte'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ UnoCSS({
+ extractors: [extractorSvelte()],
+ }),
+ svelte(),
+ ],
+}
+```
+
+Supports `class:foo` and `class:foo={bar}` syntax.
+
+### SvelteKit
+
+Same as Svelte, use `sveltekit()` from `@sveltejs/kit/vite`.
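+
+A sketch — same extractor as plain Svelte, with `sveltekit()` in place of `svelte()`:
+
+```ts
+// vite.config.ts
+import { sveltekit } from '@sveltejs/kit/vite'
+import extractorSvelte from '@unocss/extractor-svelte'
+import UnoCSS from 'unocss/vite'
+
+export default {
+  plugins: [
+    UnoCSS({
+      extractors: [extractorSvelte()],
+    }),
+    sveltekit(),
+  ],
+}
+```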
+
+### Solid
+
+```ts
+import UnoCSS from 'unocss/vite'
+import solidPlugin from 'vite-plugin-solid'
+
+export default {
+ plugins: [
+ UnoCSS(),
+ solidPlugin(),
+ ],
+}
+```
+
+### Preact
+
+```ts
+import Preact from '@preact/preset-vite'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ UnoCSS(),
+ Preact(),
+ ],
+}
+```
+
+### Elm
+
+```ts
+import Elm from 'vite-plugin-elm'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ Elm(),
+ UnoCSS(),
+ ],
+}
+```
+
+### Web Components (Lit)
+
+```ts
+UnoCSS({
+ mode: 'shadow-dom',
+ shortcuts: [
+ { 'cool-blue': 'bg-blue-500 text-white' },
+ ],
+})
+```
+
+```ts
+// my-element.ts
+import { css, LitElement } from 'lit'
+import { customElement } from 'lit/decorators.js'
+@customElement('my-element')
+export class MyElement extends LitElement {
+ static styles = css`
+ :host { ... }
+ @unocss-placeholder
+ `
+}
+```
+
+Supports `part-[<name>]:` variants for `::part` styling.
+
+## Inspector
+
+Visit `http://localhost:5173/__unocss` in dev mode to:
+
+- Inspect generated CSS rules
+- See applied classes per file
+- Test utilities in REPL
+
+## Legacy Browser Support
+
+With `@vitejs/plugin-legacy`:
+
+```ts
+import legacy from '@vitejs/plugin-legacy'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ UnoCSS({
+ legacy: {
+ renderModernChunks: false,
+ },
+ }),
+ legacy({
+ targets: ['defaults', 'not IE 11'],
+ renderModernChunks: false,
+ }),
+ ],
+}
+```
+
+## VanillaJS / TypeScript
+
+By default, `.js` and `.ts` files are not extracted. Configure to include:
+
+```ts
+// uno.config.ts
+export default defineConfig({
+ content: {
+ pipeline: {
+ include: [
+ /\.(vue|svelte|[jt]sx|html)($|\?)/,
+ 'src/**/*.{js,ts}',
+ ],
+ },
+ },
+})
+```
+
+Or use magic comment in files:
+
+```ts
+// @unocss-include
+export const classes = {
+ active: 'bg-primary text-white',
+}
+```
+
+
diff --git a/skills/unocss/references/preset-attributify.md b/skills/unocss/references/preset-attributify.md
new file mode 100644
index 0000000..4b41e14
--- /dev/null
+++ b/skills/unocss/references/preset-attributify.md
@@ -0,0 +1,142 @@
+---
+name: preset-attributify
+description: Group utilities in HTML attributes instead of class
+---
+
+# Preset Attributify
+
+Group utilities in HTML attributes for better readability.
+
+## Installation
+
+```ts
+import { defineConfig, presetAttributify, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetAttributify(),
+ ],
+})
+```
+
+## Basic Usage
+
+Instead of long class strings:
+
+```html
+<button class="bg-blue-400 hover:bg-blue-500 text-sm text-white font-mono font-light py-2 px-4 rounded border-2 border-blue-200 dark:bg-blue-500 dark:hover:bg-blue-600">
+  Button
+</button>
+```
+
+Group by prefix in attributes:
+
+```html
+<button
+  bg="blue-400 hover:blue-500 dark:blue-500 dark:hover:blue-600"
+  text="sm white"
+  font="mono light"
+  p="y-2 x-4"
+  border="2 rounded blue-200"
+>
+  Button
+</button>
+```
+
+## Prefix Self-Referencing
+
+For utilities matching their prefix (`flex`, `grid`, `border`), use `~`:
+
+```html
+<button class="border border-red">
+Button
+</button>
+
+<button border="~ red">
+Button
+</button>
+```
+
+## Valueless Attributify
+
+Use utilities as boolean attributes:
+
+```html
+<div m-2 rounded text-teal-400>my text</div>
+```
+
+## Handling Property Conflicts
+
+When attribute names conflict with HTML properties:
+
+```html
+<a text="red">This conflicts with links' `text` prop</a>
+<!-- with prefix -->
+<a un-text="red">Text color to red</a>
+```
+
+### Enforce Prefix
+
+```ts
+presetAttributify({
+ prefix: 'un-',
+ prefixedOnly: true,
+})
+```
+
+## Options
+
+```ts
+presetAttributify({
+ strict: false, // Only generate CSS for attributify
+ prefix: 'un-', // Attribute prefix
+ prefixedOnly: false, // Require prefix for all
+ nonValuedAttribute: true, // Support valueless attributes
+ ignoreAttributes: [], // Attributes to ignore
+ trueToNonValued: false, // Treat value="true" as valueless
+})
+```
+
+## TypeScript Support
+
+### Vue 3
+
+```ts
+// html.d.ts
+declare module '@vue/runtime-dom' {
+ interface HTMLAttributes { [key: string]: any }
+}
+declare module '@vue/runtime-core' {
+ interface AllowedComponentProps { [key: string]: any }
+}
+export {}
+```
+
+### React
+
+```ts
+import type { AttributifyAttributes } from '@unocss/preset-attributify'
+
+declare module 'react' {
+  interface HTMLAttributes<T> extends AttributifyAttributes {}
+}
+```
+
+## JSX Support
+
+For JSX where `<div foo>` compiles to `<div foo={true}>`:
+
+```ts
+import { transformerAttributifyJsx } from 'unocss'
+
+export default defineConfig({
+ transformers: [
+ transformerAttributifyJsx(),
+ ],
+})
+```
+
+**Important:** Only use attributify if `uno.config.*` shows `presetAttributify()` is enabled.
+
+
diff --git a/skills/unocss/references/preset-icons.md b/skills/unocss/references/preset-icons.md
new file mode 100644
index 0000000..a668297
--- /dev/null
+++ b/skills/unocss/references/preset-icons.md
@@ -0,0 +1,184 @@
+---
+name: preset-icons
+description: Pure CSS icons using Iconify with any icon set
+---
+
+# Preset Icons
+
+Use any icon as a pure CSS class, powered by Iconify.
+
+## Installation
+
+```bash
+pnpm add -D @unocss/preset-icons @iconify-json/[collection-name]
+```
+
+Example: `@iconify-json/mdi` for Material Design Icons, `@iconify-json/carbon` for Carbon icons.
+
+```ts
+import { defineConfig, presetIcons } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetIcons(),
+ ],
+})
+```
+
+## Usage
+
+Two naming conventions:
+- `i-<collection>-<icon>` → `i-ph-anchor-simple-thin`
+- `i-<collection>:<icon>` → `i-ph:anchor-simple-thin`
+
+```html
+<!-- A basic anchor icon from Phosphor -->
+<div class="i-ph-anchor-simple-thin" />
+<!-- An orange alarm from Material Design Icons -->
+<div class="i-mdi-alarm text-orange-400" />
+<!-- A large Vue logo -->
+<div class="i-logos-vue text-3xl" />
+<!-- Sun in light mode, Moon in dark mode, from Carbon -->
+<button class="i-carbon-sun dark:i-carbon-moon" />
+<!-- Twemoji that turns from grin to tears on hover -->
+<div class="i-twemoji-grinning-face-with-smiling-eyes hover:i-twemoji-face-with-tears-of-joy" />
+```
+
+Browse icons at [icones.js.org](https://icones.js.org/) or [Iconify](https://icon-sets.iconify.design/).
+
+## Icon Modes
+
+Icons automatically choose between `mask` (monochrome) and `background-img` (colorful).
+
+### Force Specific Mode
+
+- `?mask` - Render as mask (colorable with `currentColor`)
+- `?bg` - Render as background image (preserves original colors)
+
+```html
+<!-- Force mask mode: colorable via currentColor -->
+<div class="i-carbon-sun?mask text-red-400" />
+
+<!-- Force background mode: keeps original colors -->
+<div class="i-logos-vue?bg" />
+```
+
+## Options
+
+```ts
+presetIcons({
+ scale: 1.2, // Scale relative to font size
+ prefix: 'i-', // Class prefix (default)
+ mode: 'auto', // 'auto' | 'mask' | 'bg'
+ extraProperties: {
+ 'display': 'inline-block',
+ 'vertical-align': 'middle',
+ },
+ warn: true, // Warn on missing icons
+ autoInstall: true, // Auto-install missing icon sets
+ cdn: 'https://esm.sh/', // CDN for browser usage
+})
+```
+
+## Custom Icon Collections
+
+### Inline SVGs
+
+```ts
+presetIcons({
+ collections: {
+ custom: {
+      circle: '<svg viewBox="0 0 120 120"><circle cx="60" cy="60" r="50"></circle></svg>',
+ },
+ }
+})
+```
+
+Usage: `<div class="i-custom-circle" />`
+
+### File System Loader
+
+```ts
+import { FileSystemIconLoader } from '@iconify/utils/lib/loader/node-loaders'
+
+presetIcons({
+ collections: {
+ 'my-icons': FileSystemIconLoader(
+ './assets/icons',
+ svg => svg.replace(/#fff/, 'currentColor')
+ ),
+ }
+})
+```
+
+### Dynamic Import (Browser)
+
+```ts
+import presetIcons from '@unocss/preset-icons/browser'
+
+presetIcons({
+ collections: {
+ carbon: () => import('@iconify-json/carbon/icons.json').then(i => i.default),
+ }
+})
+```
+
+## Icon Customization
+
+```ts
+presetIcons({
+ customizations: {
+ // Transform SVG
+ transform(svg, collection, icon) {
+ return svg.replace(/#fff/, 'currentColor')
+ },
+ // Global sizing
+ customize(props) {
+ props.width = '2em'
+ props.height = '2em'
+ return props
+ },
+ // Per-collection
+ iconCustomizer(collection, icon, props) {
+ if (collection === 'mdi') {
+ props.width = '2em'
+ }
+ }
+ }
+})
+```
+
+## CSS Directive
+
+Use `icon()` in CSS (requires transformer-directives):
+
+```css
+.icon {
+ background-image: icon('i-carbon-sun');
+}
+.icon-colored {
+ background-image: icon('i-carbon-moon', '#fff');
+}
+```
+
+## Accessibility
+
+```html
+<!-- Decorative icon: hide from assistive technology -->
+<div class="i-carbon-arrow-right" aria-hidden="true" />
+
+<!-- Standalone icon: give it an accessible name -->
+<span class="i-carbon-user" role="img" aria-label="User profile" />
+
+<!-- Icon inside a labeled link -->
+<a href="/profile">
+  <span class="i-carbon-user" aria-hidden="true"></span>
+  My Profile
+</a>
+```
+
+
diff --git a/skills/unocss/references/preset-mini.md b/skills/unocss/references/preset-mini.md
new file mode 100644
index 0000000..9fdb088
--- /dev/null
+++ b/skills/unocss/references/preset-mini.md
@@ -0,0 +1,158 @@
+---
+name: preset-mini
+description: Minimal preset with essential utilities for UnoCSS
+---
+
+# Preset Mini
+
+The minimal preset with only essential rules and variants. Good starting point for custom presets.
+
+## Installation
+
+```ts
+import { defineConfig, presetMini } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetMini(),
+ ],
+})
+```
+
+## What's Included
+
+Subset of `preset-wind3` with essential utilities aligned to CSS properties:
+
+- Basic spacing (margin, padding)
+- Display (flex, grid, block, etc.)
+- Positioning (absolute, relative, fixed)
+- Sizing (width, height)
+- Colors (text, background, border)
+- Typography basics (font-size, font-weight)
+- Borders and border-radius
+- Basic transforms and transitions
+
+## What's NOT Included
+
+Opinionated or complex Tailwind utilities:
+- `container`
+- Complex animations
+- Gradients
+- Advanced typography
+- Prose classes
+
+## Use Cases
+
+1. **Building custom presets** - Start with mini and add only what you need
+2. **Minimal bundle size** - When you only need basic utilities
+3. **Learning** - Understand UnoCSS core without Tailwind complexity
+
+## Dark Mode
+
+Same as preset-wind3:
+
+```ts
+presetMini({
+ dark: 'class' // or 'media'
+})
+```
+
+```html
+<div class="dark:bg-red:10" />
+```
+
+Class-based:
+```css
+.dark .dark\:bg-red\:10 {
+ background-color: rgb(248 113 113 / 0.1);
+}
+```
+
+Media query:
+```css
+@media (prefers-color-scheme: dark) {
+ .dark\:bg-red\:10 {
+ background-color: rgb(248 113 113 / 0.1);
+ }
+}
+```
+
+## CSS @layer Variant
+
+Native CSS layer support:
+
+```html
+<div class="layer-foo:p4" />
+```
+
+```css
+@layer foo {
+ .layer-foo\:p4 {
+ padding: 1rem;
+ }
+}
+```
+
+## Theme Customization
+
+```ts
+presetMini({
+ theme: {
+ colors: {
+ veryCool: '#0000ff',
+ brand: {
+ primary: 'hsl(var(--hue, 217) 78% 51%)',
+ }
+ },
+ }
+})
+```
+
+**Note:** `breakpoints` property is overridden, not merged.
+
+## Options
+
+```ts
+presetMini({
+ // Dark mode: 'class' | 'media' | { light: string, dark: string }
+ dark: 'class',
+
+ // Generate [group=""] instead of .group for attributify
+ attributifyPseudo: false,
+
+ // CSS variable prefix (default: 'un-')
+ variablePrefix: 'un-',
+
+ // Utility prefix
+ prefix: undefined,
+
+ // Preflight generation: true | false | 'on-demand'
+ preflight: true,
+})
+```
+
+## Building on Mini
+
+Create custom preset extending mini:
+
+```ts
+import { presetMini } from 'unocss'
+import type { Preset } from 'unocss'
+
+export const myPreset: Preset = {
+ name: 'my-preset',
+ presets: [presetMini()],
+ rules: [
+ // Add custom rules
+ ['card', { 'border-radius': '8px', 'box-shadow': '0 2px 8px rgba(0,0,0,0.1)' }],
+ ],
+ shortcuts: {
+ 'btn': 'px-4 py-2 rounded bg-blue-500 text-white',
+ },
+}
+```
+
+
diff --git a/skills/unocss/references/preset-rem-to-px.md b/skills/unocss/references/preset-rem-to-px.md
new file mode 100644
index 0000000..59a4df4
--- /dev/null
+++ b/skills/unocss/references/preset-rem-to-px.md
@@ -0,0 +1,97 @@
+---
+name: preset-rem-to-px
+description: Convert rem units to px for utilities
+---
+
+# Preset Rem to Px
+
+Converts `rem` units to `px` in generated utilities.
+
+## Installation
+
+```ts
+import { defineConfig, presetRemToPx, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetRemToPx(),
+ ],
+})
+```
+
+## What It Does
+
+Transforms all rem values to px:
+
+```html
+<div class="p-4" />
+```
+
+Without preset:
+```css
+.p-4 { padding: 1rem; }
+```
+
+With preset:
+```css
+.p-4 { padding: 16px; }
+```
+
+## Use Cases
+
+- Projects requiring pixel-perfect designs
+- Environments where rem doesn't work well
+- Consistency with pixel-based design systems
+- Email templates (better compatibility)
+
+## Options
+
+```ts
+presetRemToPx({
+ // Base font size for conversion (default: 16)
+ baseFontSize: 16,
+})
+```
+
+Custom base:
+
+```ts
+presetRemToPx({
+ baseFontSize: 14, // 1rem = 14px
+})
+```
+
+## With Preset Wind4
+
+**Note:** `presetRemToPx` is not needed with `preset-wind4`. Use the built-in processor instead:
+
+```ts
+import { createRemToPxProcessor } from '@unocss/preset-wind4/utils'
+
+export default defineConfig({
+ presets: [
+ presetWind4({
+ preflights: {
+ theme: {
+ process: createRemToPxProcessor(),
+ }
+ },
+ }),
+ ],
+ // Also apply to utilities
+ postprocess: [createRemToPxProcessor()],
+})
+```
+
+## Important Notes
+
+- Order matters: place after the preset that generates rem values
+- Affects all utilities with rem units
+- Theme values in rem are also converted
+
+
diff --git a/skills/unocss/references/preset-tagify.md b/skills/unocss/references/preset-tagify.md
new file mode 100644
index 0000000..38ef837
--- /dev/null
+++ b/skills/unocss/references/preset-tagify.md
@@ -0,0 +1,134 @@
+---
+name: preset-tagify
+description: Use utilities as HTML tag names
+---
+
+# Preset Tagify
+
+Use CSS utilities directly as HTML tag names.
+
+## Installation
+
+```ts
+import { defineConfig, presetTagify } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetTagify(),
+ ],
+})
+```
+
+## Basic Usage
+
+Instead of:
+
+```html
+<span class="text-red"> red text </span>
+<div class="flex"> flexbox </div>
+```
+
+Use tag names directly:
+
+```html
+<text-red> red text </text-red>
+<flex> flexbox </flex>
+```
+
+Works exactly the same!
+
+## With Prefix
+
+```ts
+presetTagify({
+ prefix: 'un-'
+})
+```
+
+```html
+<!-- this will be matched -->
+<un-flex></un-flex>
+
+<!-- this will not -->
+<flex></flex>
+```
+
+## Extra Properties
+
+Add CSS properties to matched tags:
+
+```ts
+presetTagify({
+ // Add display: inline-block to icons
+ extraProperties: matched => matched.startsWith('i-')
+ ? { display: 'inline-block' }
+ : {},
+})
+```
+
+Or apply to all:
+
+```ts
+presetTagify({
+ extraProperties: { display: 'block' }
+})
+```
+
+## Options
+
+```ts
+presetTagify({
+ // Tag prefix
+ prefix: '',
+
+ // Excluded tags (won't be processed)
+ excludedTags: ['b', /^h\d+$/, 'table'],
+
+ // Extra CSS properties
+ extraProperties: {},
+
+ // Enable default extractor
+ defaultExtractor: true,
+})
+```
+
+## Excluded Tags
+
+By default, these tags are excluded:
+- `b` (bold)
+- `h1` through `h6` (headings)
+- `table`
+
+Add your own:
+
+```ts
+presetTagify({
+ excludedTags: [
+ 'b',
+ /^h\d+$/,
+ 'table',
+ 'article', // Add custom exclusions
+ /^my-/, // Exclude tags starting with 'my-'
+ ],
+})
+```
+
+## Use Cases
+
+- Quick prototyping
+- Cleaner HTML for simple pages
+- Icon embedding: `<i-carbon-sun />`
+- Semantic-like styling: `<flex>`, `<text-red>`
+
+## Limitations
+
+- Custom element names must contain a hyphen (HTML spec)
+- Some frameworks may not support all custom elements
+- Utilities without hyphens need the prefix option
+
+
diff --git a/skills/unocss/references/preset-typography.md b/skills/unocss/references/preset-typography.md
new file mode 100644
index 0000000..cd19e14
--- /dev/null
+++ b/skills/unocss/references/preset-typography.md
@@ -0,0 +1,95 @@
+---
+name: preset-typography
+description: Prose classes for typographic defaults on vanilla HTML content
+---
+
+# Preset Typography
+
+Prose classes for adding typographic defaults to vanilla HTML content.
+
+## Installation
+
+```ts
+import { defineConfig, presetTypography, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(), // Required!
+ presetTypography(),
+ ],
+})
+```
+
+## Basic Usage
+
+```html
+<article class="prose">
+  <h1>My Article</h1>
+  <p>This is styled with typographic defaults...</p>
+</article>
+```
+
+## Sizes
+
+```html
+<article class="prose prose-sm">Small</article>
+<article class="prose">Base (default)</article>
+<article class="prose prose-lg">Large</article>
+<article class="prose prose-xl">Extra large</article>
+<article class="prose prose-2xl">2X large</article>
+```
+
+Responsive:
+```html
+<article class="prose md:prose-lg lg:prose-xl">
+  Responsive typography
+</article>
+```
+
+## Colors
+
+```html
+<article class="prose prose-gray">Gray (default)</article>
+<article class="prose prose-slate">Slate</article>
+<article class="prose prose-blue">Blue</article>
+```
+
+## Dark Mode
+
+```html
+<article class="prose dark:prose-invert">
+  Dark mode typography
+</article>
+```
+
+## Excluding Elements
+
+```html
+<article class="prose">
+  <p>Styled</p>
+  <div class="not-prose">
+    <p>Not styled</p>
+  </div>
+</article>
+```
+
+**Note:** `not-prose` only works as a class.
+
+## Options
+
+```ts
+presetTypography({
+ selectorName: 'prose', // Custom selector
+ cssVarPrefix: '--un-prose', // CSS variable prefix
+ important: false, // Make !important
+ cssExtend: {
+ 'code': { color: '#8b5cf6' },
+ 'a:hover': { color: '#f43f5e' },
+ },
+})
+```
+
+
diff --git a/skills/unocss/references/preset-web-fonts.md b/skills/unocss/references/preset-web-fonts.md
new file mode 100644
index 0000000..227b55a
--- /dev/null
+++ b/skills/unocss/references/preset-web-fonts.md
@@ -0,0 +1,91 @@
+---
+name: preset-web-fonts
+description: Easy Google Fonts and other web fonts integration
+---
+
+# Preset Web Fonts
+
+Easily use web fonts from Google Fonts and other providers.
+
+## Installation
+
+```ts
+import { defineConfig, presetWebFonts, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetWebFonts({
+ provider: 'google',
+ fonts: {
+ sans: 'Roboto',
+ mono: 'Fira Code',
+ },
+ }),
+ ],
+})
+```
+
+## Providers
+
+- `google` - Google Fonts (default)
+- `bunny` - Privacy-friendly alternative
+- `fontshare` - Quality fonts by ITF
+- `fontsource` - Self-hosted open source fonts
+- `coollabs` - Privacy-friendly drop-in replacement
+- `none` - Treat as system font
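+
+As a sketch, switching providers only changes the `provider` field (font names here are illustrative):
+
+```ts
+presetWebFonts({
+  provider: 'bunny', // privacy-friendly alternative to Google Fonts
+  fonts: {
+    sans: 'Roboto',
+    mono: 'Fira Code',
+  },
+})
+```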
+
+## Font Configuration
+
+```ts
+fonts: {
+ // Simple
+ sans: 'Roboto',
+
+ // Multiple (fallback)
+ mono: ['Fira Code', 'Fira Mono:400,700'],
+
+ // Detailed
+ lato: [
+ {
+ name: 'Lato',
+ weights: ['400', '700'],
+ italic: true,
+ },
+ {
+ name: 'sans-serif',
+ provider: 'none',
+ },
+ ],
+}
+```
+
+## Usage
+
+```html
+<p class="font-sans">Roboto</p>
+<p class="font-mono">Fira Code</p>
+```
+
+## Local Fonts
+
+Self-host fonts:
+
+```ts
+import { createLocalFontProcessor } from '@unocss/preset-web-fonts/local'
+
+presetWebFonts({
+ provider: 'none',
+ fonts: { sans: 'Roboto' },
+ processors: createLocalFontProcessor({
+ cacheDir: 'node_modules/.cache/unocss/fonts',
+ fontAssetsDir: 'public/assets/fonts',
+ fontServeBaseUrl: '/assets/fonts',
+ })
+})
+```
+
+
diff --git a/skills/unocss/references/preset-wind3.md b/skills/unocss/references/preset-wind3.md
new file mode 100644
index 0000000..04273f1
--- /dev/null
+++ b/skills/unocss/references/preset-wind3.md
@@ -0,0 +1,194 @@
+---
+name: preset-wind3
+description: Tailwind CSS / Windi CSS compatible preset for UnoCSS
+---
+
+# Preset Wind3
+
+The Tailwind CSS / Windi CSS compatible preset. Most commonly used preset for UnoCSS.
+
+## Installation
+
+```ts
+import { defineConfig, presetWind3 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ ],
+})
+```
+
+**Note:** `@unocss/preset-uno` and `@unocss/preset-wind` are deprecated and renamed to `@unocss/preset-wind3`.
+
+## Features
+
+- Full Tailwind CSS v3 compatibility
+- Dark mode (`dark:`, `@dark:`)
+- All responsive variants (`sm:`, `md:`, `lg:`, `xl:`, `2xl:`)
+- All standard utilities (flex, grid, spacing, colors, typography, etc.)
+- Animation support (includes Animate.css animations)
+
+## Dark Mode
+
+### Class-based (default)
+
+```html
+<div class="dark:bg-gray-800" />
+```
+
+Generates: `.dark .dark\:bg-gray-800 { ... }`
+
+### Media Query Based
+
+```ts
+presetWind3({
+ dark: 'media'
+})
+```
+
+Generates: `@media (prefers-color-scheme: dark) { ... }`
+
+### Opt-in Media Query
+
+Use `@dark:` regardless of config:
+
+```html
+<div class="@dark:bg-gray-800" />
+```
+
+## Options
+
+```ts
+presetWind3({
+ // Dark mode strategy
+ dark: 'class', // 'class' | 'media' | { light: '.light', dark: '.dark' }
+
+ // Generate pseudo selector as [group=""] instead of .group
+ attributifyPseudo: false,
+
+ // CSS custom properties prefix
+ variablePrefix: 'un-',
+
+ // Utils prefix
+ prefix: '',
+
+ // Generate preflight CSS
+ preflight: true, // true | false | 'on-demand'
+
+ // Mark all utilities as !important
+ important: false, // boolean | string (selector)
+})
+```
+
+### Important Option
+
+Make all utilities `!important`:
+
+```ts
+presetWind3({
+ important: true,
+})
+```
+
+Or scope with selector to increase specificity without `!important`:
+
+```ts
+presetWind3({
+ important: '#app',
+})
+```
+
+Output: `#app :is(.dark .dark\:bg-blue) { ... }`
+
+## Differences from Tailwind CSS
+
+### Quotes Not Supported
+
+Quotes inside class names don't work, due to how the extractor parses source files:
+
+```html
+<!-- ❌ Quotes break extraction -->
+<p class="before:content-['']">Text</p>
+
+<!-- ✅ Use the dedicated utility instead -->
+<p class="before:content-empty">Text</p>
+```
+
+### Background Position
+
+Use `position:` prefix for custom values:
+
+```html
+<!-- ❌ Tailwind arbitrary position -->
+<div class="bg-[center_top_1rem]"></div>
+
+<!-- ✅ UnoCSS: add the position: prefix -->
+<div class="bg-[position:center_top_1rem]"></div>
+```
+
+### Animations
+
+UnoCSS integrates Animate.css. Use `-alt` suffix for Animate.css versions when names conflict:
+
+- `animate-bounce` - Tailwind version
+- `animate-bounce-alt` - Animate.css version
+
+Custom animations:
+
+```ts
+theme: {
+ animation: {
+ keyframes: {
+ custom: '{0%, 100% { opacity: 0; } 50% { opacity: 1; }}',
+ },
+ durations: {
+ custom: '1s',
+ },
+ timingFns: {
+ custom: 'ease-in-out',
+ },
+ counts: {
+ custom: 'infinite',
+ },
+ }
+}
+```
+
+## Differences from Windi CSS
+
+| Windi CSS | UnoCSS |
+|-----------|--------|
+| `<sm:p-1` | `lt-sm:p-1` |
+| `@lg:p-1` | `at-lg:p-1` |
+| `>xl:p-1` | `xl:p-1` |
+
+Bracket syntax uses `_` instead of `,`:
+
+```html
+<!-- ❌ Windi CSS -->
+<div class="grid-cols-[1fr,10px,max-content]"></div>
+
+<!-- ✅ UnoCSS -->
+<div class="grid-cols-[1fr_10px_max-content]"></div>
+```
+
+## Experimental: Media Hover
+
+Addresses sticky hover on touch devices:
+
+```html
+<div class="@hover-text-red">Hover me</div>
+```
+
+Generates:
+```css
+@media (hover: hover) and (pointer: fine) {
+ .\@hover-text-red:hover { ... }
+}
+```
+
+
diff --git a/skills/unocss/references/preset-wind4.md b/skills/unocss/references/preset-wind4.md
new file mode 100644
index 0000000..1c6db5f
--- /dev/null
+++ b/skills/unocss/references/preset-wind4.md
@@ -0,0 +1,247 @@
+---
+name: preset-wind4
+description: Tailwind CSS v4 compatible preset with enhanced features
+---
+
+# Preset Wind4
+
+The Tailwind CSS v4 compatible preset. Enhances preset-wind3 with modern CSS features.
+
+## Installation
+
+```ts
+import { defineConfig, presetWind4 } from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind4(),
+ ],
+})
+```
+
+## Key Differences from Wind3
+
+### Built-in CSS Reset
+
+No need for `@unocss/reset` - reset is built-in:
+
+```ts
+// Remove these imports
+import '@unocss/reset/tailwind.css' // ❌ Not needed
+import '@unocss/reset/tailwind-compat.css' // ❌ Not needed
+
+// Enable in config
+presetWind4({
+ preflights: {
+ reset: true,
+ },
+})
+```
+
+### OKLCH Color Model
+
+Uses `oklch` for better color perception and contrast. Not compatible with `presetLegacyCompat`.
+
+### Theme CSS Variables
+
+Automatically generates CSS variables from theme:
+
+```css
+:root, :host {
+ --spacing: 0.25rem;
+ --font-sans: ui-sans-serif, system-ui, sans-serif;
+ --colors-black: #000;
+ --colors-white: #fff;
+ /* ... */
+}
+```
+
+### @property CSS Rules
+
+Uses `@property` for better browser optimization:
+
+```css
+@property --un-text-opacity {
+ syntax: '<percentage>';
+ inherits: false;
+ initial-value: 100%;
+}
+```
+
+### Theme Key Changes
+
+| preset-wind3 | preset-wind4 |
+|--------------|--------------|
+| `fontFamily` | `font` |
+| `fontSize` | `text.fontSize` |
+| `lineHeight` | `text.lineHeight` or `leading` |
+| `letterSpacing` | `text.letterSpacing` or `tracking` |
+| `borderRadius` | `radius` |
+| `easing` | `ease` |
+| `breakpoints` | `breakpoint` |
+| `verticalBreakpoints` | `verticalBreakpoint` |
+| `boxShadow` | `shadow` |
+| `transitionProperty` | `property` |
+| `container.maxWidth` | `containers.maxWidth` |
+| Size properties (`width`, `height`, etc.) | Unified to `spacing` |
+
+## Options
+
+```ts
+presetWind4({
+ preflights: {
+ // Built-in reset styles
+ reset: true,
+
+ // Theme CSS variables generation
+ theme: 'on-demand', // true | false | 'on-demand'
+
+ // @property CSS rules
+ property: true,
+ },
+})
+```
+
+### Theme Variable Processing
+
+Convert rem to px for theme variables:
+
+```ts
+import { createRemToPxProcessor } from '@unocss/preset-wind4/utils'
+
+presetWind4({
+ preflights: {
+ theme: {
+ mode: 'on-demand',
+ process: createRemToPxProcessor(),
+ }
+ },
+})
+
+// Also apply to utilities
+export default defineConfig({
+ postprocess: [createRemToPxProcessor()],
+})
+```
+
+### Property Layer Customization
+
+```ts
+presetWind4({
+ preflights: {
+ property: {
+ // Custom parent wrapper
+ parent: '@layer custom-properties',
+ // Custom selector
+ selector: ':where(*, ::before, ::after)',
+ },
+ },
+})
+```
+
+Remove `@supports` wrapper:
+
+```ts
+presetWind4({
+ preflights: {
+ property: {
+ parent: false,
+ },
+ },
+})
+```
+
+## Generated Layers
+
+| Layer | Description | Order |
+|-------|-------------|-------|
+| `properties` | CSS `@property` rules | -200 |
+| `theme` | Theme CSS variables | -150 |
+| `base` | Reset/preflight styles | -100 |
+
+## Theme.defaults
+
+Global default configuration for reset styles:
+
+```ts
+import type { Theme } from '@unocss/preset-wind4/theme'
+
+const defaults: Theme['default'] = {
+ transition: {
+ duration: '150ms',
+ timingFunction: 'cubic-bezier(0.4, 0, 0.2, 1)',
+ },
+ font: {
+ family: 'var(--font-sans)',
+ featureSettings: 'var(--font-sans--font-feature-settings)',
+ variationSettings: 'var(--font-sans--font-variation-settings)',
+ },
+ monoFont: {
+ family: 'var(--font-mono)',
+ // ...
+ },
+}
+```
+
+## Compatibility Notes
+
+### presetRemToPx
+
+Not needed - use built-in processor instead:
+
+```ts
+presetWind4({
+ preflights: {
+ theme: {
+ process: createRemToPxProcessor(),
+ }
+ },
+})
+```
+
+### presetLegacyCompat
+
+**Not compatible** with preset-wind4 due to `oklch` color model.
+
+## Migration from Wind3
+
+1. Update theme keys according to the table above
+2. Remove `@unocss/reset` imports
+3. Enable `preflights.reset: true`
+4. Test color outputs (oklch vs rgb)
+5. Update any custom theme extensions
+
+```ts
+// Before (wind3)
+theme: {
+ fontFamily: { sans: 'Roboto' },
+ fontSize: { lg: '1.125rem' },
+ breakpoints: { sm: '640px' },
+}
+
+// After (wind4)
+theme: {
+ font: { sans: 'Roboto' },
+ text: { lg: { fontSize: '1.125rem' } },
+ breakpoint: { sm: '640px' },
+}
+```
+
+## When to Use Wind4
+
+Choose **preset-wind4** when:
+- Starting a new project
+- Targeting modern browsers
+- Want built-in reset and CSS variables
+- Following Tailwind v4 conventions
+
+Choose **preset-wind3** when:
+- Need legacy browser support
+- Migrating from Tailwind v3
+- Using presetLegacyCompat
+- Want stable, proven preset
+
+
diff --git a/skills/unocss/references/transformer-attributify-jsx.md b/skills/unocss/references/transformer-attributify-jsx.md
new file mode 100644
index 0000000..c55d5c4
--- /dev/null
+++ b/skills/unocss/references/transformer-attributify-jsx.md
@@ -0,0 +1,156 @@
+---
+name: transformer-attributify-jsx
+description: Support valueless attributify in JSX/TSX
+---
+
+# Transformer Attributify JSX
+
+Fixes valueless attributify mode in JSX, where `<div foo>` compiles to `<div foo={true}>`.
+
+## The Problem
+
+In JSX, valueless attributes are transformed:
+
+```jsx
+// You write
+<div text-red text-center />
+
+// JSX compiles to
+<div text-red={true} text-center={true} />
+```
+
+The `={true}` breaks UnoCSS attributify detection.
+
+## Installation
+
+```ts
+import {
+ defineConfig,
+ presetAttributify,
+ transformerAttributifyJsx
+} from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetAttributify(),
+ ],
+ transformers: [
+ transformerAttributifyJsx(),
+ ],
+})
+```
+
+## How It Works
+
+The transformer converts JSX boolean attributes back to strings:
+
+```jsx
+// Input (your source)
+<div text-red text-center />
+
+// Output (transformed, so JSX compiles the attributes to "" instead of {true})
+<div text-red="" text-center="" />
+```
+
+Now UnoCSS can properly extract the attributify classes.
+
+## Options
+
+```ts
+transformerAttributifyJsx({
+ // Only transform specific attributes
+ // Default: transforms all that match attributify patterns
+ blocklist: ['text', 'font'],
+})
+```
+
+## When to Use
+
+Required when using:
+- React
+- Preact
+- Solid
+- Any JSX-based framework
+
+With valueless attributify syntax:
+
+```jsx
+// This needs the transformer
+<div text-red flex />
+
+// This works without transformer (has values)
+<div text="red" flex="~" />
+```
+
+## Framework Setup
+
+### React
+
+```ts
+// vite.config.ts
+import React from '@vitejs/plugin-react'
+import UnoCSS from 'unocss/vite'
+
+export default {
+ plugins: [
+ UnoCSS(), // Must be before React
+ React(),
+ ],
+}
+```
+
+```ts
+// uno.config.ts
+import {
+ defineConfig,
+ presetAttributify,
+ presetWind3,
+ transformerAttributifyJsx
+} from 'unocss'
+
+export default defineConfig({
+ presets: [
+ presetWind3(),
+ presetAttributify(),
+ ],
+ transformers: [
+ transformerAttributifyJsx(),
+ ],
+})
+```
+
+### Preact
+
+Same as React, use `@preact/preset-vite` or `@prefresh/vite`.
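+
+A minimal sketch for Preact, mirroring the React setup above:
+
+```ts
+// vite.config.ts
+import Preact from '@preact/preset-vite'
+import UnoCSS from 'unocss/vite'
+
+export default {
+  plugins: [
+    UnoCSS(), // before the framework plugin
+    Preact(),
+  ],
+}
+```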
+
+### Solid
+
+```ts
+import UnoCSS from 'unocss/vite'
+import solidPlugin from 'vite-plugin-solid'
+
+export default {
+ plugins: [
+ UnoCSS(),
+ solidPlugin(),
+ ],
+}
+```
+
+## TypeScript Support
+
+Add type declarations:
+
+```ts
+// shims.d.ts
+import type { AttributifyAttributes } from '@unocss/preset-attributify'
+
+declare module 'react' {
+ interface HTMLAttributes<T> extends AttributifyAttributes {}
+}
+```
+
+
diff --git a/skills/unocss/references/transformer-compile-class.md b/skills/unocss/references/transformer-compile-class.md
new file mode 100644
index 0000000..a1f08bf
--- /dev/null
+++ b/skills/unocss/references/transformer-compile-class.md
@@ -0,0 +1,128 @@
+---
+name: transformer-compile-class
+description: Compile multiple classes into one hashed class
+---
+
+# Transformer Compile Class
+
+Compiles multiple utility classes into a single hashed class for smaller HTML.
+
+## Installation
+
+```ts
+import { defineConfig, transformerCompileClass } from 'unocss'
+
+export default defineConfig({
+ transformers: [
+ transformerCompileClass(),
+ ],
+})
+```
+
+## Usage
+
+Add `:uno:` prefix to mark classes for compilation:
+
+```html
+<div class=":uno: text-center sm:text-left">
+  <div class=":uno: text-sm font-bold hover:text-red"/>
+</div>
+```
+
+## Generated CSS
+
+```css
+.uno-qlmcrp {
+ text-align: center;
+}
+.uno-0qw2gr {
+ font-size: 0.875rem;
+ line-height: 1.25rem;
+ font-weight: 700;
+}
+.uno-0qw2gr:hover {
+ --un-text-opacity: 1;
+ color: rgb(248 113 113 / var(--un-text-opacity));
+}
+@media (min-width: 640px) {
+ .uno-qlmcrp {
+ text-align: left;
+ }
+}
+```
+
+## Options
+
+```ts
+transformerCompileClass({
+ // Custom trigger string (default: ':uno:')
+ trigger: ':uno:',
+
+ // Custom class prefix (default: 'uno-')
+ classPrefix: 'uno-',
+
+ // Hash function for class names
+  hashFn: str => customHash(str), // supply your own hash function (customHash is hypothetical)
+
+ // Keep original classes alongside compiled
+ keepOriginal: false,
+})
+```
+
+## Use Cases
+
+- **Smaller HTML** - Reduce repetitive class strings
+- **Obfuscation** - Hide utility class names in production
+- **Performance** - Fewer class attributes to parse
+
+## ESLint Integration
+
+Enforce compile class usage across project:
+
+```json
+{
+ "rules": {
+ "@unocss/enforce-class-compile": "warn"
+ }
+}
+```
+
+This rule:
+- Warns when class attribute doesn't start with `:uno:`
+- Auto-fixes by adding the prefix
+
+Options:
+
+```json
+{
+ "rules": {
+ "@unocss/enforce-class-compile": ["warn", {
+ "prefix": ":uno:",
+ "enableFix": true
+ }]
+ }
+}
+```
+
+## Combining with Other Transformers
+
+```ts
+export default defineConfig({
+ transformers: [
+ transformerVariantGroup(), // Process variant groups first
+ transformerDirectives(), // Then directives
+ transformerCompileClass(), // Compile last
+ ],
+})
+```
+
+
diff --git a/skills/unocss/references/transformer-directives.md b/skills/unocss/references/transformer-directives.md
new file mode 100644
index 0000000..1842bad
--- /dev/null
+++ b/skills/unocss/references/transformer-directives.md
@@ -0,0 +1,157 @@
+---
+name: transformer-directives
+description: CSS directives @apply, @screen, theme(), and icon()
+---
+
+# Transformer Directives
+
+Enables `@apply`, `@screen`, `theme()`, and `icon()` directives in CSS.
+
+## Installation
+
+```ts
+import { defineConfig, transformerDirectives } from 'unocss'
+
+export default defineConfig({
+ transformers: [
+ transformerDirectives(),
+ ],
+})
+```
+
+## @apply
+
+Apply utility classes in CSS:
+
+```css
+.custom-btn {
+ @apply py-2 px-4 font-semibold rounded-lg;
+}
+
+/* With variants - use quotes */
+.custom-btn {
+ @apply 'hover:bg-blue-600 focus:ring-2';
+}
+```
+
+### CSS Custom Property Alternative
+
+For vanilla CSS compatibility:
+
+```css
+.custom-div {
+ --at-apply: text-center my-0 font-medium;
+}
+```
+
+Supported aliases: `--at-apply`, `--uno-apply`, `--uno`
+
+Configure aliases:
+
+```ts
+transformerDirectives({
+ applyVariable: ['--at-apply', '--uno-apply', '--uno'],
+ // or disable: applyVariable: false
+})
+```
+
+## @screen
+
+Create breakpoint media queries:
+
+```css
+.grid {
+ display: grid;
+ grid-template-columns: repeat(2, 1fr);
+}
+
+@screen sm {
+ .grid {
+ grid-template-columns: repeat(3, 1fr);
+ }
+}
+
+@screen lg {
+ .grid {
+ grid-template-columns: repeat(4, 1fr);
+ }
+}
+```
+
+### Breakpoint Variants
+
+```css
+/* Less than breakpoint */
+@screen lt-sm {
+ .item { display: none; }
+}
+
+/* At specific breakpoint only */
+@screen at-md {
+ .item { width: 50%; }
+}
+```
+
+## theme()
+
+Access theme values in CSS:
+
+```css
+.btn-blue {
+ background-color: theme('colors.blue.500');
+ padding: theme('spacing.4');
+ border-radius: theme('borderRadius.lg');
+}
+```
+
+Dot notation paths into your theme config.
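+
+As a sketch, the paths resolve against your (possibly extended) theme object; values here are illustrative:
+
+```ts
+// uno.config.ts — theme('colors.brand.primary') would resolve to '#3b82f6'
+export default defineConfig({
+  theme: {
+    colors: {
+      brand: {
+        primary: '#3b82f6',
+      },
+    },
+  },
+})
+```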
+
+## icon()
+
+Convert icon utility to SVG (requires preset-icons):
+
+```css
+.icon-sun {
+ background-image: icon('i-carbon-sun');
+}
+
+/* With custom color */
+.icon-moon {
+ background-image: icon('i-carbon-moon', '#fff');
+}
+
+/* Using theme color */
+.icon-alert {
+ background-image: icon('i-carbon-warning', 'theme("colors.red.500")');
+}
+```
+
+## Complete Example
+
+```css
+.card {
+ @apply rounded-lg shadow-md p-4;
+ background-color: theme('colors.white');
+}
+
+.card-header {
+ @apply 'font-bold text-lg border-b';
+ padding-bottom: theme('spacing.2');
+}
+
+@screen md {
+ .card {
+ @apply flex gap-4;
+ }
+}
+
+.card-icon {
+ background-image: icon('i-carbon-document');
+ @apply w-6 h-6;
+}
+```
+
+
diff --git a/skills/unocss/references/transformer-variant-group.md b/skills/unocss/references/transformer-variant-group.md
new file mode 100644
index 0000000..5fc9ce7
--- /dev/null
+++ b/skills/unocss/references/transformer-variant-group.md
@@ -0,0 +1,97 @@
+---
+name: transformer-variant-group
+description: Shorthand for grouping utilities with common prefixes
+---
+
+# Transformer Variant Group
+
+Enables shorthand syntax for grouping utilities with common prefixes.
+
+## Installation
+
+```ts
+import { defineConfig, transformerVariantGroup } from 'unocss'
+
+export default defineConfig({
+ transformers: [
+ transformerVariantGroup(),
+ ],
+})
+```
+
+## Usage
+
+Group multiple utilities under one variant prefix using parentheses:
+
+```html
+<!-- Grouped form -->
+<div class="hover:(bg-gray-400 font-medium) font-(light mono)">text</div>
+
+<!-- Expanded by the transformer to -->
+<div class="hover:bg-gray-400 hover:font-medium font-light font-mono">text</div>
+```
+
+## Examples
+
+### Hover States
+
+```html
+<button class="hover:(bg-blue-600 text-white scale-105)">
+  Hover me
+</button>
+```
+
+Expands to: `hover:bg-blue-600 hover:text-white hover:scale-105`
+
+### Dark Mode
+
+```html
+<div class="dark:(bg-gray-800 text-white)">
+  Dark content
+</div>
+```
+
+Expands to: `dark:bg-gray-800 dark:text-white`
+
+### Responsive
+
+```html
+<div class="md:(flex items-center gap-4)">
+  Responsive flex
+</div>
+```
+
+Expands to: `md:flex md:items-center md:gap-4`
+
+### Nested Groups
+
+```html
+<div class="lg:(hover:(bg-blue-500 text-white))">
+  Large screen hover
+</div>
+```
+
+Expands to: `lg:hover:bg-blue-500 lg:hover:text-white`
+
+### Multiple Prefixes
+
+```html
+<span class="text-(sm gray-600) font-(medium mono)">
+  Styled text
+</span>
+```
+
+Expands to: `text-sm text-gray-600 font-medium font-mono`
+
+## Key Points
+
+- Use parentheses `()` to group utilities
+- The prefix applies to all utilities inside the group
+- Can be combined with any variant (hover, dark, responsive, etc.)
+- Nesting is supported
+- Works in class attributes and other extraction sources
+
+
diff --git a/skills/using-git-worktrees/SKILL.md b/skills/using-git-worktrees/SKILL.md
new file mode 100644
index 0000000..e153843
--- /dev/null
+++ b/skills/using-git-worktrees/SKILL.md
@@ -0,0 +1,218 @@
+---
+name: using-git-worktrees
+description: Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification
+---
+
+# Using Git Worktrees
+
+## Overview
+
+Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.
+
+**Core principle:** Systematic directory selection + safety verification = reliable isolation.
+
+**Announce at start:** "I'm using the using-git-worktrees skill to set up an isolated workspace."
+
+## Directory Selection Process
+
+Follow this priority order:
+
+### 1. Check Existing Directories
+
+```bash
+# Check in priority order
+ls -d .worktrees 2>/dev/null # Preferred (hidden)
+ls -d worktrees 2>/dev/null # Alternative
+```
+
+**If found:** Use that directory. If both exist, `.worktrees` wins.
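The priority check above can be sketched as a small helper (`pick_worktree_dir` is a hypothetical name, not part of the skill):

```shell
# Sketch: choose the worktree directory per the priority above.
# Prints the chosen directory; prints nothing if neither exists.
pick_worktree_dir() {
  if [ -d .worktrees ]; then
    echo .worktrees   # hidden directory wins when both exist
  elif [ -d worktrees ]; then
    echo worktrees
  fi
}
```

An empty result means fall through to the CLAUDE.md check, then to asking the user.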
+
+### 2. Check CLAUDE.md
+
+```bash
+grep -i "worktree.*director" CLAUDE.md 2>/dev/null
+```
+
+**If preference specified:** Use it without asking.
+
+### 3. Ask User
+
+If no directory exists and no CLAUDE.md preference:
+
+```
+No worktree directory found. Where should I create worktrees?
+
+1. .worktrees/ (project-local, hidden)
+2. ~/.config/superpowers/worktrees/<project>/ (global location)
+
+Which would you prefer?
+```
+
+## Safety Verification
+
+### For Project-Local Directories (.worktrees or worktrees)
+
+**MUST verify directory is ignored before creating worktree:**
+
+```bash
+# Check if directory is ignored (respects local, global, and system gitignore)
+git check-ignore -q .worktrees 2>/dev/null || git check-ignore -q worktrees 2>/dev/null
+```
+
+**If NOT ignored:**
+
+Per Jesse's rule "Fix broken things immediately":
+1. Add appropriate line to .gitignore
+2. Commit the change
+3. Proceed with worktree creation
+
+**Why critical:** Prevents accidentally committing worktree contents to repository.
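The verify-and-fix steps can be combined into one sketch (`ensure_ignored` is a hypothetical helper, not part of the skill):

```shell
# Sketch: make sure the given directory is ignored; fix and commit immediately
# if it is not ("Fix broken things immediately").
ensure_ignored() {
  dir=$1
  if ! git check-ignore -q "$dir"; then
    echo "$dir/" >> .gitignore
    git add .gitignore
    git commit -m "chore: ignore $dir worktree directory"
  fi
}
```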
+
+### For Global Directory (~/.config/superpowers/worktrees)
+
+No .gitignore verification needed - outside project entirely.
+
+## Creation Steps
+
+### 1. Detect Project Name
+
+```bash
+project=$(basename "$(git rev-parse --show-toplevel)")
+```
+
+### 2. Create Worktree
+
+```bash
+# Determine full path
+case $LOCATION in
+ .worktrees|worktrees)
+ path="$LOCATION/$BRANCH_NAME"
+ ;;
+  "$HOME"/.config/superpowers/worktrees/*)
+    path="$HOME/.config/superpowers/worktrees/$project/$BRANCH_NAME"
+ ;;
+esac
+
+# Create worktree with new branch
+git worktree add "$path" -b "$BRANCH_NAME"
+cd "$path"
+```
+
+### 3. Run Project Setup
+
+Auto-detect and run appropriate setup:
+
+```bash
+# Node.js
+if [ -f package.json ]; then npm install; fi
+
+# Rust
+if [ -f Cargo.toml ]; then cargo build; fi
+
+# Python
+if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+if [ -f pyproject.toml ]; then poetry install; fi
+
+# Go
+if [ -f go.mod ]; then go mod download; fi
+```
+
+### 4. Verify Clean Baseline
+
+Run tests to ensure worktree starts clean:
+
+```bash
+# Examples - use project-appropriate command
+npm test
+cargo test
+pytest
+go test ./...
+```
+
+**If tests fail:** Report failures, ask whether to proceed or investigate.
+
+**If tests pass:** Report ready.
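The pass/fail gate can be sketched as a wrapper around whichever test command applies (`run_baseline` is a hypothetical name):

```shell
# Sketch: run the project's test command and report the baseline state.
run_baseline() {
  if "$@"; then
    echo "Baseline clean: ready to implement"
  else
    echo "Baseline FAILING: report and ask before proceeding"
  fi
}
```

For example, `run_baseline npm test` or `run_baseline go test ./...`.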
+
+### 5. Report Location
+
+```
+Worktree ready at <path>
+Tests passing (<N> tests, 0 failures)
+Ready to implement
+```
+
+## Quick Reference
+
+| Situation | Action |
+|-----------|--------|
+| `.worktrees/` exists | Use it (verify ignored) |
+| `worktrees/` exists | Use it (verify ignored) |
+| Both exist | Use `.worktrees/` |
+| Neither exists | Check CLAUDE.md → Ask user |
+| Directory not ignored | Add to .gitignore + commit |
+| Tests fail during baseline | Report failures + ask |
+| No package.json/Cargo.toml | Skip dependency install |
+
+## Common Mistakes
+
+### Skipping ignore verification
+
+- **Problem:** Worktree contents get tracked, pollute git status
+- **Fix:** Always use `git check-ignore` before creating project-local worktree
+
+### Assuming directory location
+
+- **Problem:** Creates inconsistency, violates project conventions
+- **Fix:** Follow priority: existing > CLAUDE.md > ask
+
+### Proceeding with failing tests
+
+- **Problem:** Can't distinguish new bugs from pre-existing issues
+- **Fix:** Report failures, get explicit permission to proceed
+
+### Hardcoding setup commands
+
+- **Problem:** Breaks on projects using different tools
+- **Fix:** Auto-detect from project files (package.json, etc.)
+
+## Example Workflow
+
+```
+You: I'm using the using-git-worktrees skill to set up an isolated workspace.
+
+[Check .worktrees/ - exists]
+[Verify ignored - git check-ignore confirms .worktrees/ is ignored]
+[Create worktree: git worktree add .worktrees/auth -b feature/auth]
+[Run npm install]
+[Run npm test - 47 passing]
+
+Worktree ready at /Users/jesse/myproject/.worktrees/auth
+Tests passing (47 tests, 0 failures)
+Ready to implement auth feature
+```
+
+## Red Flags
+
+**Never:**
+- Create worktree without verifying it's ignored (project-local)
+- Skip baseline test verification
+- Proceed with failing tests without asking
+- Assume directory location when ambiguous
+- Skip CLAUDE.md check
+
+**Always:**
+- Follow directory priority: existing > CLAUDE.md > ask
+- Verify directory is ignored for project-local
+- Auto-detect and run project setup
+- Verify clean test baseline
+
+## Integration
+
+**Called by:**
+- **brainstorming** (Phase 4) - REQUIRED when design is approved and implementation follows
+- **subagent-driven-development** - REQUIRED before executing any tasks
+- **executing-plans** - REQUIRED before executing any tasks
+- Any skill needing isolated workspace
+
+**Pairs with:**
+- **finishing-a-development-branch** - REQUIRED for cleanup after work complete
diff --git a/skills/using-git-worktrees/using-git-worktrees b/skills/using-git-worktrees/using-git-worktrees
new file mode 120000
index 0000000..f340a9d
--- /dev/null
+++ b/skills/using-git-worktrees/using-git-worktrees
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/using-git-worktrees/
\ No newline at end of file
diff --git a/skills/using-superpowers/SKILL.md b/skills/using-superpowers/SKILL.md
new file mode 100644
index 0000000..b227eec
--- /dev/null
+++ b/skills/using-superpowers/SKILL.md
@@ -0,0 +1,95 @@
+---
+name: using-superpowers
+description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
+---
+
+
+If you think there is even a 1% chance a skill might apply to what you are doing, you ABSOLUTELY MUST invoke the skill.
+
+IF A SKILL APPLIES TO YOUR TASK, YOU DO NOT HAVE A CHOICE. YOU MUST USE IT.
+
+This is not negotiable. This is not optional. You cannot rationalize your way out of this.
+
+
+## How to Access Skills
+
+**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
+
+**In other environments:** Check your platform's documentation for how skills are loaded.
+
+# Using Skills
+
+## The Rule
+
+**Invoke relevant or requested skills BEFORE any response or action.** Even a 1% chance a skill might apply means that you should invoke the skill to check. If an invoked skill turns out to be wrong for the situation, you don't need to use it.
+
+```dot
+digraph skill_flow {
+ "User message received" [shape=doublecircle];
+ "About to EnterPlanMode?" [shape=doublecircle];
+ "Already brainstormed?" [shape=diamond];
+ "Invoke brainstorming skill" [shape=box];
+ "Might any skill apply?" [shape=diamond];
+ "Invoke Skill tool" [shape=box];
+ "Announce: 'Using [skill] to [purpose]'" [shape=box];
+ "Has checklist?" [shape=diamond];
+ "Create TodoWrite todo per item" [shape=box];
+ "Follow skill exactly" [shape=box];
+ "Respond (including clarifications)" [shape=doublecircle];
+
+ "About to EnterPlanMode?" -> "Already brainstormed?";
+ "Already brainstormed?" -> "Invoke brainstorming skill" [label="no"];
+ "Already brainstormed?" -> "Might any skill apply?" [label="yes"];
+ "Invoke brainstorming skill" -> "Might any skill apply?";
+
+ "User message received" -> "Might any skill apply?";
+ "Might any skill apply?" -> "Invoke Skill tool" [label="yes, even 1%"];
+ "Might any skill apply?" -> "Respond (including clarifications)" [label="definitely not"];
+ "Invoke Skill tool" -> "Announce: 'Using [skill] to [purpose]'";
+ "Announce: 'Using [skill] to [purpose]'" -> "Has checklist?";
+ "Has checklist?" -> "Create TodoWrite todo per item" [label="yes"];
+ "Has checklist?" -> "Follow skill exactly" [label="no"];
+ "Create TodoWrite todo per item" -> "Follow skill exactly";
+}
+```
+
+## Red Flags
+
+These thoughts mean STOP—you're rationalizing:
+
+| Thought | Reality |
+|---------|---------|
+| "This is just a simple question" | Questions are tasks. Check for skills. |
+| "I need more context first" | Skill check comes BEFORE clarifying questions. |
+| "Let me explore the codebase first" | Skills tell you HOW to explore. Check first. |
+| "I can check git/files quickly" | Files lack conversation context. Check for skills. |
+| "Let me gather information first" | Skills tell you HOW to gather information. |
+| "This doesn't need a formal skill" | If a skill exists, use it. |
+| "I remember this skill" | Skills evolve. Read current version. |
+| "This doesn't count as a task" | Action = task. Check for skills. |
+| "The skill is overkill" | Simple things become complex. Use it. |
+| "I'll just do this one thing first" | Check BEFORE doing anything. |
+| "This feels productive" | Undisciplined action wastes time. Skills prevent this. |
+| "I know what that means" | Knowing the concept ≠ using the skill. Invoke it. |
+
+## Skill Priority
+
+When multiple skills could apply, use this order:
+
+1. **Process skills first** (brainstorming, debugging) - these determine HOW to approach the task
+2. **Implementation skills second** (frontend-design, mcp-builder) - these guide execution
+
+"Let's build X" → brainstorming first, then implementation skills.
+"Fix this bug" → debugging first, then domain-specific skills.
+
+## Skill Types
+
+**Rigid** (TDD, debugging): Follow exactly. Don't adapt away discipline.
+
+**Flexible** (patterns): Adapt principles to context.
+
+The skill itself tells you which.
+
+## User Instructions
+
+Instructions say WHAT, not HOW. "Add X" or "Fix Y" doesn't mean skip workflows.
diff --git a/skills/using-superpowers/using-superpowers b/skills/using-superpowers/using-superpowers
new file mode 120000
index 0000000..f9b3aca
--- /dev/null
+++ b/skills/using-superpowers/using-superpowers
@@ -0,0 +1 @@
+/home/localadmin/src/agent-skills/skills/using-superpowers/
\ No newline at end of file
diff --git a/skills/vercel-composition-patterns/AGENTS.md b/skills/vercel-composition-patterns/AGENTS.md
new file mode 100644
index 0000000..558bf9a
--- /dev/null
+++ b/skills/vercel-composition-patterns/AGENTS.md
@@ -0,0 +1,946 @@
+# React Composition Patterns
+
+**Version 1.0.0**
+Engineering
+January 2026
+
+> **Note:**
+> This document is mainly for agents and LLMs to follow when maintaining,
+> generating, or refactoring React codebases using composition. Humans
+> may also find it useful, but guidance here is optimized for automation
+> and consistency by AI-assisted workflows.
+
+---
+
+## Abstract
+
+Composition patterns for building flexible, maintainable React components. Avoid boolean prop proliferation by using compound components, lifting state, and composing internals. These patterns make codebases easier for both humans and AI agents to work with as they scale.
+
+---
+
+## Table of Contents
+
+1. [Component Architecture](#1-component-architecture) — **HIGH**
+ - 1.1 [Avoid Boolean Prop Proliferation](#11-avoid-boolean-prop-proliferation)
+ - 1.2 [Use Compound Components](#12-use-compound-components)
+2. [State Management](#2-state-management) — **MEDIUM**
+ - 2.1 [Decouple State Management from UI](#21-decouple-state-management-from-ui)
+ - 2.2 [Define Generic Context Interfaces for Dependency Injection](#22-define-generic-context-interfaces-for-dependency-injection)
+ - 2.3 [Lift State into Provider Components](#23-lift-state-into-provider-components)
+3. [Implementation Patterns](#3-implementation-patterns) — **MEDIUM**
+ - 3.1 [Create Explicit Component Variants](#31-create-explicit-component-variants)
+ - 3.2 [Prefer Composing Children Over Render Props](#32-prefer-composing-children-over-render-props)
+4. [React 19 APIs](#4-react-19-apis) — **MEDIUM**
+ - 4.1 [React 19 API Changes](#41-react-19-api-changes)
+
+---
+
+## 1. Component Architecture
+
+**Impact: HIGH**
+
+Fundamental patterns for structuring components to avoid prop
+proliferation and enable flexible composition.
+
+### 1.1 Avoid Boolean Prop Proliferation
+
+**Impact: CRITICAL (prevents unmaintainable component variants)**
+
+Don't add boolean props like `isThread`, `isEditing`, `isDMThread` to customize
+component behavior. Each boolean doubles the possible states and creates
+unmaintainable conditional logic. Use composition instead.
+
+**Incorrect: boolean props create exponential complexity**
+
+```tsx
+function Composer({
+ onSubmit,
+ isThread,
+ channelId,
+ isDMThread,
+ dmId,
+ isEditing,
+ isForwarding,
+}: Props) {
+  // ...renders a different UI for every combination of boolean flags
+  return null
+}
+```
+
+**Correct: composition eliminates conditionals**
+
+```tsx
+// Note: layouts are illustrative; SendToChannelField, CancelEditButton and
+// SaveEditButton are stand-in names.
+
+// Channel composer
+function ChannelComposer() {
+  return (
+    <Composer.Provider>
+      <Composer.Frame>
+        <Composer.Header />
+        <Composer.Input />
+        <Composer.Footer>
+          <Composer.Attachments />
+          <Composer.Emojis />
+          <Composer.Submit />
+        </Composer.Footer>
+      </Composer.Frame>
+    </Composer.Provider>
+  )
+}
+
+// Thread composer - adds "also send to channel" field
+function ThreadComposer({ channelId }: { channelId: string }) {
+  return (
+    <Composer.Provider>
+      <Composer.Frame>
+        <Composer.Input />
+        <SendToChannelField channelId={channelId} />
+        <Composer.Footer>
+          <Composer.Attachments />
+          <Composer.Submit />
+        </Composer.Footer>
+      </Composer.Frame>
+    </Composer.Provider>
+  )
+}
+
+// Edit composer - different footer actions
+function EditComposer() {
+  return (
+    <Composer.Provider>
+      <Composer.Frame>
+        <Composer.Input />
+        <Composer.Footer>
+          <CancelEditButton />
+          <SaveEditButton />
+        </Composer.Footer>
+      </Composer.Frame>
+    </Composer.Provider>
+  )
+}
+```
+
+Each variant is explicit about what it renders. We can share internals without
+sharing a single monolithic parent.
+
+### 1.2 Use Compound Components
+
+**Impact: HIGH (enables flexible composition without prop drilling)**
+
+Structure complex components as compound components with a shared context. Each
+subcomponent accesses shared state via context, not props. Consumers compose the
+pieces they need.
+
+**Incorrect: monolithic component with render props**
+
+```tsx
+function Composer({
+ renderHeader,
+ renderFooter,
+ renderActions,
+ showAttachments,
+ showFormatting,
+ showEmojis,
+}: Props) {
+  // ...calls renderHeader()/renderFooter()/renderActions() and toggles
+  // sections off the boolean flags
+  return null
+}
+```
+
+**Correct: compound components with shared context**
+
+```tsx
+const ComposerContext = createContext<ComposerContextValue | null>(null) // value type name assumed
+
+function ComposerProvider({ children, state, actions, meta }: ProviderProps) {
+  return (
+    <ComposerContext.Provider value={{ state, actions, meta }}>
+      {children}
+    </ComposerContext.Provider>
+  )
+}
+
+function ComposerFrame({ children }: { children: React.ReactNode }) {
+  return <div className="composer-frame">{children}</div>
+}
+
+function ComposerInput() {
+ const {
+ state,
+ actions: { update },
+ meta: { inputRef },
+ } = use(ComposerContext)
+  return (
+    <TextInput // stand-in for the app's input primitive
+      ref={inputRef}
+      value={state.input}
+      onChange={(text) => update((s) => ({ ...s, input: text }))}
+    />
+ )
+}
+
+function ComposerSubmit() {
+ const {
+ actions: { submit },
+ } = use(ComposerContext)
+  return <button onClick={submit}>Send</button>
+}
+
+// Export as compound component
+const Composer = {
+ Provider: ComposerProvider,
+ Frame: ComposerFrame,
+ Input: ComposerInput,
+ Submit: ComposerSubmit,
+ Header: ComposerHeader,
+ Footer: ComposerFooter,
+ Attachments: ComposerAttachments,
+ Formatting: ComposerFormatting,
+ Emojis: ComposerEmojis,
+}
+```
+
+**Usage:**
+
+```tsx
+
+