# CI/CD Pipeline Guide
> **Load this guide when:** Adding Docker build/push steps, configuring Woodpecker CI pipelines, publishing packages to registries, or implementing CI/CD for a new project.
## Overview
This guide covers the canonical CI/CD pattern used across projects. The pipeline runs in Woodpecker CI and follows this flow:
```
GIT PUSH
  ↓
QUALITY GATES (lint, typecheck, test, audit)
  ↓ all pass
BUILD (compile all packages)
  ↓ only on main/develop/tags
DOCKER BUILD & PUSH (Kaniko → Gitea Container Registry)
  ↓ all images pushed
PACKAGE LINKING (associate images with repository in Gitea)
```
## Reference Implementations
### Split Pipelines (Preferred for Monorepos)
**Mosaic Telemetry** (`~/src/mosaic-telemetry-monorepo/.woodpecker/`) is the canonical example of **split per-package pipelines** with path filtering, full security chain (source + container scanning), and efficient CI resource usage.
**Key features:**
- One YAML per package in `.woodpecker/` directory
- Path filtering: only the affected package's pipeline runs on push
- Security chain: source scanning (bandit/npm audit) + dependency audit (pip-audit) + container scanning (Trivy)
- Docker build gates on ALL quality steps
**Always use this pattern for monorepos.** It saves CI minutes and isolates failures.
### Single Pipeline (Legacy/Simple Projects)
**Mosaic Stack** (`~/src/mosaic-stack/.woodpecker/build.yml`) uses a single pipeline that builds everything on every push. This works but wastes CI resources on large monorepos. **Mosaic Stack is scheduled for migration to split pipelines.**
Always read the telemetry pipelines first when implementing a new pipeline.
## Infrastructure Instances
| Project | Gitea | Woodpecker | Registry |
|---------|-------|------------|----------|
| Mosaic Stack | `git.mosaicstack.dev` | `ci.mosaicstack.dev` | `git.mosaicstack.dev` |
| U-Connect | `git.uscllc.com` | `woodpecker.uscllc.net` | `git.uscllc.com` |
The patterns are identical — only the hostnames and org/repo names differ.
## Woodpecker Pipeline Structure
### YAML Anchors (DRY)
Define reusable values at the top of `.woodpecker.yml`:
```yaml
variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  # For pnpm projects, use instead:
  # - &install_deps |
  #     corepack enable
  #     pnpm install --frozen-lockfile
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"REGISTRY_HOST\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json
```
Replace `REGISTRY_HOST` with the actual Gitea hostname (e.g., `git.uscllc.com`).
### Step Dependencies
Woodpecker runs steps in parallel by default. Use `depends_on` to create the dependency graph:
```yaml
steps:
  install:
    image: *node_image
    commands:
      - *install_deps

  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on:
      - install

  typecheck:
    image: *node_image
    commands:
      - npm run type-check
    depends_on:
      - install

  test:
    image: *node_image
    commands:
      - npm run test
    depends_on:
      - install

  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on:
      - lint
      - typecheck
      - test
```
### Conditional Execution
Use `when` clauses to limit expensive steps (Docker builds) to relevant branches:
```yaml
# Top-level: run quality gates on everything
when:
  - event: [push, pull_request, manual]

# Per-step: only build Docker images on main/develop/tags
steps:
  docker-build-api:
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
```
## Docker Build & Push with Kaniko
### Why Kaniko
Kaniko builds container images without requiring a Docker daemon. This is the standard approach in Woodpecker CI because:
- No privileged mode needed
- No Docker-in-Docker security concerns
- Multi-destination tagging in a single build
- Works in any container runtime
### Kaniko Step Template
```yaml
docker-build-SERVICE:
  image: gcr.io/kaniko-project/executor:debug
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
    CI_COMMIT_TAG: ${CI_COMMIT_TAG}
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
  commands:
    - *kaniko_setup
    - |
      DESTINATIONS="--destination REGISTRY/ORG/IMAGE_NAME:${CI_COMMIT_SHA:0:8}"
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:latest"
      elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:dev"
      fi
      if [ -n "$CI_COMMIT_TAG" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:$CI_COMMIT_TAG"
      fi
      /kaniko/executor --context . --dockerfile PATH/TO/Dockerfile $DESTINATIONS
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - build
```
**Replace these placeholders:**
| Placeholder | Example (Mosaic) | Example (U-Connect) |
|-------------|-------------------|----------------------|
| `REGISTRY` | `git.mosaicstack.dev` | `git.uscllc.com` |
| `ORG` | `mosaic` | `usc` |
| `IMAGE_NAME` | `stack-api` | `uconnect-backend-api` |
| `PATH/TO/Dockerfile` | `apps/api/Dockerfile` | `src/backend-api/Dockerfile` |
### Image Tagging Strategy
Every build produces multiple tags:
| Condition | Tag | Purpose |
|-----------|-----|---------|
| Always | `${CI_COMMIT_SHA:0:8}` | Immutable reference to exact commit |
| `main` branch | `latest` | Current production release |
| `develop` branch | `dev` | Current development build |
| Git tag (e.g., `v1.0.0`) | `v1.0.0` | Semantic version release |
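The branch/tag-to-tags mapping above can be sketched as a small POSIX shell function (illustrative only; the Kaniko steps inline the same logic):

```shell
# compute_tags BRANCH GIT_TAG SHA
# Prints one image tag per line, mirroring the table above.
compute_tags() {
  branch="$1"; git_tag="$2"; sha="$3"
  printf '%.8s\n' "$sha"                              # immutable short-SHA tag, always
  if [ "$branch" = "main" ]; then echo "latest"; fi   # current production release
  if [ "$branch" = "develop" ]; then echo "dev"; fi   # current development build
  if [ -n "$git_tag" ]; then echo "$git_tag"; fi      # semantic version release
}

compute_tags develop "" abcdef1234567890   # prints "abcdef12" then "dev"
```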
### Kaniko Options
Common flags for `/kaniko/executor`:
| Flag | Purpose |
|------|---------|
| `--context .` | Build context directory |
| `--dockerfile path/Dockerfile` | Dockerfile location |
| `--destination registry/org/image:tag` | Push target (repeatable) |
| `--build-arg KEY=VALUE` | Pass build arguments |
| `--cache=true` | Enable layer caching |
| `--cache-repo registry/org/image-cache` | Cache storage location |
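For example, layer caching can be enabled by combining the last two flags (a sketch; the `-cache` repo name is illustrative and Kaniko will push cache layers to it):

```yaml
- |
  /kaniko/executor --context . --dockerfile apps/api/Dockerfile \
    --cache=true \
    --cache-repo REGISTRY/ORG/IMAGE_NAME-cache \
    $DESTINATIONS
```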
### Build Arguments
Pass environment-specific values at build time:
```shell
/kaniko/executor --context . --dockerfile apps/web/Dockerfile \
  --build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
  $DESTINATIONS
```
## Gitea Container Registry
### How It Works
Gitea has a built-in container registry. When you push an image to `git.example.com/org/image:tag`, Gitea stores it and makes it available in the Packages section.
### Authentication
Kaniko authenticates via a Docker config file created at pipeline start:
```json
{
  "auths": {
    "git.example.com": {
      "username": "GITEA_USER",
      "password": "GITEA_TOKEN"
    }
  }
}
```
The token must have `package:write` scope. Generate it at: `https://GITEA_HOST/user/settings/applications`
### Pulling Images
After pushing, images are available at:
```bash
docker pull git.example.com/org/image:tag
```
In `docker-compose.yml`:
```yaml
services:
  api:
    image: git.example.com/org/image:${IMAGE_TAG:-dev}
```
## Package Linking
After pushing images to the Gitea registry, link them to the source repository so they appear on the repository's Packages tab.
### Gitea Package Linking API
```
POST /api/v1/packages/{owner}/{type}/{name}/-/link/{repo}
```
| Parameter | Value |
|-----------|-------|
| `owner` | Organization name (e.g., `mosaic`, `usc`) |
| `type` | `container` |
| `name` | Image name (e.g., `stack-api`) |
| `repo` | Repository name (e.g., `stack`, `uconnect`) |
### Link Step Template
```yaml
link-packages:
  image: alpine:3
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - apk add --no-cache curl
    - echo "Waiting 10 seconds for packages to be indexed..."
    - sleep 10
    - |
      set -e
      link_package() {
        PKG="$$1"
        echo "Linking $$PKG..."
        for attempt in 1 2 3; do
          STATUS=$$(curl -s -o /tmp/link-response.txt -w "%{http_code}" -X POST \
            -H "Authorization: token $$GITEA_TOKEN" \
            "https://GITEA_HOST/api/v1/packages/ORG/container/$$PKG/-/link/REPO")
          if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ]; then
            echo "  Linked $$PKG"
            return 0
          elif [ "$$STATUS" = "400" ]; then
            echo "  $$PKG already linked"
            return 0
          elif [ "$$STATUS" = "404" ] && [ $$attempt -lt 3 ]; then
            echo "  $$PKG not found yet, retrying in 5s (attempt $$attempt/3)..."
            sleep 5
          else
            echo "  FAILED: $$PKG status $$STATUS"
            cat /tmp/link-response.txt
            return 1
          fi
        done
      }
      link_package "image-name-1"
      link_package "image-name-2"
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - docker-build-image-1
    - docker-build-image-2
```
**Replace:** `GITEA_HOST`, `ORG`, `REPO`, and the `link_package` calls with actual image names.
**Note on `$$`:** Woodpecker uses `$$` to escape `$` in shell commands within YAML. Use `$$` for shell variables and `${CI_*}` (single `$`) for Woodpecker CI variables.
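A hypothetical step showing both forms side by side (the `demo` step and its commands are illustrative):

```yaml
demo:
  image: alpine:3
  commands:
    - |
      SHORT=$$(echo "${CI_COMMIT_SHA}" | cut -c1-8)    # $$( ) escapes a shell substitution
      echo "building $$SHORT of ${CI_COMMIT_BRANCH}"   # $$VAR is a shell variable, ${CI_*} a CI variable
```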
### Status Codes
| Code | Meaning | Action |
|------|---------|--------|
| 201 | Created | Success |
| 204 | No content | Success |
| 400 | Bad request | Already linked (OK) |
| 404 | Not found | Retry — package may not be indexed yet |
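The retry policy in this table can be expressed as a small decision helper (a sketch; the link step above inlines the same logic):

```shell
# link_outcome STATUS ATTEMPT MAX_ATTEMPTS
# Echoes "ok", "retry", or "fail" per the status-code table above.
link_outcome() {
  status="$1"; attempt="$2"; max="$3"
  case "$status" in
    201|204) echo ok ;;      # created / no content
    400)     echo ok ;;      # already linked
    404)
      # package may not be indexed yet: retry until attempts run out
      if [ "$attempt" -lt "$max" ]; then echo retry; else echo fail; fi ;;
    *)       echo fail ;;    # anything else is a hard failure
  esac
}

link_outcome 404 1 3   # prints "retry"
```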
### Known Issue
The Gitea package linking API (added in Gitea 1.24.0) can return 404 for recently pushed packages. The retry logic with 5-second delays handles this. If linking still fails, packages are usable — they just won't appear on the repository Packages tab. They can be linked manually via the Gitea web UI.
## Woodpecker Secrets
### Required Secrets
Configure these in the Woodpecker UI (Settings > Secrets) or via CLI:
| Secret Name | Value | Scope |
|-------------|-------|-------|
| `gitea_username` | Gitea username or service account | `push`, `manual`, `tag` |
| `gitea_token` | Gitea token with `package:write` scope | `push`, `manual`, `tag` |
### Setting Secrets via CLI
```bash
# Woodpecker CLI
woodpecker secret add ORG/REPO --name gitea_username --value "USERNAME"
woodpecker secret add ORG/REPO --name gitea_token --value "TOKEN"
```
### Security Rules
- Never hardcode tokens in pipeline YAML
- Use `from_secret` for all credentials
- Limit secret event scope (don't expose on `pull_request` from forks)
- Use dedicated service accounts, not personal tokens
- Rotate tokens periodically
## npm Package Publishing
For projects with publishable npm packages (e.g., shared libraries, design systems).
### Publishing to Gitea npm Registry
Gitea includes a built-in npm registry at `https://GITEA_HOST/api/packages/ORG/npm/`.
**Pipeline step:**
```yaml
publish-packages:
  image: *node_image
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - |
      echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
      echo "@SCOPE:registry=https://GITEA_HOST/api/packages/ORG/npm/" >> .npmrc
    - npm publish -w @SCOPE/package-name
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - build
```
**Replace:** `GITEA_HOST`, `ORG`, `SCOPE`, `package-name`.
### Why Gitea npm (not Verdaccio)
Gitea's built-in npm registry eliminates the need for a separate Verdaccio instance. Benefits:
- **Same auth** — Gitea token with `package:write` scope works for git, containers, AND npm
- **No extra service** — No Verdaccio container, no OAuth/Authentik integration, no separate compose stack
- **Same UI** — Packages appear alongside container images in Gitea's Packages tab
- **Same secrets** — `gitea_token` in Woodpecker handles both Docker push and npm publish
If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), migrate to Gitea npm. See the migration checklist below.
### Versioning
Only publish when the version in `package.json` has changed. Add a version check:
```yaml
commands:
  - |
    CURRENT=$(node -p "require('./src/PACKAGE/package.json').version")
    PUBLISHED=$(npm view @SCOPE/PACKAGE version 2>/dev/null || echo "0.0.0")
    if [ "$CURRENT" = "$PUBLISHED" ]; then
      echo "Version $CURRENT already published, skipping"
      exit 0
    fi
    echo "Publishing $CURRENT (was $PUBLISHED)"
    npm publish -w @SCOPE/PACKAGE
```
## CI Services (Test Databases)
For projects that need a database during CI (migrations, integration tests):
```yaml
services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: test_db
      POSTGRES_USER: test_user
      POSTGRES_PASSWORD: test_password

steps:
  test:
    image: *node_image
    environment:
      DATABASE_URL: "postgresql://test_user:test_password@postgres:5432/test_db?schema=public"
    commands:
      - npm run test
    depends_on:
      - install
```
The service name (`postgres`) becomes the hostname within the pipeline network.
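If tests race the database startup, a short readiness wait before running them helps (a sketch using busybox `nc`, which is present in alpine-based images; the timeout value is arbitrary):

```yaml
test:
  image: *node_image
  environment:
    DATABASE_URL: "postgresql://test_user:test_password@postgres:5432/test_db?schema=public"
  commands:
    - |
      # Wait up to ~30s for the postgres service to accept TCP connections
      i=0; until nc -z postgres 5432; do i=$((i+1)); [ "$i" -ge 30 ] && exit 1; sleep 1; done
    - npm run test
  depends_on:
    - install
```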
## Split Pipelines for Monorepos (REQUIRED)
For any monorepo with multiple packages/apps, use **split pipelines** — one YAML per package in `.woodpecker/`.
### Why Split?
| Aspect | Single pipeline | Split pipelines |
|--------|----------------|-----------------|
| Path filtering | None — everything rebuilds | Per-package — only affected code |
| Security scanning | Often missing | Required per-package |
| CI minutes | Wasted on unaffected packages | Efficient |
| Failure isolation | One failure blocks everything | Per-package failures isolated |
| Readability | One massive file | Focused, maintainable |
### Structure
```
.woodpecker/
├── api.yml # Only runs when apps/api/** changes
├── web.yml # Only runs when apps/web/** changes
└── (infra.yml) # Optional: shared infra (DB images, etc.)
```
**IMPORTANT:** Do NOT also have `.woodpecker.yml` at root — `.woodpecker/` directory takes precedence and the `.yml` file will be silently ignored.
### Path Filtering Template
```yaml
when:
  - event: [push, pull_request, manual]
    path:
      include: ['apps/api/**', '.woodpecker/api.yml']
```
Each pipeline self-triggers on its own YAML changes. Manual triggers run regardless of path.
### Kaniko Context Scoping
In split pipelines, scope the Kaniko context to the app directory:
```shell
/kaniko/executor --context apps/api --dockerfile apps/api/Dockerfile $$DESTINATIONS
```
This means Dockerfile `COPY . .` only copies the app's files, not the entire monorepo.
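For example, an app-scoped Dockerfile might look like this (a minimal sketch; the stage layout and entrypoint path are illustrative):

```dockerfile
# apps/api/Dockerfile: built with --context apps/api, so paths are app-relative
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Copies only apps/api/**, not the monorepo root
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```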
### Reference: Telemetry Split Pipeline
See `~/src/mosaic-telemetry-monorepo/.woodpecker/api.yml` and `web.yml` for a complete working example with path filtering, security chain, and Trivy scanning.
## Security Scanning (REQUIRED)
Every pipeline MUST include security scanning. Docker build steps MUST gate on all security steps passing.
### Source-Level Security (per tech stack)
**Python:**
```yaml
security-bandit:
  image: *uv_image
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run bandit -r src/ -f screen
  depends_on: [install]

security-audit:
  image: *uv_image
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run pip-audit
  depends_on: [install]
```
**Node.js:**
```yaml
security-audit:
  image: node:22-alpine
  commands:
    - cd apps/web && npm audit --audit-level=high
  depends_on: [install]
```
### Container Scanning (Trivy) — Post-Build
Run Trivy against every built image to catch OS-level and runtime vulnerabilities:
```yaml
security-trivy:
  image: aquasec/trivy:latest
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
  commands:
    - |
      mkdir -p ~/.docker
      echo "{\"auths\":{\"REGISTRY\":{\"username\":\"$$GITEA_USER\",\"password\":\"$$GITEA_TOKEN\"}}}" > ~/.docker/config.json
      trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed \
        REGISTRY/ORG/IMAGE:$${CI_COMMIT_SHA:0:8}
  when:
    - branch: [main, develop]
      event: [push, manual, tag]
  depends_on:
    - docker-build-SERVICE
```
**Replace:** `REGISTRY`, `ORG`, `IMAGE`, `SERVICE`.
### Full Dependency Chain
```
install → [lint, typecheck, security-source, security-deps, test] → docker-build → trivy → link-package
```
Docker build MUST depend on ALL quality + security steps. Trivy runs AFTER build. Package linking runs AFTER Trivy.
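Expressed as `depends_on` clauses (step names follow the templates above):

```yaml
docker-build-SERVICE:
  depends_on: [lint, typecheck, security-bandit, security-audit, test]

security-trivy:
  depends_on: [docker-build-SERVICE]

link-packages:
  depends_on: [security-trivy]
```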
## Monorepo Considerations
### pnpm + Turbo (Mosaic Stack pattern)
```yaml
variables:
  - &install_deps |
      corepack enable
      pnpm install --frozen-lockfile

steps:
  build:
    commands:
      - *install_deps
      - pnpm build  # Turbo handles dependency order and caching
```
### npm Workspaces (U-Connect pattern)
```yaml
variables:
  - &install_deps |
      corepack enable
      npm ci

steps:
  # Build shared dependencies first
  build-deps:
    commands:
      - npm run build -w @scope/shared-auth
      - npm run build -w @scope/shared-types

  # Then build everything
  build-all:
    commands:
      - npm run build -w @scope/package-1
      - npm run build -w @scope/package-2
      # ... in dependency order
    depends_on:
      - build-deps
```
### Per-Package Quality Checks
For large monorepos, run checks per-package in parallel:
```yaml
lint-api:
  commands:
    - npm run lint -w @scope/api
  depends_on: [install]

lint-web:
  commands:
    - npm run lint -w @scope/web
  depends_on: [install]

# These run in parallel since they share the same dependency
```
## Complete Pipeline Example
This is a minimal but complete pipeline for a project with two services:
```yaml
when:
  - event: [push, pull_request, manual]

variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"git.example.com\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json

steps:
  # === Quality Gates ===
  install:
    image: *node_image
    commands:
      - *install_deps

  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on: [install]

  test:
    image: *node_image
    commands:
      - npm run test
    depends_on: [install]

  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on: [lint, test]

  # === Docker Build & Push ===
  docker-build-api:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
    commands:
      - *kaniko_setup
      - |
        DESTINATIONS="--destination git.example.com/org/api:${CI_COMMIT_SHA:0:8}"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:latest"
        elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:dev"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/api/Dockerfile $DESTINATIONS
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on: [build]

  docker-build-web:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
    commands:
      - *kaniko_setup
      - |
        DESTINATIONS="--destination git.example.com/org/web:${CI_COMMIT_SHA:0:8}"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:latest"
        elif [ "$CI_COMMIT_BRANCH" = "develop" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:dev"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/web/Dockerfile $DESTINATIONS
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on: [build]

  # === Package Linking ===
  link-packages:
    image: alpine:3
    environment:
      GITEA_TOKEN:
        from_secret: gitea_token
    commands:
      - apk add --no-cache curl
      - sleep 10
      - |
        set -e
        link_package() {
          PKG="$$1"
          for attempt in 1 2 3; do
            STATUS=$$(curl -s -o /dev/null -w "%{http_code}" -X POST \
              -H "Authorization: token $$GITEA_TOKEN" \
              "https://git.example.com/api/v1/packages/org/container/$$PKG/-/link/repo")
            if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ] || [ "$$STATUS" = "400" ]; then
              echo "Linked $$PKG ($$STATUS)"
              return 0
            elif [ $$attempt -lt 3 ]; then
              sleep 5
            else
              echo "FAILED: $$PKG ($$STATUS)"
              return 1
            fi
          done
        }
        link_package "api"
        link_package "web"
    when:
      - branch: [main, develop]
        event: [push, manual, tag]
    depends_on:
      - docker-build-api
      - docker-build-web
```
## Checklist: Adding CI/CD to a Project
1. **Verify Dockerfiles exist** for each service that needs an image
2. **Create Woodpecker secrets** (`gitea_username`, `gitea_token`) in the Woodpecker UI
3. **Verify Gitea token scope** includes `package:write`
4. **Add Docker build steps** to `.woodpecker.yml` using the Kaniko template above
5. **Add package linking step** after all Docker builds
6. **Update `docker-compose.yml`** to reference registry images instead of local builds:
```yaml
image: git.example.com/org/service:${IMAGE_TAG:-dev}
```
7. **Test on develop branch first** — push a small change and verify the pipeline
8. **Verify images appear** in Gitea Packages tab after successful pipeline
## Gitea as Unified Platform
Gitea provides **three services in one**, eliminating the need for separate Harbor and Verdaccio deployments:
| Service | What Gitea Replaces | Registry URL |
|---------|---------------------|-------------|
| **Git hosting** | GitHub/GitLab | `https://GITEA_HOST/org/repo` |
| **Container registry** | Harbor, Docker Hub | `docker pull GITEA_HOST/org/image:tag` |
| **npm registry** | Verdaccio, Artifactory | `https://GITEA_HOST/api/packages/org/npm/` |
### Additional Package Types
Gitea also supports PyPI, Maven, NuGet, Cargo, Composer, Conan, Conda, Generic, and more. All use the same token authentication.
### Single Token, Three Services
A Gitea token with `package:write` scope handles:
- `git push` / `git pull`
- `docker push` / `docker pull` (container registry)
- `npm publish` / `npm install` (npm registry)
This means a single `gitea_token` secret in Woodpecker CI covers all CI/CD operations.
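As an illustration, the same token wired three ways (placeholder hostname and names, not project-specific commands):

```shell
# Container registry login with the Gitea token
echo "$GITEA_TOKEN" | docker login git.example.com -u USERNAME --password-stdin

# npm auth against the same host, same token
npm config set //git.example.com/api/packages/org/npm/:_authToken "$GITEA_TOKEN"

# git over HTTPS accepts the token as the password for USERNAME
git clone https://git.example.com/org/repo.git
```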
### Architecture Simplification
**Before (3 services):**
```
Gitea (git) + Harbor (containers) + Verdaccio (npm)
↓ separate auth ↓ separate auth ↓ OAuth/Authentik
3 tokens 1 robot account 1 OIDC integration
3 backup targets complex RBAC group-based access
```
**After (1 service):**
```
Gitea (git + containers + npm)
↓ single token
1 secret in Woodpecker
1 backup target
unified RBAC via Gitea teams
```
## Migrating from Verdaccio to Gitea npm
If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), follow this migration checklist:
### Migration Steps
1. **Verify Gitea npm registry is accessible:**
```bash
curl -s https://GITEA_HOST/api/packages/ORG/npm/ | head -5
```
2. **Update `.npmrc` in project root:**
```ini
# Before (Verdaccio)
@uconnect:registry=https://npm.uscllc.net
# After (Gitea)
@uconnect:registry=https://git.uscllc.com/api/packages/usc/npm/
```
3. **Update CI pipeline** — replace `npm_token` secret with `gitea_token`:
```yaml
# Uses same token as Docker push — no extra secret needed
echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
```
4. **Re-publish existing packages** to Gitea registry:
```bash
# For each @scope/package
npm publish -w @scope/package --registry https://GITEA_HOST/api/packages/ORG/npm/
```
5. **Update consumer projects** — any project that `npm install`s from the old registry needs its `.npmrc` updated
6. **Remove Verdaccio infrastructure:**
- Docker compose stack (`compose.verdaccio.yml`)
- Authentik OAuth provider/blueprints
- Verdaccio config files
- DNS entry for `npm.uscllc.net` (eventually)
### What You Can Remove
| Component | Location | Purpose (was) |
|-----------|----------|---------------|
| Verdaccio compose | `compose.verdaccio.yml` | npm registry container |
| Verdaccio config | `config/verdaccio/` | Server configuration |
| Authentik blueprints | `config/authentik/blueprints/*/verdaccio-*` | OAuth integration |
| Verdaccio scripts | `scripts/verdaccio/` | Blueprint application |
| OIDC env vars | `.env` | `AUTHENTIK_VERDACCIO_*`, `VERDACCIO_OPENID_*` |
## Troubleshooting
### "unauthorized: authentication required"
- Verify `gitea_username` and `gitea_token` secrets are set in Woodpecker
- Verify the token has `package:write` scope
- Check the registry hostname in `kaniko_setup` matches the Gitea instance
### Kaniko build fails with "error building image"
- Verify the Dockerfile path is correct relative to `--context`
- Check that multi-stage builds don't reference stages that don't exist
- Run `docker build` locally first to verify the Dockerfile works
### Package linking returns 404
- Normal for recently pushed packages — the retry logic handles this
- If persistent: verify the package name matches exactly (case-sensitive)
- Check Gitea version is 1.24.0+ (package linking API requirement)
### Images not visible in Gitea Packages
- Linking may have failed — check the `link-packages` step logs
- Images are still usable via `docker pull` even without linking
- Link manually: Gitea UI > Packages > Select package > Link to repository
### Pipeline runs Docker builds on pull requests
- Verify `when` clause on Docker build steps restricts to `branch: [main, develop]`
- Pull requests should only run quality gates, not build/push images