bootstrap/guides/ci-cd-pipelines.md (2026-02-21)

# CI/CD Pipeline Guide
> **Load this guide when:** Adding Docker build/push steps, configuring Woodpecker CI pipelines, publishing packages to registries, or implementing CI/CD for a new project.
## Overview
This guide covers the canonical CI/CD pattern used across projects. The pipeline runs in Woodpecker CI and follows this flow:
```
GIT PUSH
   ↓
QUALITY GATES (lint, typecheck, test, audit)
   ↓ all pass
BUILD (compile all packages)
   ↓ only on main/tags
DOCKER BUILD & PUSH (Kaniko → Gitea Container Registry)
   ↓ all images pushed
PACKAGE LINKING (associate images with repository in Gitea)
```
## Reference Implementations
### Split Pipelines (Preferred for Monorepos)
**Mosaic Telemetry** (`~/src/mosaic-telemetry-monorepo/.woodpecker/`) is the canonical example of **split per-package pipelines** with path filtering, full security chain (source + container scanning), and efficient CI resource usage.
**Key features:**
- One YAML per package in `.woodpecker/` directory
- Path filtering: only the affected package's pipeline runs on push
- Security chain: source scanning (bandit/npm audit) + dependency audit (pip-audit) + container scanning (Trivy)
- Docker build gates on ALL quality steps
**Always use this pattern for monorepos.** It saves CI minutes and isolates failures.
### Single Pipeline (Legacy/Simple Projects)
**Mosaic Stack** (`~/src/mosaic-stack/.woodpecker/build.yml`) uses a single pipeline that builds everything on every push. This works but wastes CI resources on large monorepos. **Mosaic Stack is scheduled for migration to split pipelines.**
Always read the telemetry pipelines first when implementing a new pipeline.
## Infrastructure Instances
| Project | Gitea | Woodpecker | Registry |
|---------|-------|------------|----------|
| Mosaic Stack | `git.mosaicstack.dev` | `ci.mosaicstack.dev` | `git.mosaicstack.dev` |
| U-Connect | `git.uscllc.com` | `woodpecker.uscllc.net` | `git.uscllc.com` |
The patterns are identical — only the hostnames and org/repo names differ.
## Woodpecker Pipeline Structure
### YAML Anchors (DRY)
Define reusable values at the top of `.woodpecker.yml`:
```yaml
variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  # For pnpm projects, use:
  # - &install_deps |
  #     corepack enable
  #     pnpm install --frozen-lockfile
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"REGISTRY_HOST\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json
```
Replace `REGISTRY_HOST` with the actual Gitea hostname (e.g., `git.uscllc.com`).
### Step Dependencies
Woodpecker runs steps in parallel by default. Use `depends_on` to create the dependency graph:
```yaml
steps:
  install:
    image: *node_image
    commands:
      - *install_deps
  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on:
      - install
  typecheck:
    image: *node_image
    commands:
      - npm run type-check
    depends_on:
      - install
  test:
    image: *node_image
    commands:
      - npm run test
    depends_on:
      - install
  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on:
      - lint
      - typecheck
      - test
```
### Conditional Execution
Use `when` clauses to limit expensive steps (Docker builds) to relevant branches:
```yaml
# Top-level: run quality gates on everything
when:
  - event: [push, pull_request, manual]

# Per-step: only build Docker images on main/tags
steps:
  docker-build-api:
    when:
      - branch: [main]
        event: [push, manual, tag]
```
## Docker Build & Push with Kaniko
### Why Kaniko
Kaniko builds container images without requiring a Docker daemon. This is the standard approach in Woodpecker CI because:
- No privileged mode needed
- No Docker-in-Docker security concerns
- Multi-destination tagging in a single build
- Works in any container runtime
### Kaniko Step Template
```yaml
docker-build-SERVICE:
  image: gcr.io/kaniko-project/executor:debug
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    RELEASE_BASE_VERSION: ${RELEASE_BASE_VERSION}
    CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
    CI_COMMIT_TAG: ${CI_COMMIT_TAG}
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
    CI_PIPELINE_NUMBER: ${CI_PIPELINE_NUMBER}
  commands:
    - *kaniko_setup
    - |
      SHORT_SHA="${CI_COMMIT_SHA:0:8}"
      BUILD_ID="${CI_PIPELINE_NUMBER:-$SHORT_SHA}"
      BASE_VERSION="${RELEASE_BASE_VERSION:?RELEASE_BASE_VERSION is required (example: 0.0.1)}"
      DESTINATIONS="--destination REGISTRY/ORG/IMAGE_NAME:sha-$SHORT_SHA"
      if [ "$CI_COMMIT_BRANCH" = "main" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:v${BASE_VERSION}-rc.${BUILD_ID}"
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:testing"
      fi
      if [ -n "$CI_COMMIT_TAG" ]; then
        DESTINATIONS="$DESTINATIONS --destination REGISTRY/ORG/IMAGE_NAME:$CI_COMMIT_TAG"
      fi
      /kaniko/executor --context . --dockerfile PATH/TO/Dockerfile $DESTINATIONS
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - build
```
**Replace these placeholders:**
| Placeholder | Example (Mosaic) | Example (U-Connect) |
|-------------|-------------------|----------------------|
| `REGISTRY` | `git.mosaicstack.dev` | `git.uscllc.com` |
| `ORG` | `mosaic` | `usc` |
| `IMAGE_NAME` | `stack-api` | `uconnect-backend-api` |
| `PATH/TO/Dockerfile` | `apps/api/Dockerfile` | `src/backend-api/Dockerfile` |
### Image Tagging Strategy
Tagging MUST follow a two-layer model: immutable identity tags + mutable environment tags.
Immutable tags:
| Condition | Tag | Purpose |
|-----------|-----|---------|
| Always | `sha-${CI_COMMIT_SHA:0:8}` | Immutable reference to exact commit |
| `main` branch | `v{BASE_VERSION}-rc.{BUILD_ID}` | Intermediate release candidate for the active milestone |
| Git tag (e.g., `v1.0.0`) | `v1.0.0` | Semantic version release |
Mutable environment tags:
| Tag | Purpose |
|-----|---------|
| `testing` | Current candidate under situational validation |
| `staging` (optional) | Pre-production validation target |
| `prod` | Current production pointer |
Hard rules:
- Do NOT use `latest` for deployment.
- Do NOT use `dev` as the primary deployment tag.
- Deployments MUST resolve to an immutable image digest.
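The branch/tag conditions from the Kaniko template can be factored into a small POSIX-sh function for testing outside CI. This is a sketch only: the image name and all argument values are placeholders, and it uses `cut -c1-8` instead of the bash-only `${VAR:0:8}` so it also runs under busybox `sh`.

```shell
#!/bin/sh
# Sketch: the two-layer tag rules as a testable function.
# Image name and inputs are placeholders, not real CI values.
build_destinations() {
  sha="$1"; branch="$2"; git_tag="$3"; base_version="$4"; build_id="$5"
  image="git.example.com/org/api"
  # cut -c1-8 is POSIX; ${sha:0:8} would require bash
  short_sha=$(printf '%s' "$sha" | cut -c1-8)
  # Immutable identity tag: always present
  dests="--destination $image:sha-$short_sha"
  if [ "$branch" = "main" ]; then
    # RC tag for the active milestone + mutable testing pointer
    dests="$dests --destination $image:v$base_version-rc.$build_id"
    dests="$dests --destination $image:testing"
  fi
  if [ -n "$git_tag" ]; then
    # Semantic release tag on tagged builds
    dests="$dests --destination $image:$git_tag"
  fi
  printf '%s\n' "$dests"
}
```

Keeping this logic in one function makes the tagging policy auditable in a single place rather than duplicated per service.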
### Digest-First Promotion (Hard Rule)
Deploy and promote by digest, not by mutable tag:
1. Build and push candidate tags (`sha-*`, `vX.Y.Z-rc.N`, `testing`).
2. Resolve the digest from `sha-*` tag.
3. Deploy that digest to testing and run situational tests.
4. If green, promote the same digest to `staging`/`prod` tags.
5. Create final semantic release tag (`vX.Y.Z`) only at milestone completion.
Example with `crane`:
```bash
DIGEST=$(crane digest REGISTRY/ORG/IMAGE:sha-${CI_COMMIT_SHA:0:8})
crane tag REGISTRY/ORG/IMAGE@${DIGEST} testing
# after situational tests pass:
crane tag REGISTRY/ORG/IMAGE@${DIGEST} prod
```
### Deployment Strategy: Blue-Green Default
- Blue-green is the default release strategy for lights-out operation.
- Canary is OPTIONAL and allowed only when automated SLO/error-rate monitoring and rollback triggers are configured.
- If canary guardrails are missing, you MUST use blue-green.
### Image Retention and Cleanup (Hard Rule)
Registry cleanup MUST be automated (daily or weekly job).
Retention policy:
- Keep all final release tags (`vX.Y.Z`) indefinitely.
- Keep digests currently referenced by `prod` and `testing` tags.
- Keep the most recent 20 RC tags (`vX.Y.Z-rc.N`) per service.
- Delete RC and `sha-*` tags older than 30 days when they are not referenced by active environments/releases.
Before deleting any image/tag:
- Verify digest is not currently deployed.
- Verify digest is not referenced by any active release/tag notes.
- Log cleanup actions in CI job output.
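The retention rules above can be sketched as a pure decision function. This is a simplified illustration, not the full policy: it covers the age/reference rules only, and the "keep the most recent 20 RC tags per service" rule is omitted because it needs the full tag list. The function name is hypothetical.

```shell
#!/bin/sh
# Simplified sketch of the retention policy as a pure decision function.
# Inputs: tag name, age in days, and whether the digest is referenced
# by an active environment/release ("yes"/"no").
retention_decision() {
  tag="$1"; age_days="$2"; referenced="$3"
  case "$tag" in
    prod|testing|staging) echo keep ;;   # environment pointers: never delete
    v*-rc.*|sha-*)                       # RC and sha candidates: age + reference rules
      if [ "$referenced" = "yes" ] || [ "$age_days" -le 30 ]; then
        echo keep
      else
        echo delete
      fi ;;
    *) echo keep ;;                      # final releases and unknown tags: keep (fail safe)
  esac
}
```

Note the fail-safe default: anything the function does not recognize is kept, matching the rule that deletion requires positive verification.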
### Kaniko Options
Common flags for `/kaniko/executor`:
| Flag | Purpose |
|------|---------|
| `--context .` | Build context directory |
| `--dockerfile path/Dockerfile` | Dockerfile location |
| `--destination registry/org/image:tag` | Push target (repeatable) |
| `--build-arg KEY=VALUE` | Pass build arguments |
| `--cache=true` | Enable layer caching |
| `--cache-repo registry/org/image-cache` | Cache storage location |
### Build Arguments
Pass environment-specific values at build time:
```shell
/kaniko/executor --context . --dockerfile apps/web/Dockerfile \
  --build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
  $DESTINATIONS
```
## Gitea Container Registry
### How It Works
Gitea has a built-in container registry. When you push an image to `git.example.com/org/image:tag`, Gitea stores it and makes it available in the Packages section.
### Authentication
Kaniko authenticates via a Docker config file created at pipeline start:
```json
{
  "auths": {
    "git.example.com": {
      "username": "GITEA_USER",
      "password": "GITEA_TOKEN"
    }
  }
}
```
The token must have `package:write` scope. Generate it at: `https://GITEA_HOST/user/settings/applications`
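The `kaniko_setup` anchor builds this JSON with nested escaped quotes. A sketch of an equivalent `printf`-based variant (function name is illustrative, output is identical):

```shell
#!/bin/sh
# Sketch: generate the Docker auth config with printf instead of
# nested escaped quotes. Arguments are placeholders.
write_docker_config() {
  host="$1"; user="$2"; token="$3"
  printf '{"auths":{"%s":{"username":"%s","password":"%s"}}}' "$host" "$user" "$token"
}

# Usage inside a pipeline step would look like:
#   mkdir -p /kaniko/.docker
#   write_docker_config "git.example.com" "$GITEA_USER" "$GITEA_TOKEN" \
#     > /kaniko/.docker/config.json
```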
### Pulling Images
After pushing, images are available at:
```bash
docker pull git.example.com/org/image:tag
```
In `docker-compose.yml`:
```yaml
services:
  api:
    # Preferred: pin digest produced by CI and promoted by environment
    image: git.example.com/org/image@${IMAGE_DIGEST}
    # Optional channel pointer for non-prod:
    # image: git.example.com/org/image:${IMAGE_TAG:-testing}
```
## Package Linking
After pushing images to the Gitea registry, link them to the source repository so they appear on the repository's Packages tab.
### Gitea Package Linking API
```
POST /api/v1/packages/{owner}/{type}/{name}/-/link/{repo}
```
| Parameter | Value |
|-----------|-------|
| `owner` | Organization name (e.g., `mosaic`, `usc`) |
| `type` | `container` |
| `name` | Image name (e.g., `stack-api`) |
| `repo` | Repository name (e.g., `stack`, `uconnect`) |
### Link Step Template
```yaml
link-packages:
  image: alpine:3
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - apk add --no-cache curl
    - echo "Waiting 10 seconds for packages to be indexed..."
    - sleep 10
    - |
      set -e
      link_package() {
        PKG="$$1"
        echo "Linking $$PKG..."
        for attempt in 1 2 3; do
          STATUS=$$(curl -s -o /tmp/link-response.txt -w "%{http_code}" -X POST \
            -H "Authorization: token $$GITEA_TOKEN" \
            "https://GITEA_HOST/api/v1/packages/ORG/container/$$PKG/-/link/REPO")
          if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ]; then
            echo "  Linked $$PKG"
            return 0
          elif [ "$$STATUS" = "400" ]; then
            echo "  $$PKG already linked"
            return 0
          elif [ "$$STATUS" = "404" ] && [ $$attempt -lt 3 ]; then
            echo "  $$PKG not found yet, retrying in 5s (attempt $$attempt/3)..."
            sleep 5
          else
            echo "  FAILED: $$PKG status $$STATUS"
            cat /tmp/link-response.txt
            return 1
          fi
        done
      }
      link_package "image-name-1"
      link_package "image-name-2"
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - docker-build-image-1
    - docker-build-image-2
```
**Replace:** `GITEA_HOST`, `ORG`, `REPO`, and the `link_package` calls with actual image names.
**Note on `$$`:** Woodpecker uses `$$` to escape `$` in shell commands within YAML. Use `$$` for shell variables and `${CI_*}` (single `$`) for Woodpecker CI variables.
### Status Codes
| Code | Meaning | Action |
|------|---------|--------|
| 201 | Created | Success |
| 204 | No content | Success |
| 400 | Bad request | Already linked (OK) |
| 404 | Not found | Retry — package may not be indexed yet |
### Known Issue
The Gitea package linking API (added in Gitea 1.24.0) can return 404 for recently pushed packages. The retry logic with 5-second delays handles this. If linking still fails, packages are usable — they just won't appear on the repository Packages tab. They can be linked manually via the Gitea web UI.
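The retry-with-delay pattern used in the link step can be generalized. A hedged sketch of a reusable helper (the pipeline above inlines this logic instead of using a function like this):

```shell
#!/bin/sh
# Generic retry helper: run a command up to N times with a fixed delay
# between attempts. Returns the last failure if all attempts fail.
retry() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while :; do
    "$@" && return 0                         # command succeeded
    [ "$i" -ge "$attempts" ] && return 1     # out of attempts
    sleep "$delay"
    i=$((i + 1))
  done
}

# Illustrative usage (endpoint is an example, not a documented contract):
# retry 3 5 curl -sf "https://GITEA_HOST/api/v1/version"
```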
## Woodpecker Secrets
### Required Secrets
Configure these in the Woodpecker UI (Settings > Secrets) or via CLI:
| Secret Name | Value | Scope |
|-------------|-------|-------|
| `gitea_username` | Gitea username or service account | `push`, `manual`, `tag` |
| `gitea_token` | Gitea token with `package:write` scope | `push`, `manual`, `tag` |
### Required CI Variables (Non-Secret)
| Variable | Example | Purpose |
|----------|---------|---------|
| `RELEASE_BASE_VERSION` | `0.0.1` | Base milestone version used to generate RC tags (`v0.0.1-rc.N`) |
### Setting Secrets via CLI
```bash
# Woodpecker CLI
woodpecker secret add ORG/REPO --name gitea_username --value "USERNAME"
woodpecker secret add ORG/REPO --name gitea_token --value "TOKEN"
```
### Security Rules
- Never hardcode tokens in pipeline YAML
- Use `from_secret` for all credentials
- Limit secret event scope (don't expose on `pull_request` from forks)
- Use dedicated service accounts, not personal tokens
- Rotate tokens periodically
## npm Package Publishing
For projects with publishable npm packages (e.g., shared libraries, design systems).
### Publishing to Gitea npm Registry
Gitea includes a built-in npm registry at `https://GITEA_HOST/api/packages/ORG/npm/`.
**Pipeline step:**
```yaml
publish-packages:
  image: *node_image
  environment:
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - |
      echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
      echo "@SCOPE:registry=https://GITEA_HOST/api/packages/ORG/npm/" >> .npmrc
    - npm publish -w @SCOPE/package-name
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - build
```
**Replace:** `GITEA_HOST`, `ORG`, `SCOPE`, `package-name`.
### Why Gitea npm (not Verdaccio)
Gitea's built-in npm registry eliminates the need for a separate Verdaccio instance. Benefits:
- **Same auth** — Gitea token with `package:write` scope works for git, containers, AND npm
- **No extra service** — No Verdaccio container, no OAuth/Authentik integration, no separate compose stack
- **Same UI** — Packages appear alongside container images in Gitea's Packages tab
- **Same secrets** — `gitea_token` in Woodpecker handles both Docker push and npm publish
If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), migrate to Gitea npm. See the migration checklist below.
### Versioning
Only publish when the version in `package.json` has changed. Add a version check:
```yaml
commands:
  - |
    CURRENT=$(node -p "require('./src/PACKAGE/package.json').version")
    PUBLISHED=$(npm view @SCOPE/PACKAGE version 2>/dev/null || echo "0.0.0")
    if [ "$CURRENT" = "$PUBLISHED" ]; then
      echo "Version $CURRENT already published, skipping"
      exit 0
    fi
    echo "Publishing $CURRENT (was $PUBLISHED)"
    npm publish -w @SCOPE/PACKAGE
```
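The equality check above prevents republishing but would still publish a downgrade. A hedged sketch of a stricter gate using `sort -V` (version sort, available in GNU coreutils and recent busybox; it does not implement full semver pre-release ordering):

```shell
#!/bin/sh
# Sketch: strict "is newer" gate. Succeeds only when $1 sorts
# strictly after $2 under version sort.
is_newer() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Publishing would then be gated as:
#   is_newer "$CURRENT" "$PUBLISHED" && npm publish -w @SCOPE/PACKAGE
```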
## CI Services (Test Databases)
For projects that need a database during CI (migrations, integration tests):
```yaml
services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: test_db
      POSTGRES_USER: test_user
      POSTGRES_PASSWORD: test_password

steps:
  test:
    image: *node_image
    environment:
      DATABASE_URL: "postgresql://test_user:test_password@postgres:5432/test_db?schema=public"
    commands:
      - npm run test
    depends_on:
      - install
```
The service name (`postgres`) becomes the hostname within the pipeline network.
## Split Pipelines for Monorepos (REQUIRED)
For any monorepo with multiple packages/apps, use **split pipelines** — one YAML per package in `.woodpecker/`.
### Why Split?
| Aspect | Single pipeline | Split pipelines |
|--------|----------------|-----------------|
| Path filtering | None — everything rebuilds | Per-package — only affected code |
| Security scanning | Often missing | Required per-package |
| CI minutes | Wasted on unaffected packages | Efficient |
| Failure isolation | One failure blocks everything | Per-package failures isolated |
| Readability | One massive file | Focused, maintainable |
### Structure
```
.woodpecker/
├── api.yml # Only runs when apps/api/** changes
├── web.yml # Only runs when apps/web/** changes
└── (infra.yml) # Optional: shared infra (DB images, etc.)
```
**IMPORTANT:** Do NOT also have `.woodpecker.yml` at root — `.woodpecker/` directory takes precedence and the `.yml` file will be silently ignored.
### Path Filtering Template
```yaml
when:
  - event: [push, pull_request, manual]
    path:
      include: ['apps/api/**', '.woodpecker/api.yml']
```
Each pipeline self-triggers on its own YAML changes. Manual triggers run regardless of path.
### Kaniko Context Scoping
In split pipelines, scope the Kaniko context to the app directory:
```shell
/kaniko/executor --context apps/api --dockerfile apps/api/Dockerfile $$DESTINATIONS
```
With a scoped context, the Dockerfile's `COPY . .` copies only the app's files, not the entire monorepo.
### Reference: Telemetry Split Pipeline
See `~/src/mosaic-telemetry-monorepo/.woodpecker/api.yml` and `web.yml` for a complete working example with path filtering, security chain, and Trivy scanning.
## Security Scanning (REQUIRED)
Every pipeline MUST include security scanning. Docker build steps MUST gate on all security steps passing.
### Source-Level Security (per tech stack)
**Python:**
```yaml
security-bandit:
  image: *uv_image  # assumes a &uv_image anchor (Python image with uv) in variables
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run bandit -r src/ -f screen
  depends_on: [install]
security-audit:
  image: *uv_image
  commands:
    - |
      cd apps/api
      uv sync --all-extras --frozen
      uv run pip-audit
  depends_on: [install]
```
**Node.js:**
```yaml
security-audit:
  image: node:22-alpine
  commands:
    - cd apps/web && npm audit --audit-level=high
  depends_on: [install]
```
### Container Scanning (Trivy) — Post-Build
Run Trivy against every built image to catch OS-level and runtime vulnerabilities:
```yaml
security-trivy:
  image: aquasec/trivy:latest
  environment:
    GITEA_USER:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
    CI_COMMIT_SHA: ${CI_COMMIT_SHA}
  commands:
    - |
      mkdir -p ~/.docker
      echo "{\"auths\":{\"REGISTRY\":{\"username\":\"$$GITEA_USER\",\"password\":\"$$GITEA_TOKEN\"}}}" > ~/.docker/config.json
      trivy image --exit-code 1 --severity HIGH,CRITICAL --ignore-unfixed \
        REGISTRY/ORG/IMAGE:sha-$${CI_COMMIT_SHA:0:8}
  when:
    - branch: [main]
      event: [push, manual, tag]
  depends_on:
    - docker-build-SERVICE
```
**Replace:** `REGISTRY`, `ORG`, `IMAGE`, `SERVICE`.
### Full Dependency Chain
```
install → [lint, typecheck, security-source, security-deps, test] → docker-build → trivy → link-package
```
Docker build MUST depend on ALL quality + security steps. Trivy runs AFTER build. Package linking runs AFTER Trivy.
## Monorepo Considerations
### pnpm + Turbo
```yaml
variables:
  - &install_deps |
      corepack enable
      pnpm install --frozen-lockfile

steps:
  build:
    commands:
      - *install_deps
      - pnpm build  # Turbo handles dependency order and caching
```
### npm Workspaces
```yaml
variables:
  - &install_deps |
      corepack enable
      npm ci

steps:
  # Build shared dependencies first
  build-deps:
    commands:
      - npm run build -w @scope/shared-auth
      - npm run build -w @scope/shared-types
  # Then build everything
  build-all:
    commands:
      - npm run build -w @scope/package-1
      - npm run build -w @scope/package-2
      # ... in dependency order
    depends_on:
      - build-deps
```
### Per-Package Quality Checks
For large monorepos, run checks per-package in parallel:
```yaml
lint-api:
  commands:
    - npm run lint -w @scope/api
  depends_on: [install]
lint-web:
  commands:
    - npm run lint -w @scope/web
  depends_on: [install]
# These run in parallel since they share the same dependency
```
## Complete Pipeline Example
This is a minimal but complete pipeline for a project with two services:
```yaml
when:
  - event: [push, pull_request, manual]

variables:
  - &node_image "node:20-alpine"
  - &install_deps |
      corepack enable
      npm ci
  - &kaniko_setup |
      mkdir -p /kaniko/.docker
      echo "{\"auths\":{\"git.example.com\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json

steps:
  # === Quality Gates ===
  install:
    image: *node_image
    commands:
      - *install_deps
  lint:
    image: *node_image
    commands:
      - npm run lint
    depends_on: [install]
  test:
    image: *node_image
    commands:
      - npm run test
    depends_on: [install]
  build:
    image: *node_image
    environment:
      NODE_ENV: "production"
    commands:
      - npm run build
    depends_on: [lint, test]

  # === Docker Build & Push ===
  docker-build-api:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      RELEASE_BASE_VERSION: ${RELEASE_BASE_VERSION}
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
      CI_PIPELINE_NUMBER: ${CI_PIPELINE_NUMBER}
    commands:
      - *kaniko_setup
      - |
        SHORT_SHA="${CI_COMMIT_SHA:0:8}"
        BUILD_ID="${CI_PIPELINE_NUMBER:-$SHORT_SHA}"
        BASE_VERSION="${RELEASE_BASE_VERSION:?RELEASE_BASE_VERSION is required}"
        DESTINATIONS="--destination git.example.com/org/api:sha-$SHORT_SHA"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:v${BASE_VERSION}-rc.${BUILD_ID}"
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:testing"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/api:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/api/Dockerfile $DESTINATIONS
    when:
      - branch: [main]
        event: [push, manual, tag]
    depends_on: [build]
  docker-build-web:
    image: gcr.io/kaniko-project/executor:debug
    environment:
      GITEA_USER:
        from_secret: gitea_username
      GITEA_TOKEN:
        from_secret: gitea_token
      RELEASE_BASE_VERSION: ${RELEASE_BASE_VERSION}
      CI_COMMIT_BRANCH: ${CI_COMMIT_BRANCH}
      CI_COMMIT_TAG: ${CI_COMMIT_TAG}
      CI_COMMIT_SHA: ${CI_COMMIT_SHA}
      CI_PIPELINE_NUMBER: ${CI_PIPELINE_NUMBER}
    commands:
      - *kaniko_setup
      - |
        SHORT_SHA="${CI_COMMIT_SHA:0:8}"
        BUILD_ID="${CI_PIPELINE_NUMBER:-$SHORT_SHA}"
        BASE_VERSION="${RELEASE_BASE_VERSION:?RELEASE_BASE_VERSION is required}"
        DESTINATIONS="--destination git.example.com/org/web:sha-$SHORT_SHA"
        if [ "$CI_COMMIT_BRANCH" = "main" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:v${BASE_VERSION}-rc.${BUILD_ID}"
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:testing"
        fi
        if [ -n "$CI_COMMIT_TAG" ]; then
          DESTINATIONS="$DESTINATIONS --destination git.example.com/org/web:$CI_COMMIT_TAG"
        fi
        /kaniko/executor --context . --dockerfile src/web/Dockerfile $DESTINATIONS
    when:
      - branch: [main]
        event: [push, manual, tag]
    depends_on: [build]

  # === Package Linking ===
  link-packages:
    image: alpine:3
    environment:
      GITEA_TOKEN:
        from_secret: gitea_token
    commands:
      - apk add --no-cache curl
      - sleep 10
      - |
        set -e
        link_package() {
          PKG="$$1"
          for attempt in 1 2 3; do
            STATUS=$$(curl -s -o /dev/null -w "%{http_code}" -X POST \
              -H "Authorization: token $$GITEA_TOKEN" \
              "https://git.example.com/api/v1/packages/org/container/$$PKG/-/link/repo")
            if [ "$$STATUS" = "201" ] || [ "$$STATUS" = "204" ] || [ "$$STATUS" = "400" ]; then
              echo "Linked $$PKG ($$STATUS)"
              return 0
            elif [ $$attempt -lt 3 ]; then
              sleep 5
            else
              echo "FAILED: $$PKG ($$STATUS)"
              return 1
            fi
          done
        }
        link_package "api"
        link_package "web"
    when:
      - branch: [main]
        event: [push, manual, tag]
    depends_on:
      - docker-build-api
      - docker-build-web
```
## Checklist: Adding CI/CD to a Project
1. **Verify Dockerfiles exist** for each service that needs an image
2. **Create Woodpecker secrets** (`gitea_username`, `gitea_token`) in the Woodpecker UI
3. **Verify Gitea token scope** includes `package:write`
4. **Add Docker build steps** to `.woodpecker.yml` using the Kaniko template above
5. **Add package linking step** after all Docker builds
6. **Update `docker-compose.yml`** to reference registry images instead of local builds:
```yaml
image: git.example.com/org/service@${IMAGE_DIGEST}
```
7. **Test on a short-lived non-main branch first** — open a PR and verify quality gates before merging to `main`
8. **Verify images appear** in Gitea Packages tab after successful pipeline
## Post-Merge CI Monitoring (Hard Rule)
For source-code delivery, completion is not allowed at "PR opened" stage.
Required sequence:
1. Merge PR to `main` (squash) via Mosaic wrapper.
2. Monitor CI to terminal status:
```bash
~/.config/mosaic/rails/git/pr-ci-wait.sh -n <PR_NUMBER>
```
3. Require green status before claiming completion.
4. If CI fails, create remediation task(s) and continue until green.
5. If monitoring command fails, report blocker with the exact failed wrapper command and stop.
Woodpecker note:
- In Gitea + Woodpecker environments, commit status contexts generally reflect Woodpecker pipeline results.
- Always include CI run/status evidence in completion report.
## Queue Guard Before Push/Merge (Hard Rule)
Before pushing a branch or merging a PR, guard against overlapping project pipelines:
```bash
~/.config/mosaic/rails/git/ci-queue-wait.sh --purpose push -B main
~/.config/mosaic/rails/git/ci-queue-wait.sh --purpose merge -B main
```
Behavior:
- If pipeline state is running/queued/pending, wait until queue clears.
- If timeout or API/auth failure occurs, treat as `blocked`, report exact failed wrapper command, and stop.
## Gitea as Unified Platform
Gitea provides **multiple services in one**, eliminating the need for separate registry platforms:
| Service | What Gitea Replaces | Registry URL |
|---------|---------------------|-------------|
| **Git hosting** | GitHub/GitLab | `https://GITEA_HOST/org/repo` |
| **Container registry** | Harbor, Docker Hub | `docker pull GITEA_HOST/org/image:tag` |
| **npm registry** | Verdaccio, Artifactory | `https://GITEA_HOST/api/packages/org/npm/` |
| **PyPI registry** | Private PyPI/Artifactory | `https://GITEA_HOST/api/packages/org/pypi` |
| **Maven registry** | Nexus, Artifactory | `https://GITEA_HOST/api/packages/org/maven` |
| **NuGet registry** | Azure Artifacts, Artifactory | `https://GITEA_HOST/api/packages/org/nuget/index.json` |
| **Cargo registry** | crates.io mirrors, Artifactory | `https://GITEA_HOST/api/packages/org/cargo` |
| **Composer registry** | Private Packagist, Artifactory | `https://GITEA_HOST/api/packages/org/composer` |
| **Conan registry** | Artifactory Conan | `https://GITEA_HOST/api/packages/org/conan` |
| **Conda registry** | Anaconda Server, Artifactory | `https://GITEA_HOST/api/packages/org/conda` |
| **Generic registry** | Generic binary stores | `https://GITEA_HOST/api/packages/org/generic` |
### Single Token, Multiple Services
A Gitea token with `package:write` scope handles:
- `git push` / `git pull`
- `docker push` / `docker pull` (container registry)
- `npm publish` / `npm install` (npm registry)
- `twine upload` / `pip install` (PyPI registry)
- package operations for Maven/NuGet/Cargo/Composer/Conan/Conda/Generic registries
This means a single `gitea_token` secret in Woodpecker CI covers all CI/CD package operations.
## Python Packages on Gitea PyPI
For Python libraries and internal packages, use Gitea's built-in PyPI registry.
### Publish (Local or CI)
```bash
python -m pip install --upgrade build twine
python -m build
python -m twine upload \
--repository-url "https://GITEA_HOST/api/packages/ORG/pypi" \
--username "$GITEA_USERNAME" \
--password "$GITEA_TOKEN" \
dist/*
```
### Install (Consumer Projects)
```bash
pip install \
--extra-index-url "https://$GITEA_USERNAME:$GITEA_TOKEN@GITEA_HOST/api/packages/ORG/pypi/simple" \
your-package-name
```
### Woodpecker Step (Python Publish)
```yaml
publish-python-package:
  image: python:3.12-slim
  environment:
    GITEA_USERNAME:
      from_secret: gitea_username
    GITEA_TOKEN:
      from_secret: gitea_token
  commands:
    - python -m pip install --upgrade build twine
    - python -m build
    - python -m twine upload --repository-url https://GITEA_HOST/api/packages/ORG/pypi --username "$$GITEA_USERNAME" --password "$$GITEA_TOKEN" dist/*
  when:
    - branch: [main]
      event: [push]
```
### Architecture Simplification
**Before (4 services):**
```
Gitea (git)        + Harbor (containers)   + Verdaccio (npm)      + Private PyPI
↓ separate auth      ↓ separate auth         ↓ extra auth           ↓ extra auth
multiple tokens      robot/service users     npm-specific token     pip/twine token
fragmented access    fragmented RBAC         fragmented RBAC        fragmented RBAC
```
**After (1 service):**
```
Gitea (git + containers + npm + pypi)
↓ unified secrets
1 credentials model in CI
1 backup target
unified RBAC via Gitea teams
```
## Migrating from Verdaccio to Gitea npm
If a project currently uses Verdaccio (e.g., U-Connect at `npm.uscllc.net`), follow this migration checklist:
### Migration Steps
1. **Verify Gitea npm registry is accessible:**
```bash
curl -s https://GITEA_HOST/api/packages/ORG/npm/ | head -5
```
2. **Update `.npmrc` in project root:**
```ini
# Before (Verdaccio)
@uconnect:registry=https://npm.uscllc.net
# After (Gitea)
@uconnect:registry=https://git.uscllc.com/api/packages/usc/npm/
```
3. **Update CI pipeline** — replace `npm_token` secret with `gitea_token`:
```shell
# Uses same token as Docker push — no extra secret needed
echo "//GITEA_HOST/api/packages/ORG/npm/:_authToken=$$GITEA_TOKEN" > .npmrc
```
4. **Re-publish existing packages** to Gitea registry:
```bash
# For each @scope/package
npm publish -w @scope/package --registry https://GITEA_HOST/api/packages/ORG/npm/
```
5. **Update consumer projects** — any project that `npm install`s from the old registry needs its `.npmrc` updated
6. **Remove Verdaccio infrastructure:**
- Docker compose stack (`compose.verdaccio.yml`)
- Authentik OAuth provider/blueprints
- Verdaccio config files
- DNS entry for `npm.uscllc.net` (eventually)
### What You Can Remove
| Component | Location | Purpose (was) |
|-----------|----------|---------------|
| Verdaccio compose | `compose.verdaccio.yml` | npm registry container |
| Verdaccio config | `config/verdaccio/` | Server configuration |
| Authentik blueprints | `config/authentik/blueprints/*/verdaccio-*` | OAuth integration |
| Verdaccio scripts | `scripts/verdaccio/` | Blueprint application |
| OIDC env vars | `.env` | `AUTHENTIK_VERDACCIO_*`, `VERDACCIO_OPENID_*` |
## Troubleshooting
### "unauthorized: authentication required"
- Verify `gitea_username` and `gitea_token` secrets are set in Woodpecker
- Verify the token has `package:write` scope
- Check the registry hostname in `kaniko_setup` matches the Gitea instance
### Kaniko build fails with "error building image"
- Verify the Dockerfile path is correct relative to `--context`
- Check that multi-stage builds don't reference stages that don't exist
- Run `docker build` locally first to verify the Dockerfile works
### Package linking returns 404
- Normal for recently pushed packages — the retry logic handles this
- If persistent: verify the package name matches exactly (case-sensitive)
- Check Gitea version is 1.24.0+ (package linking API requirement)
### Images not visible in Gitea Packages
- Linking may have failed — check the `link-packages` step logs
- Images are still usable via `docker pull` even without linking
- Link manually: Gitea UI > Packages > Select package > Link to repository
### Pipeline runs Docker builds on pull requests
- Verify `when` clause on Docker build steps restricts to `branch: [main]`
- Pull requests should only run quality gates, not build/push images