generalize runtime ownership with doctor and local skill migration

README.md (14 lines changed)
@@ -21,6 +21,7 @@ bash ~/src/mosaic-bootstrap/install.sh
 - Runtime overlays: `~/.mosaic/runtime/`
 - Shared wrapper commands: `~/.mosaic/bin/`
 - Canonical skills directory: `~/.mosaic/skills`
+- Local cross-runtime skills: `~/.mosaic/skills-local`

 ## Universal Skills

@@ -38,6 +39,8 @@ Then links each skill into runtime directories:
 - `~/.codex/skills`
 - `~/.config/opencode/skills`

+Local skills under `~/.mosaic/skills-local` are also linked into runtimes.
+
 Manual commands:

 ```bash
@@ -50,15 +53,17 @@ Manual commands:
 Installer also links Claude-compatible paths back to Mosaic canonicals:

 - `~/.claude/agent-guides` -> `~/.mosaic/guides`
-- `~/.claude/scripts/{git,codex,bootstrap,cicd}` -> `~/.mosaic/rails/...`
+- `~/.claude/scripts/{git,codex,bootstrap,cicd,portainer}` -> `~/.mosaic/rails/...`
 - `~/.claude/templates` -> `~/.mosaic/templates/agent`
 - `~/.claude/presets/{domains,tech-stacks,workflows}` -> `~/.mosaic/profiles/...`
 - `~/.claude/presets/*.json` runtime overlays -> `~/.mosaic/runtime/claude/settings-overlays/`
+- `~/.claude/{CLAUDE.md,settings.json,hooks-config.json,context7-integration.md}` -> `~/.mosaic/runtime/claude/...`

 Run manually:

 ```bash
 ~/.mosaic/bin/mosaic-link-runtime-assets
+~/.mosaic/bin/mosaic-migrate-local-skills --apply
 ```

 Prune migrated legacy backups from runtime folders (dry-run by default):
@@ -68,6 +73,13 @@ Prune migrated legacy backups from runtime folders (dry-run by default):
 ~/.mosaic/bin/mosaic-prune-legacy-runtime --runtime claude --apply
 ```

+Audit runtime drift:
+
+```bash
+~/.mosaic/bin/mosaic-doctor
+~/.mosaic/bin/mosaic-doctor --fail-on-warn
+```
+
 Opt-out during install:

 ```bash
@@ -53,3 +53,4 @@ Runtime-compatible guides and rails are hosted at:
 - `~/.mosaic/rails/`
 - `~/.mosaic/profiles/` (runtime-neutral domain/workflow/stack presets)
 - `~/.mosaic/runtime/` (runtime-specific overlays)
+- `~/.mosaic/skills-local/` (local private skills shared across runtimes)
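The Claude-compatible paths above are plain symlinks from runtime locations back to Mosaic canonicals. A minimal sketch of the pattern in a throwaway directory (the paths here are stand-ins, not the real installer):

```shell
# Demonstrate the canonical -> runtime-compat link pattern in a temp sandbox.
sandbox="$(mktemp -d)"
mkdir -p "$sandbox/mosaic/guides" "$sandbox/claude"
echo "style guide" > "$sandbox/mosaic/guides/STYLE.md"

# Link the runtime-compat path back to the canonical directory.
ln -s "$sandbox/mosaic/guides" "$sandbox/claude/agent-guides"

# Reads through the link resolve to the canonical file.
cat "$sandbox/claude/agent-guides/STYLE.md"
rm -rf "$sandbox"
```

Because the runtime side is only a link, edits made through either path land in the single canonical copy.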
bin/mosaic-doctor (new executable file, 201 lines)
@@ -0,0 +1,201 @@
#!/usr/bin/env bash
set -euo pipefail

MOSAIC_HOME="${MOSAIC_HOME:-$HOME/.mosaic}"
FAIL_ON_WARN=0
VERBOSE=0

usage() {
  cat <<USAGE
Usage: $(basename "$0") [options]

Audit Mosaic runtime linkage and detect drift across agent runtimes.

Options:
  --fail-on-warn   Exit non-zero when warnings are found
  --verbose        Print pass checks too
  -h, --help       Show help
USAGE
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --fail-on-warn)
      FAIL_ON_WARN=1
      shift
      ;;
    --verbose)
      VERBOSE=1
      shift
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "Unknown argument: $1" >&2
      usage >&2
      exit 1
      ;;
  esac
done

warn_count=0

warn() {
  warn_count=$((warn_count + 1))
  echo "[WARN] $*"
}

pass() {
  if [[ $VERBOSE -eq 1 ]]; then
    echo "[OK] $*"
  fi
}

expect_dir() {
  local d="$1"
  if [[ ! -d "$d" ]]; then
    warn "Missing directory: $d"
  else
    pass "Directory present: $d"
  fi
}

expect_file() {
  local f="$1"
  if [[ ! -f "$f" ]]; then
    warn "Missing file: $f"
  else
    pass "File present: $f"
  fi
}

check_tree_links() {
  local src_root="$1"
  local dst_root="$2"

  # return 0 so a missing source tree does not trip `set -e` at the call site
  [[ -d "$src_root" ]] || return 0

  while IFS= read -r -d '' src; do
    local rel dst
    rel="${src#$src_root/}"
    dst="$dst_root/$rel"

    if [[ ! -L "$dst" ]]; then
      warn "Not symlinked: $dst (expected -> $src)"
      continue
    fi

    local dst_real src_real
    dst_real="$(readlink -f "$dst" 2>/dev/null || true)"
    src_real="$(readlink -f "$src" 2>/dev/null || true)"

    if [[ -z "$dst_real" || -z "$src_real" || "$dst_real" != "$src_real" ]]; then
      warn "Drifted link: $dst (expected -> $src)"
    else
      pass "Linked: $dst"
    fi
  done < <(find "$src_root" -type f -print0)
}

echo "[mosaic-doctor] Mosaic home: $MOSAIC_HOME"

# Canonical Mosaic checks
expect_file "$MOSAIC_HOME/STANDARDS.md"
expect_dir "$MOSAIC_HOME/guides"
expect_dir "$MOSAIC_HOME/rails"
expect_dir "$MOSAIC_HOME/profiles"
expect_dir "$MOSAIC_HOME/templates/agent"
expect_dir "$MOSAIC_HOME/skills"
expect_dir "$MOSAIC_HOME/skills-local"
expect_file "$MOSAIC_HOME/bin/mosaic-link-runtime-assets"
expect_file "$MOSAIC_HOME/bin/mosaic-sync-skills"

# Claude runtime checks
check_tree_links "$MOSAIC_HOME/guides" "$HOME/.claude/agent-guides"
check_tree_links "$MOSAIC_HOME/rails/git" "$HOME/.claude/scripts/git"
check_tree_links "$MOSAIC_HOME/rails/codex" "$HOME/.claude/scripts/codex"
check_tree_links "$MOSAIC_HOME/rails/bootstrap" "$HOME/.claude/scripts/bootstrap"
check_tree_links "$MOSAIC_HOME/rails/cicd" "$HOME/.claude/scripts/cicd"
check_tree_links "$MOSAIC_HOME/rails/portainer" "$HOME/.claude/scripts/portainer"
check_tree_links "$MOSAIC_HOME/templates/agent" "$HOME/.claude/templates"
check_tree_links "$MOSAIC_HOME/profiles/domains" "$HOME/.claude/presets/domains"
check_tree_links "$MOSAIC_HOME/profiles/tech-stacks" "$HOME/.claude/presets/tech-stacks"
check_tree_links "$MOSAIC_HOME/profiles/workflows" "$HOME/.claude/presets/workflows"
check_tree_links "$MOSAIC_HOME/runtime/claude/settings-overlays" "$HOME/.claude/presets"

for rf in CLAUDE.md settings.json hooks-config.json context7-integration.md; do
  src="$MOSAIC_HOME/runtime/claude/$rf"
  dst="$HOME/.claude/$rf"
  [[ -f "$src" ]] || continue
  if [[ ! -L "$dst" ]]; then
    warn "Not symlinked: $dst (expected -> $src)"
    continue
  fi
  dst_real="$(readlink -f "$dst" 2>/dev/null || true)"
  src_real="$(readlink -f "$src" 2>/dev/null || true)"
  if [[ -z "$dst_real" || -z "$src_real" || "$dst_real" != "$src_real" ]]; then
    warn "Drifted link: $dst (expected -> $src)"
  else
    pass "Linked: $dst"
  fi
done

# Skills runtime checks
for runtime_skills in "$HOME/.claude/skills" "$HOME/.codex/skills" "$HOME/.config/opencode/skills"; do
  [[ -d "$runtime_skills" ]] || continue

  while IFS= read -r -d '' skill; do
    name="$(basename "$skill")"
    [[ "$name" == .* ]] && continue
    target="$runtime_skills/$name"

    if [[ ! -e "$target" ]]; then
      warn "Missing skill link: $target"
      continue
    fi

    if [[ ! -L "$target" ]]; then
      # Runtime-specific local skills are allowed only for hidden/system entries.
      warn "Non-symlink skill entry: $target"
      continue
    fi

    target_real="$(readlink -f "$target" 2>/dev/null || true)"
    skill_real="$(readlink -f "$skill" 2>/dev/null || true)"
    if [[ -z "$target_real" || -z "$skill_real" || "$target_real" != "$skill_real" ]]; then
      warn "Drifted skill link: $target (expected -> $skill)"
    else
      pass "Linked skill: $target"
    fi
  done < <(find "$MOSAIC_HOME/skills" "$MOSAIC_HOME/skills-local" -mindepth 1 -maxdepth 1 -type d -print0)
done

link_roots=(
  "$HOME/.claude/agent-guides"
  "$HOME/.claude/scripts"
  "$HOME/.claude/templates"
  "$HOME/.claude/presets"
  "$HOME/.claude/skills"
  "$HOME/.codex/skills"
  "$HOME/.config/opencode/skills"
)

existing_link_roots=()
for d in "${link_roots[@]}"; do
  [[ -e "$d" ]] && existing_link_roots+=("$d")
done

broken_links=0
if [[ ${#existing_link_roots[@]} -gt 0 ]]; then
  broken_links=$(find "${existing_link_roots[@]}" -xtype l 2>/dev/null | wc -l | tr -d ' ')
fi
if [[ "$broken_links" != "0" ]]; then
  warn "Broken symlinks detected across runtimes: $broken_links"
fi

echo "[mosaic-doctor] warnings=$warn_count"
if [[ $FAIL_ON_WARN -eq 1 && $warn_count -gt 0 ]]; then
  exit 1
fi
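The drift test at the core of `check_tree_links` reduces to a `readlink -f` comparison: a link has drifted when it no longer resolves to the same canonical path as its expected source. A self-contained sketch of that check (temp paths are illustrative):

```shell
# A link "drifts" when it no longer resolves to the expected source.
tmp="$(mktemp -d)"
echo data > "$tmp/src"
ln -s "$tmp/src" "$tmp/good"
ln -s "$tmp/elsewhere" "$tmp/bad"   # points somewhere other than src

src_real="$(readlink -f "$tmp/src" 2>/dev/null || true)"
for link in "$tmp/good" "$tmp/bad"; do
  link_real="$(readlink -f "$link" 2>/dev/null || true)"
  if [[ -z "$link_real" || "$link_real" != "$src_real" ]]; then
    echo "DRIFT: $link"
  else
    echo "OK: $link"
  fi
done
rm -rf "$tmp"
```

Resolving both sides with `readlink -f` also makes the comparison robust to chains of links and relative link targets.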
@@ -48,12 +48,23 @@ link_tree_files "$MOSAIC_HOME/rails/git" "$HOME/.claude/scripts/git"
 link_tree_files "$MOSAIC_HOME/rails/codex" "$HOME/.claude/scripts/codex"
 link_tree_files "$MOSAIC_HOME/rails/bootstrap" "$HOME/.claude/scripts/bootstrap"
 link_tree_files "$MOSAIC_HOME/rails/cicd" "$HOME/.claude/scripts/cicd"
+link_tree_files "$MOSAIC_HOME/rails/portainer" "$HOME/.claude/scripts/portainer"
 link_tree_files "$MOSAIC_HOME/templates/agent" "$HOME/.claude/templates"
 link_tree_files "$MOSAIC_HOME/profiles/domains" "$HOME/.claude/presets/domains"
 link_tree_files "$MOSAIC_HOME/profiles/tech-stacks" "$HOME/.claude/presets/tech-stacks"
 link_tree_files "$MOSAIC_HOME/profiles/workflows" "$HOME/.claude/presets/workflows"
 link_tree_files "$MOSAIC_HOME/runtime/claude/settings-overlays" "$HOME/.claude/presets"
+
+for runtime_file in \
+  CLAUDE.md \
+  settings.json \
+  hooks-config.json \
+  context7-integration.md; do
+  src="$MOSAIC_HOME/runtime/claude/$runtime_file"
+  [[ -f "$src" ]] || continue
+  link_file "$src" "$HOME/.claude/$runtime_file"
+done

 for qa_script in \
   debug-hook.sh \
   qa-hook-handler.sh \
bin/mosaic-migrate-local-skills (new executable file, 87 lines)
@@ -0,0 +1,87 @@
#!/usr/bin/env bash
set -euo pipefail

MOSAIC_HOME="${MOSAIC_HOME:-$HOME/.mosaic}"
APPLY=0

usage() {
  cat <<USAGE
Usage: $(basename "$0") [--apply]

Migrate runtime-local skill directories (e.g. ~/.claude/skills/jarvis) to Mosaic-managed
skills by replacing local directories with symlinks to ~/.mosaic/skills-local.

Default mode is dry-run.
USAGE
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    --apply)
      APPLY=1
      shift
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      echo "Unknown argument: $1" >&2
      usage >&2
      exit 1
      ;;
  esac
done

skill_roots=(
  "$HOME/.claude/skills"
  "$HOME/.codex/skills"
  "$HOME/.config/opencode/skills"
)

if [[ ! -d "$MOSAIC_HOME/skills-local" ]]; then
  echo "[mosaic-local-skills] Missing local skills dir: $MOSAIC_HOME/skills-local" >&2
  exit 1
fi

count=0

while IFS= read -r -d '' local_skill; do
  name="$(basename "$local_skill")"
  src="$MOSAIC_HOME/skills-local/$name"
  [[ -d "$src" ]] || continue

  for root in "${skill_roots[@]}"; do
    [[ -d "$root" ]] || continue
    target="$root/$name"

    # Already linked correctly.
    if [[ -L "$target" ]]; then
      target_real="$(readlink -f "$target" 2>/dev/null || true)"
      src_real="$(readlink -f "$src" 2>/dev/null || true)"
      if [[ -n "$target_real" && -n "$src_real" && "$target_real" == "$src_real" ]]; then
        continue
      fi
    fi

    # Only migrate local directories containing SKILL.md.
    if [[ -d "$target" && -f "$target/SKILL.md" && ! -L "$target" ]]; then
      count=$((count + 1))
      if [[ $APPLY -eq 1 ]]; then
        stamp="$(date +%Y%m%d%H%M%S)"
        mv "$target" "${target}.mosaic-bak-${stamp}"
        ln -s "$src" "$target"
        echo "[mosaic-local-skills] migrated: $target -> $src"
      else
        echo "[mosaic-local-skills] would migrate: $target -> $src"
      fi
    fi
  done
done < <(find "$MOSAIC_HOME/skills-local" -mindepth 1 -maxdepth 1 -type d -print0)

if [[ $APPLY -eq 1 ]]; then
  echo "[mosaic-local-skills] complete: migrated=$count"
else
  echo "[mosaic-local-skills] dry-run: migratable=$count"
  echo "[mosaic-local-skills] re-run with --apply to migrate"
fi
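The migration step itself is a rename-to-backup followed by a symlink, so the local copy is never destroyed. Sketched in isolation (the `jarvis` skill name and temp layout are stand-ins):

```shell
# Simulate migrating one runtime-local skill dir to a shared source.
tmp="$(mktemp -d)"
mkdir -p "$tmp/skills-local/jarvis" "$tmp/claude-skills/jarvis"
echo "# skill" > "$tmp/claude-skills/jarvis/SKILL.md"

target="$tmp/claude-skills/jarvis"
src="$tmp/skills-local/jarvis"
stamp="$(date +%Y%m%d%H%M%S)"

mv "$target" "${target}.mosaic-bak-${stamp}"   # keep a backup of the local copy
ln -s "$src" "$target"                         # replace it with a shared link

[[ -L "$target" && -d "${target}.mosaic-bak-${stamp}" ]] && echo migrated
rm -rf "$tmp"
```

The timestamped `.mosaic-bak-*` suffix is what the prune tool later matches when cleaning up.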
@@ -80,12 +80,12 @@ while IFS= read -r -d '' bak; do

   count_deletable=$((count_deletable + 1))
   if [[ $APPLY -eq 1 ]]; then
-    rm -f "$bak"
+    rm -rf "$bak"
     echo "[mosaic-prune] deleted: $bak"
   else
     echo "[mosaic-prune] would delete: $bak"
   fi
-done < <(find "$TARGET_ROOT" -type f -name '*.mosaic-bak-*' -print0)
+done < <(find "$TARGET_ROOT" \( -type f -o -type d \) -name '*.mosaic-bak-*' -print0)

 if [[ $APPLY -eq 1 ]]; then
   echo "[mosaic-prune] complete: deleted=$count_deletable candidates=$count_candidates runtime=$RUNTIME"
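The widened `find` predicate needs the explicit `\( … \)` grouping: `-o` binds looser than the implicit `-a`, so without parentheses the `-name` filter would apply only to the `-type d` branch. A quick demonstration with scratch files:

```shell
# One matching file, one matching directory, one non-matching file.
tmp="$(mktemp -d)"
touch "$tmp/a.mosaic-bak-1" "$tmp/plain.txt"
mkdir "$tmp/b.mosaic-bak-2"

# Grouping makes the name filter apply to files AND directories.
find "$tmp" \( -type f -o -type d \) -name '*.mosaic-bak-*' | sort
rm -rf "$tmp"
```

This prints both backup entries and skips `plain.txt`, matching the pruner's intent now that skill backups can be directories.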
@@ -5,6 +5,7 @@ MOSAIC_HOME="${MOSAIC_HOME:-$HOME/.mosaic}"
 SKILLS_REPO_URL="${MOSAIC_SKILLS_REPO_URL:-https://git.mosaicstack.dev/mosaic/agent-skills.git}"
 SKILLS_REPO_DIR="${MOSAIC_SKILLS_REPO_DIR:-$MOSAIC_HOME/sources/agent-skills}"
 MOSAIC_SKILLS_DIR="$MOSAIC_HOME/skills"
+MOSAIC_LOCAL_SKILLS_DIR="$MOSAIC_HOME/skills-local"

 fetch=1
 link_only=0
@@ -13,10 +14,10 @@ usage() {
   cat <<USAGE
 Usage: $(basename "$0") [options]

-Sync canonical skills into ~/.mosaic/skills and link them into runtime skill directories.
+Sync canonical skills into ~/.mosaic/skills and link all Mosaic skills into runtime skill directories.

 Options:
-  --link-only   Skip git clone/pull and only relink from ~/.mosaic/skills
+  --link-only   Skip git clone/pull and only relink from ~/.mosaic/{skills,skills-local}
   --no-link     Sync canonical skills but do not update runtime links
   -h, --help    Show help

@@ -49,7 +50,7 @@ while [[ $# -gt 0 ]]; do
   esac
 done

-mkdir -p "$MOSAIC_HOME" "$MOSAIC_SKILLS_DIR"
+mkdir -p "$MOSAIC_HOME" "$MOSAIC_SKILLS_DIR" "$MOSAIC_LOCAL_SKILLS_DIR"

 if [[ $fetch -eq 1 ]]; then
   if [[ -d "$SKILLS_REPO_DIR/.git" ]]; then
@@ -134,6 +135,12 @@ for target in "${link_targets[@]}"; do
     link_skill_into_target "$skill" "$target"
   done < <(find "$MOSAIC_SKILLS_DIR" -mindepth 1 -maxdepth 1 -type d -print0)

+  if [[ -d "$MOSAIC_LOCAL_SKILLS_DIR" ]]; then
+    while IFS= read -r -d '' skill; do
+      link_skill_into_target "$skill" "$target"
+    done < <(find "$MOSAIC_LOCAL_SKILLS_DIR" -mindepth 1 -maxdepth 1 -type d -print0)
+  fi
+
   echo "[mosaic-skills] Linked skills into: $target"
 done
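The `while IFS= read -r -d '' … done < <(find … -print0)` idiom used throughout these scripts is the safe way to iterate paths that may contain spaces or newlines. A minimal bash sketch of the enumeration step (temp directory names are illustrative):

```shell
tmp="$(mktemp -d)"
mkdir "$tmp/skill one" "$tmp/skill two"

# NUL-delimited enumeration of immediate subdirectories.
count=0
while IFS= read -r -d '' d; do
  count=$((count + 1))
done < <(find "$tmp" -mindepth 1 -maxdepth 1 -type d -print0)

echo "found=$count"
rm -rf "$tmp"
```

`-mindepth 1 -maxdepth 1` restricts the walk to one level, which is why each top-level skill directory is treated as a unit.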
install.sh (10 lines changed)
@@ -32,4 +32,14 @@ else
   fi
 fi

+echo "[mosaic-install] Migrating runtime-local skills to Mosaic links"
+if ! "$TARGET_DIR/bin/mosaic-migrate-local-skills" --apply; then
+  echo "[mosaic-install] WARNING: local skill migration failed (framework install still complete)" >&2
+fi
+
+echo "[mosaic-install] Running health audit"
+if ! "$TARGET_DIR/bin/mosaic-doctor"; then
+  echo "[mosaic-install] WARNING: doctor reported issues (run ~/.mosaic/bin/mosaic-doctor --fail-on-warn)" >&2
+fi
+
 echo "[mosaic-install] Add to PATH: export PATH=\"$TARGET_DIR/bin:$PATH\""
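Assuming the installer runs with `set -e` like the other scripts in this commit, the `if ! cmd; then …` wrapper is what downgrades a failing migration or audit to a warning instead of aborting the whole install. The pattern in isolation (`flaky_step` is a hypothetical stand-in):

```shell
set -e

flaky_step() { return 1; }   # stand-in for a step that may fail

# Guarding with `if !` keeps a non-zero exit from triggering `set -e`.
if ! flaky_step; then
  echo "WARNING: step failed, continuing" >&2
fi
echo "install finished"
```

An unguarded `flaky_step` call at that point would terminate the script before the PATH hint is printed.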
rails/portainer/README.md (new file, 210 lines)
@@ -0,0 +1,210 @@
# Portainer CLI Scripts

CLI tools for managing Portainer stacks via the API.

## Setup

### Environment Variables

Set these environment variables before using the scripts:

```bash
export PORTAINER_URL="https://portainer.example.com:9443"
export PORTAINER_API_KEY="your-api-key-here"
```

You can add these to your shell profile (`~/.bashrc`, `~/.zshrc`) or use a `.env` file.

### Creating an API Key

1. Log in to Portainer
2. Click your username in the top right corner > "My account"
3. Scroll to the "Access tokens" section
4. Click "Add access token"
5. Enter a descriptive name (e.g., "CLI scripts")
6. Copy the token immediately (you cannot view it again)

### Dependencies

- `curl` - HTTP client
- `jq` - JSON processor

`curl` is typically pre-installed; both are available from every major Linux package manager.
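A small preflight check for those dependencies can save a confusing mid-request failure. The `require_cmds` helper below is our own sketch, not part of the shipped scripts:

```shell
# Fail fast if a required CLI tool is missing from PATH.
require_cmds() {
  local missing=0
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "Error: required command not found: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

require_cmds curl jq && echo "dependencies OK"
```

`command -v` is the portable way to test for a tool; checking all arguments before returning reports every missing dependency at once.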
## Scripts

### stack-list.sh

List all Portainer stacks.

```bash
# List all stacks in table format
stack-list.sh

# List stacks in JSON format
stack-list.sh -f json

# List only stack names
stack-list.sh -f names
stack-list.sh -q

# Filter by endpoint ID
stack-list.sh -e 1
```

### stack-status.sh

Show status and containers for a stack.

```bash
# Show stack status
stack-status.sh -n mystack

# Show status in JSON format
stack-status.sh -n mystack -f json

# Use stack ID instead of name
stack-status.sh -i 5
```

### stack-redeploy.sh

Redeploy a stack. For git-based stacks, this pulls the latest from the repository.

```bash
# Redeploy a stack by name
stack-redeploy.sh -n mystack

# Redeploy and pull latest images
stack-redeploy.sh -n mystack -p

# Redeploy by stack ID
stack-redeploy.sh -i 5 -p
```

### stack-logs.sh

View logs for stack services/containers.

```bash
# List available services in a stack
stack-logs.sh -n mystack

# View logs for a specific service
stack-logs.sh -n mystack -s webapp

# Show last 200 lines
stack-logs.sh -n mystack -s webapp -t 200

# Follow logs (stream)
stack-logs.sh -n mystack -s webapp -f

# Include timestamps
stack-logs.sh -n mystack -s webapp --timestamps
```

### stack-start.sh

Start an inactive stack.

```bash
stack-start.sh -n mystack
stack-start.sh -i 5
```

### stack-stop.sh

Stop a running stack.

```bash
stack-stop.sh -n mystack
stack-stop.sh -i 5
```

### endpoint-list.sh

List all Portainer endpoints/environments.

```bash
# List in table format
endpoint-list.sh

# List in JSON format
endpoint-list.sh -f json
```

## Common Workflows

### CI/CD Redeploy

After pushing changes to a git-based stack's repository:

```bash
# Redeploy with latest images
stack-redeploy.sh -n myapp -p

# Check status
stack-status.sh -n myapp

# View logs to verify startup
stack-logs.sh -n myapp -s api -t 50
```

### Debugging a Failing Stack

```bash
# Check overall status
stack-status.sh -n myapp

# List all services
stack-logs.sh -n myapp

# View logs for the failing service
stack-logs.sh -n myapp -s worker -t 200

# Follow logs in real time
stack-logs.sh -n myapp -s worker -f
```

### Restart a Stack

```bash
# Stop the stack
stack-stop.sh -n myapp

# Start it again
stack-start.sh -n myapp

# Or just redeploy (pulls latest images)
stack-redeploy.sh -n myapp -p
```

## Error Handling

All scripts:

- Exit with code 0 on success
- Exit with code 1 on error
- Print errors to stderr
- Validate required environment variables before making API calls
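The scripts implement this contract with a shared pattern: `curl -s -w "\n%{http_code}"` appends the HTTP status as a final line, which is then split off with `tail`/`sed`. Demonstrated here with `printf` standing in for the network call:

```shell
# Simulate what `curl -s -w "\n%{http_code}"` produces: body, then status code.
response="$(printf '{"ok":true}\n200')"

http_code=$(echo "$response" | tail -n1)   # last line is the status
body=$(echo "$response" | sed '$d')        # everything before it is the body

if [[ "$http_code" != "200" ]]; then
  echo "Error: API request failed with status $http_code" >&2
  exit 1
fi
echo "body=$body"
```

Keeping the status out-of-band like this lets the scripts check for API failures before handing the body to `jq`.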
## API Reference

These scripts use the Portainer CE API. Key endpoints:

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/stacks` | GET | List all stacks |
| `/api/stacks/{id}` | GET | Get stack details |
| `/api/stacks/{id}/file` | GET | Get stack compose file |
| `/api/stacks/{id}` | PUT | Update/redeploy stack |
| `/api/stacks/{id}/git/redeploy` | PUT | Redeploy git-based stack |
| `/api/stacks/{id}/start` | POST | Start inactive stack |
| `/api/stacks/{id}/stop` | POST | Stop running stack |
| `/api/endpoints` | GET | List all environments |
| `/api/endpoints/{id}/docker/containers/json` | GET | List containers |
| `/api/endpoints/{id}/docker/containers/{id}/logs` | GET | Get container logs |

For full API documentation, see:

- [Portainer API Access](https://docs.portainer.io/api/access)
- [Portainer API Examples](https://docs.portainer.io/api/examples)
- [Portainer API Docs](https://docs.portainer.io/api/docs)
rails/portainer/endpoint-list.sh (new executable file, 85 lines)
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
#
# endpoint-list.sh - List all Portainer endpoints/environments
#
# Usage: endpoint-list.sh [-f format]
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -f format   Output format: table (default), json
#   -h          Show this help

set -euo pipefail

# Default values
FORMAT="table"

# Parse arguments
while getopts "f:h" opt; do
  case $opt in
    f) FORMAT="$OPTARG" ;;
    h)
      head -16 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 [-f format]" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Fetch endpoints
response=$(curl -s -w "\n%{http_code}" \
  -H "X-API-Key: ${PORTAINER_API_KEY}" \
  "${PORTAINER_URL}/api/endpoints")

http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" != "200" ]]; then
  echo "Error: API request failed with status $http_code" >&2
  echo "$body" >&2
  exit 1
fi

# Output based on format
case "$FORMAT" in
  json)
    echo "$body" | jq '.'
    ;;
  table)
    echo "ID   NAME                         TYPE       STATUS   URL"
    echo "---- ---------------------------- ---------- -------- ---"
    echo "$body" | jq -r '.[] | [
      .Id,
      .Name,
      (if .Type == 1 then "docker" elif .Type == 2 then "agent" elif .Type == 3 then "azure" elif .Type == 4 then "edge" elif .Type == 5 then "kubernetes" else "unknown" end),
      (if .Status == 1 then "up" elif .Status == 2 then "down" else "unknown" end),
      .URL
    ] | @tsv' | while IFS=$'\t' read -r id name type status url; do
      printf "%-4s %-28s %-10s %-8s %s\n" "$id" "$name" "$type" "$status" "$url"
    done
    ;;
  *)
    echo "Error: Unknown format '$FORMAT'. Use: table, json" >&2
    exit 1
    ;;
esac
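The table output hinges on the numeric-to-name mapping done in `jq`. The same style of expression run over a canned payload (sample data, not a live API response, and with the mapping abbreviated to two branches):

```shell
# Sample payload standing in for GET /api/endpoints (abbreviated mapping).
sample='[{"Id":1,"Name":"local","Type":1,"Status":1}]'

echo "$sample" | jq -r '.[] | [
  .Id,
  .Name,
  (if .Type == 1 then "docker" elif .Type == 2 then "agent" else "other" end),
  (if .Status == 1 then "up" else "down" end)
] | @tsv'
```

`@tsv` emits tab-separated columns, which the script then reflows through `printf` for fixed-width alignment.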
100
rails/portainer/stack-list.sh
Executable file
100
@@ -0,0 +1,100 @@
#!/usr/bin/env bash
#
# stack-list.sh - List all Portainer stacks
#
# Usage: stack-list.sh [-e endpoint_id] [-f format] [-q]
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -e endpoint_id  Filter by endpoint/environment ID
#   -f format       Output format: table (default), json, names
#   -q              Quiet mode - only output stack names (shortcut for -f names)
#   -h              Show this help

set -euo pipefail

# Default values
ENDPOINT_FILTER=""
FORMAT="table"
QUIET=false

# Parse arguments
while getopts "e:f:qh" opt; do
  case $opt in
    e) ENDPOINT_FILTER="$OPTARG" ;;
    f) FORMAT="$OPTARG" ;;
    q) QUIET=true; FORMAT="names" ;;
    h)
      head -20 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 [-e endpoint_id] [-f format] [-q]" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Fetch stacks
response=$(curl -s -w "\n%{http_code}" \
  -H "X-API-Key: ${PORTAINER_API_KEY}" \
  "${PORTAINER_URL}/api/stacks")

http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" != "200" ]]; then
  echo "Error: API request failed with status $http_code" >&2
  echo "$body" >&2
  exit 1
fi

# Filter by endpoint if specified
if [[ -n "$ENDPOINT_FILTER" ]]; then
  body=$(echo "$body" | jq --arg eid "$ENDPOINT_FILTER" '[.[] | select(.EndpointId == ($eid | tonumber))]')
fi

# Output based on format
case "$FORMAT" in
  json)
    echo "$body" | jq '.'
    ;;
  names)
    echo "$body" | jq -r '.[].Name'
    ;;
  table)
    echo "ID   NAME                         STATUS   TYPE     ENDPOINT CREATED"
    echo "---- ---------------------------- -------- -------- -------- -------"
    echo "$body" | jq -r '.[] | [
      .Id,
      .Name,
      (if .Status == 1 then "active" elif .Status == 2 then "inactive" else "unknown" end),
      (if .Type == 1 then "swarm" elif .Type == 2 then "compose" elif .Type == 3 then "k8s" else "unknown" end),
      .EndpointId,
      (.CreationDate | split("T")[0] // "N/A")
    ] | @tsv' | while IFS=$'\t' read -r id name status type endpoint created; do
      printf "%-4s %-28s %-8s %-8s %-8s %s\n" "$id" "$name" "$status" "$type" "$endpoint" "$created"
    done
    ;;
  *)
    echo "Error: Unknown format '$FORMAT'. Use: table, json, names" >&2
    exit 1
    ;;
esac
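Every script in this set splits the curl response into body and status code with the same three lines. A minimal sketch of that pattern, fed from a canned response instead of a live Portainer call (the JSON payload is made up):

```shell
# Stand-in for: response=$(curl -s -w "\n%{http_code}" ...)
# -w "\n%{http_code}" makes curl append the HTTP status as a final line.
response=$(printf '[{"Id":1,"Name":"web","EndpointId":2}]\n200')

http_code=$(echo "$response" | tail -n1)   # last line: the status code
body=$(echo "$response" | sed '$d')        # everything else: the response body

echo "$http_code"   # -> 200
echo "$body"        # -> [{"Id":1,"Name":"web","EndpointId":2}]
```

This avoids a second round trip just to learn the status, at the cost of assuming the body itself does not end with a line that looks like a status code.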
183
rails/portainer/stack-logs.sh
Executable file
@@ -0,0 +1,183 @@
#!/usr/bin/env bash
#
# stack-logs.sh - Get logs for a stack service/container
#
# Usage: stack-logs.sh -n <stack-name> [-s service-name] [-t tail] [-f]
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -n name       Stack name (required)
#   -s service    Service/container name (optional - if omitted, lists available services)
#   -t tail       Number of lines to show from the end (default: 100)
#   -f            Follow log output (stream logs)
#   --timestamps  Show timestamps
#   -h            Show this help

set -euo pipefail

# Default values
STACK_NAME=""
SERVICE_NAME=""
TAIL_LINES="100"
FOLLOW=false
TIMESTAMPS=false

# Parse arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    -n) STACK_NAME="$2"; shift 2 ;;
    -s) SERVICE_NAME="$2"; shift 2 ;;
    -t) TAIL_LINES="$2"; shift 2 ;;
    -f) FOLLOW=true; shift ;;
    --timestamps) TIMESTAMPS=true; shift ;;
    -h|--help)
      head -20 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Unknown option: $1" >&2
      echo "Usage: $0 -n <stack-name> [-s service-name] [-t tail] [-f]" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

if [[ -z "$STACK_NAME" ]]; then
  echo "Error: -n <stack-name> is required" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Function to make API requests
api_request() {
  local method="$1"
  local endpoint="$2"

  curl -s -w "\n%{http_code}" -X "$method" \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}${endpoint}"
}

# Get stack info by name
response=$(api_request GET "/api/stacks")
http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" != "200" ]]; then
  echo "Error: Failed to list stacks (HTTP $http_code)" >&2
  exit 1
fi

stack_info=$(echo "$body" | jq --arg name "$STACK_NAME" '.[] | select(.Name == $name)')

if [[ -z "$stack_info" || "$stack_info" == "null" ]]; then
  echo "Error: Stack '$STACK_NAME' not found" >&2
  exit 1
fi

ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')

# Get containers for this stack
response=$(api_request GET "/api/endpoints/${ENDPOINT_ID}/docker/containers/json?all=true")
http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" != "200" ]]; then
  echo "Error: Failed to get containers (HTTP $http_code)" >&2
  exit 1
fi

# Filter containers belonging to this stack
containers=$(echo "$body" | jq --arg name "$STACK_NAME" '[.[] | select(
  (.Labels["com.docker.compose.project"] == $name) or
  (.Labels["com.docker.stack.namespace"] == $name)
)]')

container_count=$(echo "$containers" | jq 'length')

if [[ "$container_count" -eq 0 ]]; then
  echo "Error: No containers found for stack '$STACK_NAME'" >&2
  exit 1
fi

# If no service specified, list available services
if [[ -z "$SERVICE_NAME" ]]; then
  echo "Available services in stack '$STACK_NAME':"
  echo ""
  echo "$containers" | jq -r '.[] |
    (.Labels["com.docker.compose.service"] // .Labels["com.docker.swarm.service.name"] // .Names[0]) as $svc |
    "\(.Names[0] | ltrimstr("/")) (\($svc // "unknown"))"'
  echo ""
  echo "Use -s <service-name> to view logs for a specific service."
  exit 0
fi

# Find container matching service name
# Match against service label or container name
container=$(echo "$containers" | jq --arg svc "$SERVICE_NAME" 'first(.[] | select(
  (.Labels["com.docker.compose.service"] == $svc) or
  (.Labels["com.docker.swarm.service.name"] == $svc) or
  (.Names[] | contains($svc))
))')

if [[ -z "$container" || "$container" == "null" ]]; then
  echo "Error: Service '$SERVICE_NAME' not found in stack '$STACK_NAME'" >&2
  echo ""
  echo "Available services:"
  echo "$containers" | jq -r '.[] |
    .Labels["com.docker.compose.service"] // .Labels["com.docker.swarm.service.name"] // .Names[0]'
  exit 1
fi

CONTAINER_ID=$(echo "$container" | jq -r '.Id')
CONTAINER_NAME=$(echo "$container" | jq -r '.Names[0]' | sed 's/^\///')

echo "Fetching logs for: $CONTAINER_NAME"
echo "Container ID: ${CONTAINER_ID:0:12}"
echo "---"

# Build query parameters
params="stdout=true&stderr=true&tail=${TAIL_LINES}"
if [[ "$TIMESTAMPS" == "true" ]]; then
  params="${params}&timestamps=true"
fi
if [[ "$FOLLOW" == "true" ]]; then
  params="${params}&follow=true"
fi

# Get logs
# Note: Docker API returns raw log stream, not JSON
if [[ "$FOLLOW" == "true" ]]; then
  # Stream logs
  curl -s -N \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}/api/endpoints/${ENDPOINT_ID}/docker/containers/${CONTAINER_ID}/logs?${params}" | \
  # Docker log format has 8-byte header per line, strip it
  while IFS= read -r line; do
    # Remove docker stream header (first 8 bytes per chunk)
    echo "$line" | cut -c9-
  done
else
  # Get logs (non-streaming)
  curl -s \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}/api/endpoints/${ENDPOINT_ID}/docker/containers/${CONTAINER_ID}/logs?${params}" | \
  # Docker log format has 8-byte header per line, attempt to strip it
  sed 's/^.\{8\}//' 2>/dev/null || cat
fi
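The Docker logs endpoint returns a multiplexed stream in which each frame begins with an 8-byte header (stream type plus payload length), which is why the loop above strips the first eight characters with `cut -c9-`. A sketch with a printable stand-in header (a real header contains binary bytes):

```shell
# Eight placeholder characters where the real binary frame header would be:
line='HHHHHHHHhello from the container'

# Keep everything from the 9th character onward, as the script does.
echo "$line" | cut -c9-
# -> hello from the container
```

Note this character-based strip is an approximation: it assumes one frame per line and single-byte characters, which holds for typical line-oriented logs but not for arbitrary binary output.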
183
rails/portainer/stack-redeploy.sh
Executable file
@@ -0,0 +1,183 @@
#!/usr/bin/env bash
#
# stack-redeploy.sh - Redeploy a Portainer stack
#
# For git-based stacks, this pulls the latest from the repository and redeploys.
# For file-based stacks, this redeploys with the current stack file.
#
# Usage: stack-redeploy.sh -n <stack-name> [-p] [-e endpoint_id]
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -n name         Stack name (required)
#   -i id           Stack ID (alternative to -n)
#   -p              Pull latest images before redeploying
#   -e endpoint_id  Endpoint/environment ID (auto-detected from stack if not provided)
#   -h              Show this help

set -euo pipefail

# Default values
STACK_NAME=""
STACK_ID=""
PULL_IMAGE=false
ENDPOINT_ID=""

# Parse arguments
while getopts "n:i:pe:h" opt; do
  case $opt in
    n) STACK_NAME="$OPTARG" ;;
    i) STACK_ID="$OPTARG" ;;
    p) PULL_IMAGE=true ;;
    e) ENDPOINT_ID="$OPTARG" ;;
    h)
      head -22 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 -n <stack-name> [-p] [-e endpoint_id]" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

if [[ -z "$STACK_NAME" && -z "$STACK_ID" ]]; then
  echo "Error: Either -n <stack-name> or -i <stack-id> is required" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Function to make API requests
api_request() {
  local method="$1"
  local endpoint="$2"
  local data="${3:-}"

  local args=(-s -w "\n%{http_code}" -X "$method" -H "X-API-Key: ${PORTAINER_API_KEY}")

  if [[ -n "$data" ]]; then
    args+=(-H "Content-Type: application/json" -d "$data")
  fi

  curl "${args[@]}" "${PORTAINER_URL}${endpoint}"
}

# Get stack info by name or ID
if [[ -n "$STACK_NAME" ]]; then
  echo "Looking up stack '$STACK_NAME'..."
  response=$(api_request GET "/api/stacks")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to list stacks (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info=$(echo "$body" | jq --arg name "$STACK_NAME" '.[] | select(.Name == $name)')

  if [[ -z "$stack_info" || "$stack_info" == "null" ]]; then
    echo "Error: Stack '$STACK_NAME' not found" >&2
    exit 1
  fi

  STACK_ID=$(echo "$stack_info" | jq -r '.Id')
  ENDPOINT_ID_FROM_STACK=$(echo "$stack_info" | jq -r '.EndpointId')
else
  # Get stack info by ID
  response=$(api_request GET "/api/stacks/${STACK_ID}")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to get stack (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info="$body"
  STACK_NAME=$(echo "$stack_info" | jq -r '.Name')
  ENDPOINT_ID_FROM_STACK=$(echo "$stack_info" | jq -r '.EndpointId')
fi

# Use endpoint ID from stack if not provided
if [[ -z "$ENDPOINT_ID" ]]; then
  ENDPOINT_ID="$ENDPOINT_ID_FROM_STACK"
fi

# Check if this is a git-based stack
git_config=$(echo "$stack_info" | jq -r '.GitConfig // empty')

if [[ -n "$git_config" && "$git_config" != "null" ]]; then
  echo "Stack '$STACK_NAME' (ID: $STACK_ID) is git-based"
  echo "Triggering git pull and redeploy..."

  # Git-based stack redeploy
  # The git redeploy endpoint pulls from the repository and redeploys
  request_body=$(jq -n \
    --argjson pullImage "$PULL_IMAGE" \
    '{
      "pullImage": $pullImage,
      "prune": false,
      "repositoryReferenceName": "",
      "repositoryAuthentication": false
    }')

  response=$(api_request PUT "/api/stacks/${STACK_ID}/git/redeploy?endpointId=${ENDPOINT_ID}" "$request_body")
else
  echo "Stack '$STACK_NAME' (ID: $STACK_ID) is file-based"

  # Get the current stack file content
  echo "Fetching current stack file..."
  response=$(api_request GET "/api/stacks/${STACK_ID}/file")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to get stack file (HTTP $http_code)" >&2
    exit 1
  fi

  stack_file_content=$(echo "$body" | jq -r '.StackFileContent')

  echo "Redeploying..."
  request_body=$(jq -n \
    --argjson pullImage "$PULL_IMAGE" \
    --arg stackFile "$stack_file_content" \
    '{
      "pullImage": $pullImage,
      "prune": false,
      "stackFileContent": $stackFile
    }')

  response=$(api_request PUT "/api/stacks/${STACK_ID}?endpointId=${ENDPOINT_ID}" "$request_body")
fi

http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" == "200" ]]; then
  echo "Successfully redeployed stack '$STACK_NAME'"
  if [[ "$PULL_IMAGE" == "true" ]]; then
    echo "  - Pulled latest images"
  fi
else
  echo "Error: Redeploy failed (HTTP $http_code)" >&2
  echo "$body" | jq '.' 2>/dev/null || echo "$body" >&2
  exit 1
fi
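The `api_request` function in this script builds curl's arguments in a bash array so that the JSON content-type header and payload are appended only when a request body is supplied. A sketch that prints the assembled arguments instead of calling curl (the token and payload are made up):

```shell
data='{"pullImage":true}'

# Base arguments, mirroring the function above (hypothetical token).
args=(-s -X PUT -H "X-API-Key: token123")

# Append content-type and payload only when a body was supplied.
if [[ -n "$data" ]]; then
  args+=(-H "Content-Type: application/json" -d "$data")
fi

# One argument per line; the array preserves arguments containing spaces.
printf '%s\n' "${args[@]}"
```

Using an array instead of a string avoids word-splitting bugs when values such as the JSON payload contain spaces or quotes.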
114
rails/portainer/stack-start.sh
Executable file
@@ -0,0 +1,114 @@
#!/usr/bin/env bash
#
# stack-start.sh - Start an inactive Portainer stack
#
# Usage: stack-start.sh -n <stack-name>
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -n name  Stack name (required)
#   -i id    Stack ID (alternative to -n)
#   -h       Show this help

set -euo pipefail

# Default values
STACK_NAME=""
STACK_ID=""

# Parse arguments
while getopts "n:i:h" opt; do
  case $opt in
    n) STACK_NAME="$OPTARG" ;;
    i) STACK_ID="$OPTARG" ;;
    h)
      head -16 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 -n <stack-name>" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

if [[ -z "$STACK_NAME" && -z "$STACK_ID" ]]; then
  echo "Error: Either -n <stack-name> or -i <stack-id> is required" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Function to make API requests
api_request() {
  local method="$1"
  local endpoint="$2"

  curl -s -w "\n%{http_code}" -X "$method" \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}${endpoint}"
}

# Get stack info by name
if [[ -n "$STACK_NAME" ]]; then
  response=$(api_request GET "/api/stacks")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to list stacks (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info=$(echo "$body" | jq --arg name "$STACK_NAME" '.[] | select(.Name == $name)')

  if [[ -z "$stack_info" || "$stack_info" == "null" ]]; then
    echo "Error: Stack '$STACK_NAME' not found" >&2
    exit 1
  fi

  STACK_ID=$(echo "$stack_info" | jq -r '.Id')
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
else
  response=$(api_request GET "/api/stacks/${STACK_ID}")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to get stack (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info="$body"
  STACK_NAME=$(echo "$stack_info" | jq -r '.Name')
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
fi

echo "Starting stack '$STACK_NAME' (ID: $STACK_ID)..."

response=$(api_request POST "/api/stacks/${STACK_ID}/start?endpointId=${ENDPOINT_ID}")
http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" == "200" ]]; then
  echo "Successfully started stack '$STACK_NAME'"
else
  echo "Error: Failed to start stack (HTTP $http_code)" >&2
  echo "$body" | jq '.' 2>/dev/null || echo "$body" >&2
  exit 1
fi
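All of these scripts normalize `PORTAINER_URL` with the `${var%pattern}` parameter expansion, which removes one trailing slash without spawning a subprocess. A quick sketch (example URL only):

```shell
PORTAINER_URL="https://portainer.example.com:9443/"

# %/ removes a single trailing slash, if present.
PORTAINER_URL="${PORTAINER_URL%/}"
echo "$PORTAINER_URL"
# -> https://portainer.example.com:9443

# A URL without a trailing slash is left untouched:
echo "${PORTAINER_URL%/}"
# -> https://portainer.example.com:9443
```

This matters because every endpoint is concatenated as `${PORTAINER_URL}${endpoint}`; without the strip, a user-supplied trailing slash would produce `//api/...` paths.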
185
rails/portainer/stack-status.sh
Executable file
@@ -0,0 +1,185 @@
#!/usr/bin/env bash
#
# stack-status.sh - Show stack service status
#
# Usage: stack-status.sh -n <stack-name> [-f format]
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -n name    Stack name (required)
#   -i id      Stack ID (alternative to -n)
#   -f format  Output format: table (default), json
#   -h         Show this help

set -euo pipefail

# Default values
STACK_NAME=""
STACK_ID=""
FORMAT="table"

# Parse arguments
while getopts "n:i:f:h" opt; do
  case $opt in
    n) STACK_NAME="$OPTARG" ;;
    i) STACK_ID="$OPTARG" ;;
    f) FORMAT="$OPTARG" ;;
    h)
      head -18 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 -n <stack-name> [-f format]" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

if [[ -z "$STACK_NAME" && -z "$STACK_ID" ]]; then
  echo "Error: Either -n <stack-name> or -i <stack-id> is required" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Function to make API requests
api_request() {
  local method="$1"
  local endpoint="$2"

  curl -s -w "\n%{http_code}" -X "$method" \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}${endpoint}"
}

# Get stack info by name
if [[ -n "$STACK_NAME" ]]; then
  response=$(api_request GET "/api/stacks")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to list stacks (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info=$(echo "$body" | jq --arg name "$STACK_NAME" '.[] | select(.Name == $name)')

  if [[ -z "$stack_info" || "$stack_info" == "null" ]]; then
    echo "Error: Stack '$STACK_NAME' not found" >&2
    exit 1
  fi

  STACK_ID=$(echo "$stack_info" | jq -r '.Id')
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
  STACK_NAME=$(echo "$stack_info" | jq -r '.Name')
else
  response=$(api_request GET "/api/stacks/${STACK_ID}")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to get stack (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info="$body"
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
  STACK_NAME=$(echo "$stack_info" | jq -r '.Name')
fi

# Get stack type
STACK_TYPE=$(echo "$stack_info" | jq -r '.Type')
STACK_STATUS=$(echo "$stack_info" | jq -r 'if .Status == 1 then "active" elif .Status == 2 then "inactive" else "unknown" end')

# Get containers for this stack
# Containers are labeled with com.docker.compose.project or com.docker.stack.namespace
response=$(api_request GET "/api/endpoints/${ENDPOINT_ID}/docker/containers/json?all=true")
http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" != "200" ]]; then
  echo "Error: Failed to get containers (HTTP $http_code)" >&2
  exit 1
fi

# Filter containers belonging to this stack
# Check both compose project label and stack namespace label
containers=$(echo "$body" | jq --arg name "$STACK_NAME" '[.[] | select(
  (.Labels["com.docker.compose.project"] == $name) or
  (.Labels["com.docker.stack.namespace"] == $name)
)]')

container_count=$(echo "$containers" | jq 'length')

# Output based on format
if [[ "$FORMAT" == "json" ]]; then
  jq -n \
    --arg name "$STACK_NAME" \
    --arg id "$STACK_ID" \
    --arg status "$STACK_STATUS" \
    --arg type "$STACK_TYPE" \
    --argjson containers "$containers" \
    '{
      stack: {
        name: $name,
        id: ($id | tonumber),
        status: $status,
        type: (if $type == "1" then "swarm" elif $type == "2" then "compose" else "kubernetes" end)
      },
      containers: [$containers[] | {
        name: .Names[0],
        id: .Id[0:12],
        image: .Image,
        state: .State,
        status: .Status,
        created: .Created
      }]
    }'
  exit 0
fi

# Table output
echo "Stack: $STACK_NAME (ID: $STACK_ID)"
echo "Status: $STACK_STATUS"
echo "Type: $(if [[ "$STACK_TYPE" == "1" ]]; then echo "swarm"; elif [[ "$STACK_TYPE" == "2" ]]; then echo "compose"; else echo "kubernetes"; fi)"
echo "Containers: $container_count"
echo ""

if [[ "$container_count" -gt 0 ]]; then
  echo "CONTAINER ID NAME                                   IMAGE                          STATE      STATUS"
  echo "------------ -------------------------------------- ------------------------------ ---------- ------"
  echo "$containers" | jq -r '.[] | [
    .Id[0:12],
    .Names[0],
    .Image,
    .State,
    .Status
  ] | @tsv' | while IFS=$'\t' read -r id name image state status; do
    # Clean up container name (remove leading /)
    name="${name#/}"
    # Truncate long values
    name="${name:0:38}"
    image="${image:0:30}"
    printf "%-12s %-38s %-30s %-10s %s\n" "$id" "$name" "$image" "$state" "$status"
  done
else
  echo "No containers found for this stack."
  echo ""
  echo "Note: If the stack was recently created or is inactive, containers may not exist yet."
fi
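The table output above pipes jq's `@tsv` rows into a `while IFS=$'\t' read` loop so each tab-separated field lands in a named variable. A self-contained sketch with one made-up container row:

```shell
# One fake row: id, name, state, joined with tabs
# (in the script, jq's @tsv emits rows in this shape).
printf 'ab12cd34ef56\t/web-1\trunning\n' |
while IFS=$'\t' read -r id name state; do
  name="${name#/}"   # Docker names carry a leading slash; drop it
  printf "%-12s %-10s %s\n" "$id" "$name" "$state"
done
```

Setting `IFS` to a tab only for the `read` keeps splitting predictable even when a field (such as an image tag or status string) contains spaces.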
114
rails/portainer/stack-stop.sh
Executable file
@@ -0,0 +1,114 @@
#!/usr/bin/env bash
#
# stack-stop.sh - Stop a running Portainer stack
#
# Usage: stack-stop.sh -n <stack-name>
#
# Environment variables:
#   PORTAINER_URL     - Portainer instance URL (e.g., https://portainer.example.com:9443)
#   PORTAINER_API_KEY - API access token
#
# Options:
#   -n name  Stack name (required)
#   -i id    Stack ID (alternative to -n)
#   -h       Show this help

set -euo pipefail

# Default values
STACK_NAME=""
STACK_ID=""

# Parse arguments
while getopts "n:i:h" opt; do
  case $opt in
    n) STACK_NAME="$OPTARG" ;;
    i) STACK_ID="$OPTARG" ;;
    h)
      head -16 "$0" | grep "^#" | sed 's/^# \?//'
      exit 0
      ;;
    *)
      echo "Usage: $0 -n <stack-name>" >&2
      exit 1
      ;;
  esac
done

# Validate environment
if [[ -z "${PORTAINER_URL:-}" ]]; then
  echo "Error: PORTAINER_URL environment variable not set" >&2
  exit 1
fi

if [[ -z "${PORTAINER_API_KEY:-}" ]]; then
  echo "Error: PORTAINER_API_KEY environment variable not set" >&2
  exit 1
fi

if [[ -z "$STACK_NAME" && -z "$STACK_ID" ]]; then
  echo "Error: Either -n <stack-name> or -i <stack-id> is required" >&2
  exit 1
fi

# Remove trailing slash from URL
PORTAINER_URL="${PORTAINER_URL%/}"

# Function to make API requests
api_request() {
  local method="$1"
  local endpoint="$2"

  curl -s -w "\n%{http_code}" -X "$method" \
    -H "X-API-Key: ${PORTAINER_API_KEY}" \
    "${PORTAINER_URL}${endpoint}"
}

# Get stack info by name
if [[ -n "$STACK_NAME" ]]; then
  response=$(api_request GET "/api/stacks")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to list stacks (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info=$(echo "$body" | jq --arg name "$STACK_NAME" '.[] | select(.Name == $name)')

  if [[ -z "$stack_info" || "$stack_info" == "null" ]]; then
    echo "Error: Stack '$STACK_NAME' not found" >&2
    exit 1
  fi

  STACK_ID=$(echo "$stack_info" | jq -r '.Id')
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
else
  response=$(api_request GET "/api/stacks/${STACK_ID}")
  http_code=$(echo "$response" | tail -n1)
  body=$(echo "$response" | sed '$d')

  if [[ "$http_code" != "200" ]]; then
    echo "Error: Failed to get stack (HTTP $http_code)" >&2
    exit 1
  fi

  stack_info="$body"
  STACK_NAME=$(echo "$stack_info" | jq -r '.Name')
  ENDPOINT_ID=$(echo "$stack_info" | jq -r '.EndpointId')
fi

echo "Stopping stack '$STACK_NAME' (ID: $STACK_ID)..."

response=$(api_request POST "/api/stacks/${STACK_ID}/stop?endpointId=${ENDPOINT_ID}")
http_code=$(echo "$response" | tail -n1)
body=$(echo "$response" | sed '$d')

if [[ "$http_code" == "200" ]]; then
  echo "Successfully stopped stack '$STACK_NAME'"
else
  echo "Error: Failed to stop stack (HTTP $http_code)" >&2
  echo "$body" | jq '.' 2>/dev/null || echo "$body" >&2
  exit 1
fi
616
runtime/claude/CLAUDE.md
Normal file
616
runtime/claude/CLAUDE.md
Normal file
@@ -0,0 +1,616 @@
# Mosaic Universal Standards (Machine-Wide)

Before applying any runtime-specific guidance in this file, load and apply:

- `~/.mosaic/STANDARDS.md`
- project-local `AGENTS.md`

`~/.mosaic` is the canonical cross-agent standards layer. This `CLAUDE.md` is an adapter layer for Claude-specific capabilities only.

---

# Universal Development Standards

## Core Principles

- Think critically. Don't just agree—suggest better approaches when appropriate.
- Quality over speed. No workarounds; implement proper solutions.
- No deprecated or unsupported packages.

## Skills System

**97 skills available in `~/.mosaic/skills/`.** Skills are instruction packages that provide domain expertise. Source: the `mosaic/agent-skills` repo (93 community skills) plus 4 local-only skills.

**Load a skill by reading its SKILL.md:**
```
Read ~/.mosaic/skills/<skill-name>/SKILL.md
```

### Skill Dispatch Table — Load the right skills for your task

| Task Type | Skills to Load | Notes |
|-----------|---------------|-------|
| **NestJS development** | `nestjs-best-practices` | 40 rules, 10 categories |
| **Next.js / React** | `next-best-practices`, `vercel-react-best-practices` | RSC, async, performance |
| **React components** | `vercel-composition-patterns`, `shadcn-ui` | shadcn note: uses Tailwind v4 |
| **Vue / Nuxt** | `vue-best-practices`, `nuxt`, `pinia`, `vue-router-best-practices` | antfu conventions |
| **Vite / Vitest** | `vite`, `vitest` | Build + test tooling |
| **FastAPI / Python** | `fastapi`, `python-performance-optimization` | Pydantic v2, async SQLAlchemy |
| **Architecture** | `architecture-patterns` | Clean, Hexagonal, DDD |
| **Authentication** | `better-auth-best-practices`, `email-and-password-best-practices`, `two-factor-authentication-best-practices` | Better-Auth patterns |
| **UI / Styling** | `tailwind-design-system`, `ui-animation`, `web-design-guidelines` | Tailwind v4 |
| **Frontend design** | `frontend-design`, `brand-guidelines`, `canvas-design` | Design principles |
| **TDD / Testing** | `test-driven-development`, `webapp-testing`, `vue-testing-best-practices` | Red-Green-Refactor |
| **Linting** | `lint` | Zero-tolerance — detect linter, fix ALL violations, never disable rules |
| **Code review** | `pr-reviewer`, `code-review-excellence`, `verification-before-completion` | Platform-aware (Gitea/GitHub) |
| **Debugging** | `systematic-debugging` | Structured methodology |
| **Git workflow** | `finishing-a-development-branch`, `using-git-worktrees` | Branch + worktree patterns |
| **Document generation** | `pdf`, `docx`, `pptx`, `xlsx` | LibreOffice-based |
| **Writing / Comms** | `doc-coauthoring`, `internal-comms`, `copywriting`, `copy-editing` | |
| **Marketing** | `marketing-ideas`, `content-strategy`, `social-content`, `email-sequence` | 139 ideas across 14 categories |
| **SEO** | `seo-audit`, `programmatic-seo`, `schema-markup` | Technical + content SEO |
| **CRO / Conversion** | `page-cro`, `form-cro`, `signup-flow-cro`, `onboarding-cro` | Conversion optimization |
| **Pricing / Business** | `pricing-strategy`, `launch-strategy`, `competitor-alternatives` | SaaS focus |
| **Ads / Growth** | `paid-ads`, `analytics-tracking`, `ab-test-setup`, `referral-program` | |
| **Agent building** | `create-agent`, `ai-sdk`, `proactive-agent`, `dispatching-parallel-agents` | WAL Protocol, parallel workers |
| **Agent workflow** | `executing-plans`, `writing-plans`, `brainstorming` | Plan → execute |
| **Skill authoring** | `writing-skills`, `skill-creator` | TDD-based skill creation |
| **MCP servers** | `mcp-builder` | Model Context Protocol |
| **Generative art** | `algorithmic-art`, `theme-factory`, `slack-gif-creator` | |
| **Web artifacts** | `web-artifacts-builder` | Self-contained HTML |
| **CI/CD setup** | `setup-cicd` | Docker build/push pipeline |
| **Jarvis Brain** | `jarvis` | Brain repo context |
| **PRD creation** | `prd` | Generate PRDs |
| **Ralph development** | `ralph` | Autonomous dev agent |
| **Orchestration** | `kickstart` | `/kickstart [milestone\|issue\|task]` — launches orchestrator |

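For tasks that hit several rows of the table, the skill-load instructions can be composed into a single worker prompt. A throwaway sketch — the skill names come from the table above, but the task text and composition style are hypothetical:

```shell
# Build one worker prompt that loads two skills before the actual task
skills=(next-best-practices test-driven-development)
prompt=""
for s in "${skills[@]}"; do
  prompt+="Read ~/.mosaic/skills/${s}/SKILL.md. "
done
prompt+="Then implement the dashboard page."
echo "$prompt"
```

The resulting string is what would be passed to `claude -p` in the orchestrator pattern described below.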
### For Orchestrator / Programmatic Workers

When spawning workers via `claude -p`, include skill loading in the kickstart:
```bash
claude -p "Read ~/.mosaic/skills/nestjs-best-practices/SKILL.md then implement..."
```

For Ralph prd.json stories, add a `skills` field:
```json
{ "id": "US-001", "skills": ["nestjs-best-practices", "test-driven-development"], ... }
```

### Fresh Machine Setup
```bash
npx skills add https://git.mosaicstack.dev/mosaic/agent-skills.git --agent claude-code
```

## Ralph Autonomous Development

For autonomous feature development, use the Ralph pattern. Each iteration is a fresh context with persistent memory.

### The Ralph Loop
```
1. Read prd.json → Find highest priority story where passes: false
2. Read progress.txt → Check "Codebase Patterns" section first
3. Read AGENTS.md files → Check directory-specific patterns
4. Implement ONLY the single assigned story
5. Run quality checks (typecheck, lint, test)
6. Commit ONLY if all checks pass
7. Update prd.json → Set passes: true for completed story
8. Append learnings to progress.txt
9. Update AGENTS.md if reusable patterns discovered
10. Loop → Next story
```

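Step 1 of the loop — picking the next story — can be sketched with `jq`. This assumes a hypothetical `prd.json` shape where stories carry `id`, `priority` (lower = more urgent), and `passes` fields; adjust to the actual schema your `ralph` skill produces:

```shell
# Select the highest-priority story that has not yet passed (schema is an assumption)
jq -r '[.stories[] | select(.passes == false)] | sort_by(.priority) | .[0].id' prd.json
```

If every story has `passes: true`, this prints `null`, which a loop script can treat as the exit condition.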
### Memory Files

| File | Purpose | Location |
|------|---------|----------|
| `prd.json` | Task list with passes: true/false | Project root or scripts/ralph/ |
| `progress.txt` | Learnings between iterations | Same as prd.json |
| `AGENTS.md` | Directory-specific patterns | Any directory in repo |

### Running Ralph
```bash
# Automated loop
./scripts/ralph/ralph.sh 10

# Manual single story
claude -p "Implement US-001 from prd.json following Ralph pattern"
```

### Creating New Features
1. Create PRD: `Load the prd skill and create a PRD for [feature]`
2. Convert to Ralph: `Load the ralph skill and convert tasks/prd-[name].md to prd.json`
3. Run Ralph: `./scripts/ralph/ralph.sh`

**For full Ralph guide:** `~/.mosaic/guides/ralph-autonomous.md`

## AGENTS.md Pattern

Each directory can have an `AGENTS.md` file containing patterns specific to that area of the codebase. **Always check for and read AGENTS.md files in directories you're working in.**

```markdown
# Example AGENTS.md

## Codebase Patterns
- Use `httpx.AsyncClient` for external HTTP calls
- All routes require authentication via `Depends(get_current_user)`

## Common Gotchas
- Remember to run migrations after schema changes
- Frontend env vars need NEXT_PUBLIC_ prefix
```

**Update AGENTS.md when you discover:**
- Reusable patterns
- Non-obvious requirements
- Gotchas that would trip up future agents
- Testing approaches for that area

## Project Bootstrapping

When starting work on a **new project** that lacks `CLAUDE.md` or `AGENTS.md`, bootstrap it:

```bash
# Automated (recommended)
~/.mosaic/rails/bootstrap/init-project.sh --name "Project Name" --type auto

# Or manually with templates
export PROJECT_NAME="Project Name" PROJECT_DESCRIPTION="What it does" TASK_PREFIX="PN"
envsubst < ~/.mosaic/templates/agent/CLAUDE.md.template > CLAUDE.md
envsubst < ~/.mosaic/templates/agent/AGENTS.md.template > AGENTS.md
```

**Available project types:** `nestjs-nextjs`, `django`, `typescript`, `python-fastapi`, `python-library`, `generic` (auto-detected from project files).

**Templates:** `~/.mosaic/templates/agent/` (generic) and `~/.mosaic/templates/agent/projects/<type>/` (tech-stack specific).

**Fragments:** `~/.mosaic/templates/agent/fragments/` — Reusable sections (conditional-loading, commit-format, secrets, multi-agent, code-review).

**Full guide:** `~/.mosaic/guides/bootstrap.md`

### Agent Configuration Health

```bash
# Audit all projects for missing CLAUDE.md, AGENTS.md, agent-guide references
~/.mosaic/rails/bootstrap/agent-lint.sh

# Audit with fix suggestions
~/.mosaic/rails/bootstrap/agent-lint.sh --verbose --fix-hint

# Non-destructively upgrade existing projects (inject missing sections)
~/.mosaic/rails/bootstrap/agent-upgrade.sh --all --dry-run   # Preview
~/.mosaic/rails/bootstrap/agent-upgrade.sh --all             # Apply

# Upgrade a single project
~/.mosaic/rails/bootstrap/agent-upgrade.sh ~/src/my-project
```

**Spec:** `~/.mosaic/templates/agent/SPEC.md` — Defines Tier 1/2/3 requirements for well-configured projects.

## Issue Tracking (Git-Based)

All work is tracked as **issues in the project's git repository** (Gitea or GitHub).

### Workflow
1. Check for assigned issues before starting work
2. Create scratchpad: `docs/scratchpads/{issue-number}-{short-name}.md`
3. Reference issues in commits: `Fixes #123` or `Refs #123`
4. Close issues only after successful testing

### Labels
Use consistent labels across projects:
- `epic` - Large feature spanning multiple issues
- `feature` - New functionality
- `bug` - Defect fix
- `task` - General work item
- `documentation` - Docs updates
- `security` - Security-related
- `breaking` - Breaking change

### Milestones & Versioning
- **Each feature gets a dedicated milestone**
- MVP starts at `0.1.0`
- Pre-release: `0.X.0` for breaking changes, `0.X.Y` for patches
- Post-release: `X.0.0` for breaking changes

### Git Scripts (PREFERRED for Gitea/GitHub operations)

Cross-platform helpers at `~/.mosaic/rails/git/` (work with both Gitea and GitHub):

**Why use these scripts?**
- ✅ Auto-detect platform (Gitea vs GitHub)
- ✅ Abstract CLI syntax differences (tea vs gh)
- ✅ Handle milestone name filtering for Gitea
- ✅ Consistent interface across all repos

**Issues:**
```bash
~/.mosaic/rails/git/issue-create.sh -t "Title" -l "label" -m "0.2.0"
~/.mosaic/rails/git/issue-list.sh -s open -l "bug"
~/.mosaic/rails/git/issue-list.sh -m "M6-AgentOrchestration"   # Works with milestone names!
~/.mosaic/rails/git/issue-view.sh -i 42                        # View issue details
~/.mosaic/rails/git/issue-edit.sh -i 42 -t "New Title" -l "labels"
~/.mosaic/rails/git/issue-assign.sh -i 42 -a "username"
~/.mosaic/rails/git/issue-comment.sh -i 42 -c "Comment text"
~/.mosaic/rails/git/issue-close.sh -i 42 [-c "Closing comment"]
~/.mosaic/rails/git/issue-reopen.sh -i 42 [-c "Reopening reason"]
```

**Pull Requests:**
```bash
~/.mosaic/rails/git/pr-create.sh -t "Title" -b "Description" -i 42
~/.mosaic/rails/git/pr-create.sh -t "Title" -B main -H feature-branch
~/.mosaic/rails/git/pr-list.sh -s open
~/.mosaic/rails/git/pr-view.sh -n 42                           # View PR details
~/.mosaic/rails/git/pr-review.sh -n 42 -a approve [-c "LGTM"]
~/.mosaic/rails/git/pr-review.sh -n 42 -a request-changes -c "Fix X"
~/.mosaic/rails/git/pr-merge.sh -n 42 -m squash -d
~/.mosaic/rails/git/pr-close.sh -n 42 [-c "Closing reason"]
~/.mosaic/rails/git/pr-diff.sh -n 42 [-o diff.patch]           # Get PR diff
~/.mosaic/rails/git/pr-metadata.sh -n 42 [-o metadata.json]    # Get PR metadata as JSON
```

**Milestones:**
```bash
~/.mosaic/rails/git/milestone-create.sh -t "0.2.0" -d "Description"
~/.mosaic/rails/git/milestone-create.sh --list
~/.mosaic/rails/git/milestone-list.sh [-s open|closed|all]
~/.mosaic/rails/git/milestone-close.sh -t "0.2.0"
```

**NOTE:** These scripts handle the Gitea `--milestones` (plural) syntax automatically. Always prefer these over raw `tea` or `gh` commands.

### Woodpecker CI CLI

Official CLI for interacting with Woodpecker CI at `ci.mosaicstack.dev`.

**Setup:**
```bash
# Install (Arch)
paru -S woodpecker

# Configure
export WOODPECKER_SERVER="https://ci.mosaicstack.dev"
export WOODPECKER_TOKEN="your-token"   # Get from ci.mosaicstack.dev/user
```

**Pipelines:**
```bash
woodpecker pipeline ls <owner/repo>             # List pipelines
woodpecker pipeline info <owner/repo> <num>     # Pipeline details
woodpecker pipeline create <owner/repo>         # Trigger pipeline
woodpecker pipeline stop <owner/repo> <num>     # Cancel pipeline
woodpecker pipeline start <owner/repo> <num>    # Restart pipeline
woodpecker pipeline approve <owner/repo> <num>  # Approve blocked
```

**Logs:**
```bash
woodpecker log show <owner/repo> <num>          # View logs
woodpecker log show <owner/repo> <num> <step>   # Specific step
```

**Repositories:**
```bash
woodpecker repo ls                              # List repos
woodpecker repo add <owner/repo>                # Activate for CI
woodpecker repo rm <owner/repo>                 # Deactivate
woodpecker repo repair <owner/repo>             # Repair webhooks
```

**Secrets:**
```bash
woodpecker secret ls <owner/repo>               # List secrets
woodpecker secret add <owner/repo> -n KEY -v val  # Add secret
woodpecker secret rm <owner/repo> -n KEY        # Delete secret
```

**Full reference:** `jarvis-brain/docs/reference/woodpecker/WOODPECKER-CLI.md`
**Setup command:** `woodpecker setup --server https://ci.mosaicstack.dev --token "YOUR_TOKEN"`

### Portainer Scripts

CLI tools for managing Portainer stacks at `~/.mosaic/rails/portainer/`.

**Setup:**
```bash
export PORTAINER_URL="https://portainer.example.com:9443"
export PORTAINER_API_KEY="your-api-key-here"
```

Create an API key in Portainer: My account → Access tokens → Add access token.

**Stack Management:**
```bash
stack-list.sh                          # List all stacks
stack-list.sh -f json                  # JSON format
stack-list.sh -e 1                     # Filter by endpoint

stack-status.sh -n mystack             # Show stack status
stack-status.sh -n mystack -f json     # JSON format

stack-redeploy.sh -n mystack           # Redeploy stack
stack-redeploy.sh -n mystack -p        # Pull latest images

stack-start.sh -n mystack              # Start inactive stack
stack-stop.sh -n mystack               # Stop running stack
```

**Logs:**
```bash
stack-logs.sh -n mystack               # List services
stack-logs.sh -n mystack -s webapp     # View service logs
stack-logs.sh -n mystack -s webapp -t 200   # Last 200 lines
stack-logs.sh -n mystack -s webapp -f  # Follow logs
```

**Endpoints:**
```bash
endpoint-list.sh                       # List all endpoints
endpoint-list.sh -f json               # JSON format
```

**Common Workflow (CI/CD redeploy):**
```bash
~/.mosaic/rails/portainer/stack-redeploy.sh -n myapp -p && \
~/.mosaic/rails/portainer/stack-status.sh -n myapp && \
~/.mosaic/rails/portainer/stack-logs.sh -n myapp -s api -t 50
```

### Git Worktrees

Use worktrees for parallel work on multiple issues without branch switching.

**Structure:**
```
{project_name}_worktrees/{issue-name}/
```

**Example:**
```
~/src/my-app/                  # Main repository
~/src/my-app_worktrees/
├── 42-fix-login/              # Worktree for issue #42
├── 57-add-dashboard/          # Worktree for issue #57
└── 89-refactor-auth/          # Worktree for issue #89
```

**Commands:**
```bash
# Create worktree for an issue
git worktree add ../my-app_worktrees/42-fix-login -b issue-42-fix-login

# List active worktrees
git worktree list

# Remove worktree after merge
git worktree remove ../my-app_worktrees/42-fix-login
```

**When to use worktrees:**
- Working on multiple issues simultaneously
- Long-running feature branches that need isolation
- Code reviews while continuing other work
- Comparing implementations across branches

## Development Requirements

### Test-Driven Development (TDD)
1. Write tests BEFORE implementation
2. Minimum **85% coverage**
3. Build and test after EVERY change
4. Task completion requires passing tests

### Linting (MANDATORY)
**Run the project linter after every code change. Fix ALL violations. Zero tolerance.**
- Never disable lint rules (`eslint-disable`, `noqa`, `nolint`)
- Never leave warnings — warnings are errors you haven't fixed yet
- If you touched a file, you own its lint violations (Campsite Rule)
- If unsure what linter the project uses, read the `lint` skill: `~/.mosaic/skills/lint/SKILL.md`

### Code Style
Follow [Google Style Guides](https://github.com/google/styleguide) for all languages.

### Commits
```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```
Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`

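The subject line above can be assembled mechanically when scripting commits; a trivial sketch with hypothetical values:

```shell
# Build a commit subject in the <type>(#issue): Brief description format
type="fix"; issue=123; summary="Handle empty stack response"
subject="${type}(#${issue}): ${summary}"
echo "$subject"   # → fix(#123): Handle empty stack response
```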
## Scratchpad Format

When working on issue #N, create `docs/scratchpads/N-short-name.md`:

```markdown
# Issue #N: Title

## Objective
[What this issue accomplishes]

## Approach
[Implementation plan]

## Progress
- [ ] Step 1
- [ ] Step 2

## Testing
[Test plan and results]

## Notes
[Findings, blockers, decisions]
```

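Deriving the `N-short-name` filename from an issue title can be scripted; a sketch, where the slug rule (lowercase, non-alphanumerics squeezed to hyphens) is an assumption rather than a stated project convention:

```shell
# Turn an issue number and title into docs/scratchpads/<n>-<slug>.md
issue=42
title="Fix Login Flow"
slug=$(echo "$title" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//;s/-$//')
echo "docs/scratchpads/${issue}-${slug}.md"   # → docs/scratchpads/42-fix-login-flow.md
```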
## Conditional Context

**Read the relevant guide before starting work:**

| Task Type | Guide |
|-----------|-------|
| Bootstrapping a new project | `~/.mosaic/guides/bootstrap.md` |
| Orchestrating autonomous task completion | `~/.mosaic/guides/orchestrator.md` |
| Ralph autonomous development | `~/.mosaic/guides/ralph-autonomous.md` |
| Frontend development | `~/.mosaic/guides/frontend.md` |
| Backend/API development | `~/.mosaic/guides/backend.md` |
| Code review | `~/.mosaic/guides/code-review.md` |
| Authentication/Authorization | `~/.mosaic/guides/authentication.md` |
| CI/CD pipelines & Docker builds | `~/.mosaic/guides/ci-cd-pipelines.md` |
| Infrastructure/DevOps | `~/.mosaic/guides/infrastructure.md` |
| QA/Testing | `~/.mosaic/guides/qa-testing.md` |
| Secrets management | See section below |

**Project-specific skills:**

| Project | Skill |
|---------|-------|
| jetrich/jarvis | `~/.mosaic/skills/jarvis/SKILL.md` |

## Secrets Management

**NEVER hardcode secrets in the codebase.** Choose the appropriate method based on your environment.

### If Using Vault
See `~/.mosaic/guides/vault-secrets.md` for the canonical structure and rules.

Quick reference:
```
{mount}/{service}/{component}/{secret-name}
Example: secret-prod/postgres/database/app
```

### If NOT Using Vault (Use .env Files)

**Structure:**
```
project-root/
├── .env.example        # Template with placeholder values (committed)
├── .env                # Local secrets (NEVER committed)
├── .env.development    # Dev overrides (optional, not committed)
├── .env.test           # Test environment (optional, not committed)
└── .gitignore          # Must include .env*
```

**Rules:**
1. **ALWAYS** add `.env*` to `.gitignore` (except `.env.example`)
2. Create `.env.example` with all required variables and placeholder values
3. Use descriptive variable names: `DATABASE_URL`, `API_SECRET_KEY`
4. Document required variables in README
5. Load via `dotenv` or framework-native methods

**.env.example template:**
```bash
# Database
DATABASE_URL=postgresql://user:password@localhost:5432/dbname
DATABASE_POOL_SIZE=10

# Authentication
JWT_SECRET_KEY=your-secret-key-here
JWT_EXPIRY_SECONDS=3600

# External APIs
STRIPE_API_KEY=sk_test_xxx
SENDGRID_API_KEY=SG.xxx

# App Config
APP_ENV=development
DEBUG=false
```

**Loading secrets:**
```python
# Python
from dotenv import load_dotenv
load_dotenv()

# Node.js
import 'dotenv/config';

# Or use framework-native (Next.js, NestJS, etc.)
```

### Security Rules (All Environments)
1. **Never commit secrets** - Use .env or Vault
2. **Never log secrets** - Mask in logs if needed
3. **Rotate compromised secrets immediately**
4. **Use different secrets per environment**
5. **Validate secrets exist at startup** - Fail fast if missing

## Multi-Agent Coordination

When launching concurrent agents:
```bash
nohup claude -p "<instruction>" > logs/agent-{name}.log 2>&1 &
```

- Maximum 10 simultaneous agents
- Monitor for errors and permission issues
- Add required permissions to `.claude/settings.json` → `allowedCommands`
- Restart failed agents after fixing issues

**For Ralph multi-agent:**
- Use worktrees for isolation
- Each agent works on a different story
- Coordinate via git (frequent pulls)

## Dev Server Infrastructure (web1.corp.uscllc.local)

### Shared Traefik Reverse Proxy

A shared Traefik instance handles routing for all dev/test services on this server.

**Location:** `~/src/traefik`

**Start Traefik:**
```bash
cd ~/src/traefik
docker compose up -d
```

**Dashboard:** http://localhost:8080

### Connecting Services to Traefik

Add to your service's `docker-compose.yml`:

```yaml
services:
  app:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`myapp.uscllc.com`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls=true"
      - "traefik.http.services.myapp.loadbalancer.server.port=3000"
    networks:
      - internal
      - traefik-public

networks:
  internal:
    driver: bridge
  traefik-public:
    external: true
```

**Note:** Uses a self-signed wildcard cert, so browsers will show a security warning.

### Active Dev Services

| Service | Domain | Repository |
|---------|--------|------------|
| Inventory Stickers | inventory.uscllc.com | ~/src/sticker-app |

## Project Structure

```
project-root/
├── AGENTS.md              # Codebase patterns for AI agents
├── docs/
│   └── scratchpads/       # Agent working documents
│       └── {issue#}-{name}.md
├── scripts/
│   └── ralph/             # Ralph autonomous development
│       ├── ralph.sh       # Loop script
│       ├── prd.json       # Current feature tasks
│       └── progress.txt   # Memory between iterations
├── tasks/                 # PRD documents
│   └── prd-{feature}.md
├── logs/                  # Log files
└── tests/                 # Test files
```

294 runtime/claude/context7-integration.md Normal file
@@ -0,0 +1,294 @@
# Context7 Integration for Atomic Code Implementer

## Overview

The atomic-code-implementer agent uses the Context7 MCP server to dynamically fetch up-to-date documentation for libraries and frameworks. This integration provides real-time access to the latest API documentation, best practices, and code examples.

## Integration Points

### 1. Preset-Driven Documentation Lookup

Each preset configuration includes a `context7Libraries` array that specifies which libraries to fetch documentation for:

```json
{
  "context7Libraries": [
    "@nestjs/common",
    "@nestjs/core",
    "@nestjs/typeorm",
    "typeorm",
    "class-validator"
  ]
}
```

When a preset is loaded, the agent automatically resolves and fetches documentation for all specified libraries.

### 2. Error-Driven Documentation Lookup

When build errors, type errors, or runtime issues occur, the agent can automatically look up documentation for:

- Error resolution patterns
- API migration guides
- Breaking change documentation
- Best practice guidelines

### 3. Implementation-Driven Lookup

During atomic task implementation, the agent can fetch:

- Framework-specific implementation patterns
- Library-specific configuration examples
- Performance optimization techniques
- Security best practices

## Context7 Usage Patterns

### Library Resolution

```javascript
// Resolve library ID from preset configuration
const libraryId = await mcp__context7__resolve_library_id({
  libraryName: "@nestjs/common"
});
```

### Documentation Retrieval

```javascript
// Get comprehensive documentation
const docs = await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "/nestjs/nest",
  topic: "controllers",
  tokens: 8000
});
```

### Error-Specific Lookups

```javascript
// Look up specific error patterns
const errorDocs = await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "/typescript/typescript",
  topic: "type errors",
  tokens: 5000
});
```

## Automatic Lookup Triggers

### 1. Preset Loading Phase
When an atomic task is started:
1. Detect tech stack from file extensions
2. Load appropriate preset configuration
3. Extract `context7Libraries` array
4. Resolve all library IDs
5. Fetch relevant documentation based on task context

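Step 3 (extracting the `context7Libraries` array) is a one-liner once the preset is on disk; a sketch assuming the preset is a JSON file named `preset.json` (the path is hypothetical):

```shell
# List the libraries a preset wants documentation for, one per line
jq -r '.context7Libraries[]' preset.json
```

Each emitted name then feeds the library-resolution call shown in the usage patterns above.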
### 2. Error Detection Phase
When quality hooks detect issues:
1. Parse error messages for library/framework references
2. Resolve documentation for problematic libraries
3. Look up error-specific resolution patterns
4. Apply common fixes based on documentation

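Step 1 (finding library references in error output) can be approximated by matching `node_modules` paths; a sketch on an invented error string — real hook output will vary:

```shell
# Extract a package name from a node_modules path in an error message (error text is made up)
err="TS2345: Argument of type 'X' is not assignable in node_modules/@nestjs/common/index.d.ts"
echo "$err" | grep -oE 'node_modules/(@[a-z0-9-]+/)?[a-z0-9.-]+' | head -n1 | sed 's|node_modules/||'
```

On the sample string this prints `@nestjs/common`, which can then be resolved to a Context7 library ID.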
### 3. Implementation Phase
During code implementation:
1. Detect new library imports or API usage
2. Automatically fetch documentation for unknown patterns
3. Provide implementation examples and best practices
4. Validate against latest API specifications

## Context7 Library Mappings

### NestJS Backend
```json
{
  "@nestjs/common": "/nestjs/nest",
  "@nestjs/typeorm": "/nestjs/typeorm",
  "typeorm": "/typeorm/typeorm",
  "class-validator": "/typestack/class-validator",
  "bcrypt": "/kelektiv/node.bcrypt.js"
}
```

### React Frontend
```json
{
  "react": "/facebook/react",
  "react-dom": "/facebook/react",
  "@tanstack/react-query": "/tanstack/query",
  "tailwindcss": "/tailwindlabs/tailwindcss",
  "@testing-library/react": "/testing-library/react-testing-library"
}
```

### Python FastAPI
```json
{
  "fastapi": "/tiangolo/fastapi",
  "sqlalchemy": "/sqlalchemy/sqlalchemy",
  "pydantic": "/samuelcolvin/pydantic",
  "pytest": "/pytest-dev/pytest"
}
```

## Integration Workflow
|
||||||
|
|
||||||
|
### Sequential Thinking Enhanced Lookup
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
1. **Preset Analysis Phase**
|
||||||
|
- Use sequential thinking to determine optimal documentation needs
|
||||||
|
- Analyze task requirements for specific library features
|
||||||
|
- Prioritize documentation lookup based on complexity
|
||||||
|
|
||||||
|
2. **Dynamic Documentation Loading**
|
||||||
|
- Load core framework documentation first
|
||||||
|
- Fetch specialized library docs based on task specifics
|
||||||
|
- Cache documentation for session reuse
|
||||||
|
|
||||||
|
3. **Implementation Guidance**
|
||||||
|
- Use retrieved docs to guide implementation decisions
|
||||||
|
- Apply documented best practices and patterns
|
||||||
|
- Validate implementation against official examples
|
||||||
|
```
|
||||||
|
|
||||||
|
### Error Resolution Workflow
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
1. **Error Detection**
|
||||||
|
- Parse error messages for library/API references
|
||||||
|
- Identify deprecated or changed APIs
|
||||||
|
- Extract relevant context from error stack traces
|
||||||
|
|
||||||
|
2. **Documentation Lookup**
|
||||||
|
- Resolve library documentation for error context
|
||||||
|
- Fetch migration guides for breaking changes
|
||||||
|
- Look up troubleshooting and FAQ sections
|
||||||
|
|
||||||
|
3. **Automated Remediation**
|
||||||
|
- Apply documented fixes and workarounds
|
||||||
|
- Update code to use current APIs
|
||||||
|
- Add proper error handling based on docs
|
||||||
|
```
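The error-detection step above hinges on pulling library references out of error text. A minimal sketch of that parsing, with the module-path heuristics (`site-packages/`, `node_modules/`) as illustrative assumptions rather than part of the documented workflow:

```python
import re

def extract_library_refs(error_text: str) -> list[str]:
    """Pull likely library/package names out of an error message or stack trace.

    Heuristics (assumptions for illustration):
    - `site-packages/<pkg>/` segments from Python tracebacks
    - `node_modules/<pkg>` or `node_modules/@scope/<pkg>` from JS stacks
    """
    libs: list[str] = []
    for match in re.finditer(r"site-packages/([A-Za-z0-9_]+)/", error_text):
        libs.append(match.group(1))
    for match in re.finditer(r"node_modules/((?:@[\w.-]+/)?[\w.-]+)", error_text):
        libs.append(match.group(1))
    # Preserve first-seen order, drop duplicates.
    seen: set[str] = set()
    return [lib for lib in libs if not (lib in seen or seen.add(lib))]

stack = (
    'File "/venv/lib/site-packages/sqlalchemy/orm/session.py", line 10\n'
    "    at Object.<anonymous> (node_modules/@tanstack/react-query/build/index.js:1:1)"
)
print(extract_library_refs(stack))  # ['sqlalchemy', '@tanstack/react-query']
```

The extracted names would then feed the documentation-lookup step (e.g. as `libraryName` inputs to a resolve call).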
## Configuration Examples

### Preset Configuration with Context7

```json
{
  "name": "NestJS HIPAA Healthcare",
  "techStack": {
    "framework": "NestJS",
    "database": "TypeORM + PostgreSQL"
  },
  "context7Libraries": [
    "@nestjs/common",
    "@nestjs/typeorm",
    "typeorm",
    "bcrypt",
    "helmet"
  ],
  "context7Topics": {
    "security": ["authentication", "authorization", "encryption"],
    "database": ["migrations", "relationships", "transactions"],
    "testing": ["unit tests", "integration tests", "mocking"]
  },
  "context7AutoLookup": {
    "onError": true,
    "onImport": true,
    "onDeprecation": true
  }
}
```

### Agent Integration Points

Context7 integration in `atomic-code-implementer.md`:

#### Phase 1: Preset Loading

```javascript
// Load preset and resolve documentation
const preset = loadPreset(detectedTechStack, domainContext);
const libraryDocs = await loadContext7Documentation(preset.context7Libraries);
```

#### Phase 2: Implementation Guidance

```javascript
// Get implementation examples during coding
const implementationDocs = await mcp__context7__get_library_docs({
  context7CompatibleLibraryID: "/nestjs/nest",
  topic: "controllers authentication",
  tokens: 6000
});
```

#### Phase 3: Error Resolution

```javascript
// Look up error-specific documentation
if (buildError.includes("TypeError: Cannot read property")) {
  const errorDocs = await mcp__context7__get_library_docs({
    context7CompatibleLibraryID: extractLibraryFromError(buildError),
    topic: "common errors troubleshooting",
    tokens: 4000
  });
}
```

## Best Practices

### 1. Documentation Caching

- Cache resolved library IDs for session duration
- Store frequently accessed documentation locally
- Implement intelligent cache invalidation
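The caching guidance above can be sketched as a session-scoped cache with time-based invalidation. `DocCache` and its TTL policy are illustrative assumptions, a stand-in for whatever invalidation strategy the agent actually uses:

```python
import time

class DocCache:
    """Session-scoped cache for resolved library IDs and fetched docs.

    Entries expire after `ttl` seconds -- a simple stand-in for the
    "intelligent cache invalidation" the guidelines call for.
    """

    def __init__(self, ttl: float = 900.0):
        self.ttl = ttl
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired: invalidate lazily on read
            return None
        return value

    def put(self, key: str, value: object) -> None:
        self._store[key] = (time.monotonic(), value)

cache = DocCache(ttl=0.05)
cache.put("resolve:/nestjs/nest", {"id": "/nestjs/nest"})
print(cache.get("resolve:/nestjs/nest"))  # hit
time.sleep(0.1)
print(cache.get("resolve:/nestjs/nest"))  # None: expired
```

Keying entries by the full request (library ID plus topic plus token budget) keeps a topic-specific lookup from shadowing a broader one.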
### 2. Context-Aware Lookups

- Tailor documentation queries to specific atomic task context
- Use targeted topics rather than generic documentation
- Prioritize relevant sections based on implementation needs

### 3. Error-Driven Learning

- Maintain error pattern → documentation mapping
- Learn from successful error resolutions
- Build knowledge base of common issues and solutions

### 4. Performance Optimization

- Batch documentation requests when possible
- Use appropriate token limits for different use cases
- Implement request deduplication
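Request deduplication is a generic pattern rather than a documented Context7 API; one way to sketch it is an in-flight map that lets concurrent identical requests share a single fetch:

```python
import asyncio

class Deduplicator:
    """Coalesce concurrent identical requests into a single fetch.

    Callers awaiting the same key share one in-flight task; the
    underlying fetch runs once per key at a time.
    """

    def __init__(self):
        self._inflight: dict[str, asyncio.Task] = {}

    async def fetch(self, key: str, fetcher):
        task = self._inflight.get(key)
        if task is None:
            task = asyncio.ensure_future(fetcher(key))
            self._inflight[key] = task
            # Drop the entry once done so later calls fetch fresh data.
            task.add_done_callback(lambda _: self._inflight.pop(key, None))
        return await task

calls = 0

async def fake_fetch(key: str) -> str:
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # simulate network latency
    return f"docs for {key}"

async def main() -> None:
    dedup = Deduplicator()
    results = await asyncio.gather(
        *(dedup.fetch("/nestjs/nest", fake_fetch) for _ in range(5))
    )
    print(results[0], "| fetched", calls, "time(s)")  # docs for /nestjs/nest | fetched 1 time(s)

asyncio.run(main())
```

Combined with the cache above it gives the usual two layers: dedup for concurrent identical requests, cache for repeated ones.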
## Troubleshooting

### Common Issues

1. **Library Not Found**

   ```javascript
   // Fallback to generic search
   const fallbackId = await mcp__context7__resolve_library_id({
     libraryName: `${libraryName} documentation`
   });
   ```

2. **Documentation Too Generic**

   ```javascript
   // Use more specific topics
   const specificDocs = await mcp__context7__get_library_docs({
     context7CompatibleLibraryID: libraryId,
     topic: `${specificFeature} implementation examples`,
     tokens: 8000
   });
   ```

3. **Rate Limiting**

   ```javascript
   // Implement exponential backoff
   const docs = await retryWithBackoff(() =>
     mcp__context7__get_library_docs(params)
   );
   ```
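`retryWithBackoff` above is assumed, not something Context7 provides. A minimal sketch of the pattern (in Python for brevity): double the delay each attempt, cap it, add jitter, and re-raise the last failure:

```python
import random
import time

def retry_with_backoff(fn, max_retries: int = 4, base_delay: float = 0.5, max_delay: float = 8.0):
    """Retry `fn` with exponential backoff plus jitter.

    Delay doubles each attempt (0.5s, 1s, 2s, ...) capped at `max_delay`;
    the last failure is re-raised.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 10))  # jitter avoids thundering herd

attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "docs payload"

print(retry_with_backoff(flaky, base_delay=0.01))  # docs payload
```

In practice you would catch only retryable errors (rate limits, timeouts) rather than bare `Exception`.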
This integration ensures the atomic code implementer always has access to the most current and relevant documentation, enabling it to produce high-quality, up-to-date implementations while following current best practices.

---

## `runtime/claude/hooks-config.json` (new file, 288 lines)
```json
{
  "name": "Universal Atomic Code Implementer Hooks",
  "description": "Comprehensive hooks configuration for quality enforcement and automatic remediation",
  "version": "1.0.0",
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "bash",
            "args": [
              "-c",
              "echo '[HOOK] Universal quality enforcement for $FILE_PATH'; if [[ \"$FILE_PATH\" == *.ts || \"$FILE_PATH\" == *.tsx ]]; then echo '[HOOK] TypeScript checks'; npx eslint --fix \"$FILE_PATH\" && npx prettier --write \"$FILE_PATH\" && npx tsc --noEmit || echo '[HOOK] TS completed'; elif [[ \"$FILE_PATH\" == *.js || \"$FILE_PATH\" == *.jsx ]]; then echo '[HOOK] JavaScript checks'; npx eslint --fix \"$FILE_PATH\" && npx prettier --write \"$FILE_PATH\" || echo '[HOOK] JS completed'; elif [[ \"$FILE_PATH\" == *.py ]]; then echo '[HOOK] Python checks'; black \"$FILE_PATH\" && flake8 \"$FILE_PATH\" && mypy \"$FILE_PATH\" || echo '[HOOK] Python completed'; fi"
            ]
          }
        ]
      }
    ]
  },
  "fileTypeHooks": {
    "*.ts": {
      "afterChange": [
        "echo '[HOOK] TypeScript file modified: ${FILE_PATH}'",
        "npx eslint --fix ${FILE_PATH}",
        "npx prettier --write ${FILE_PATH}",
        "npx tsc --noEmit --project tsconfig.json"
      ],
      "beforeDelete": [
        "echo '[HOOK] Checking for TypeScript file dependencies before deletion'"
      ]
    },
    "*.tsx": {
      "afterChange": [
        "echo '[HOOK] React TypeScript file modified: ${FILE_PATH}'",
        "npx eslint --fix ${FILE_PATH}",
        "npx prettier --write ${FILE_PATH}",
        "npx tsc --noEmit --project tsconfig.json",
        "if command -v npm >/dev/null 2>&1; then",
        "  npm run test:component -- ${FILE_NAME} 2>/dev/null || echo '[HOOK] Component tests checked'",
        "fi"
      ]
    },
    "*.js": {
      "afterChange": [
        "echo '[HOOK] JavaScript file modified: ${FILE_PATH}'",
        "npx eslint --fix ${FILE_PATH}",
        "npx prettier --write ${FILE_PATH}"
      ]
    },
    "*.jsx": {
      "afterChange": [
        "echo '[HOOK] React JavaScript file modified: ${FILE_PATH}'",
        "npx eslint --fix ${FILE_PATH}",
        "npx prettier --write ${FILE_PATH}",
        "if command -v npm >/dev/null 2>&1; then",
        "  npm run test:component -- ${FILE_NAME} 2>/dev/null || echo '[HOOK] Component tests checked'",
        "fi"
      ]
    },
    "*.py": {
      "afterChange": [
        "echo '[HOOK] Python file modified: ${FILE_PATH}'",
        "black ${FILE_PATH}",
        "flake8 ${FILE_PATH} || echo '[HOOK] Flake8 linting completed'",
        "mypy ${FILE_PATH} || echo '[HOOK] MyPy type checking completed'",
        "if command -v pytest >/dev/null 2>&1; then",
        "  pytest ${FILE_PATH%.*}_test.py 2>/dev/null || echo '[HOOK] Python tests checked'",
        "fi"
      ]
    },
    "package.json": {
      "afterChange": [
        "echo '[HOOK] package.json modified, updating dependencies'",
        "npm install --no-audit --no-fund || echo '[HOOK] Dependency update completed'"
      ]
    },
    "requirements.txt": {
      "afterChange": [
        "echo '[HOOK] requirements.txt modified, updating Python dependencies'",
        "pip install -r requirements.txt || echo '[HOOK] Python dependency update completed'"
      ]
    },
    "*.json": {
      "afterChange": [
        "echo '[HOOK] JSON file modified: ${FILE_PATH}'",
        "if command -v jq >/dev/null 2>&1; then",
        "  jq . ${FILE_PATH} > /dev/null || echo '[HOOK] JSON validation failed'",
        "fi"
      ]
    },
    "*.md": {
      "afterChange": [
        "echo '[HOOK] Markdown file modified: ${FILE_PATH}'",
        "if command -v prettier >/dev/null 2>&1; then",
        "  npx prettier --write ${FILE_PATH} || echo '[HOOK] Markdown formatting completed'",
        "fi"
      ]
    }
  },
  "remediationActions": {
    "RETRY_OPERATION": {
      "description": "Retry the last file operation after applying fixes",
      "maxRetries": 2,
      "backoffMs": 1000
    },
    "CONTINUE_WITH_WARNING": {
      "description": "Continue execution but log warnings for manual review",
      "logLevel": "warning"
    },
    "ABORT_WITH_ERROR": {
      "description": "Stop execution and require manual intervention",
      "logLevel": "error"
    },
    "TRIGGER_QA_AGENT": {
      "description": "Escalate to QA validation agent for complex issues",
      "agent": "qa-validation-agent"
    },
    "REQUEST_CONTEXT7_HELP": {
      "description": "Look up documentation for error resolution",
      "tool": "mcp__context7__get-library-docs"
    }
  },
  "qualityGates": {
    "typescript": {
      "eslint": { "enabled": true, "config": ".eslintrc.js", "autoFix": true, "failOnError": false },
      "prettier": { "enabled": true, "config": ".prettierrc", "autoFix": true },
      "typeCheck": { "enabled": true, "config": "tsconfig.json", "failOnError": false },
      "testing": { "enabled": true, "runAffected": true, "coverage": 80 }
    },
    "python": {
      "black": { "enabled": true, "lineLength": 88, "autoFix": true },
      "flake8": { "enabled": true, "config": ".flake8", "failOnError": false },
      "mypy": { "enabled": true, "config": "mypy.ini", "failOnError": false },
      "pytest": { "enabled": true, "coverage": 90 }
    },
    "javascript": {
      "eslint": { "enabled": true, "autoFix": true },
      "prettier": { "enabled": true, "autoFix": true }
    }
  },
  "performanceOptimization": {
    "parallelExecution": { "enabled": true, "maxConcurrency": 4 },
    "caching": { "enabled": true, "eslint": true, "prettier": true, "typescript": true },
    "incrementalChecks": { "enabled": true, "onlyModifiedFiles": true }
  },
  "monitoring": {
    "metrics": {
      "hookExecutionTime": true,
      "errorRates": true,
      "remediationSuccess": true
    },
    "logging": { "level": "info", "format": "json", "includeStackTrace": true },
    "alerts": {
      "highErrorRate": { "threshold": 0.1, "action": "log" },
      "slowHookExecution": { "thresholdMs": 10000, "action": "log" }
    }
  },
  "integration": {
    "presets": {
      "loadHooksFromPresets": true,
      "overrideWithProjectConfig": true
    },
    "cicd": { "skipInCI": false, "reportToCI": true },
    "ide": {
      "vscode": { "showNotifications": true, "autoSave": false }
    }
  },
  "customCommands": {
    "fullQualityCheck": {
      "description": "Run comprehensive quality checks",
      "commands": [
        "npm run lint",
        "npm run format",
        "npm run build",
        "npm run test",
        "npm run type-check"
      ]
    },
    "securityScan": {
      "description": "Run security scanning",
      "commands": [
        "npm audit",
        "npx eslint . --ext .ts,.tsx,.js,.jsx --config .eslintrc.security.js || echo 'Security scan completed'"
      ]
    },
    "performanceCheck": {
      "description": "Run performance analysis",
      "commands": [
        "npm run build:analyze || echo 'Bundle analysis completed'",
        "npm run lighthouse || echo 'Lighthouse audit completed'"
      ]
    }
  },
  "documentation": {
    "usage": "This configuration provides comprehensive quality enforcement through hooks",
    "examples": [
      {
        "scenario": "TypeScript file creation",
        "flow": "Write file → ESLint auto-fix → Prettier format → TypeScript check → Tests"
      },
      {
        "scenario": "Python file modification",
        "flow": "Edit file → Black format → Flake8 lint → MyPy type check → Pytest"
      },
      {
        "scenario": "Build error",
        "flow": "Error detected → Analyze common issues → Apply fixes → Retry or continue"
      }
    ],
    "troubleshooting": [
      {
        "issue": "Hooks taking too long",
        "solution": "Enable parallelExecution and incremental checks"
      },
      {
        "issue": "False positive errors",
        "solution": "Adjust quality gate thresholds or use CONTINUE_WITH_WARNING"
      }
    ]
  }
}
```
## `runtime/claude/settings.json` (new file, 228 lines)
```json
{
  "model": "opus",
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|MultiEdit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "~/.mosaic/rails/qa/qa-hook-stdin.sh",
            "timeout": 60
          }
        ]
      }
    ]
  },
  "enabledPlugins": {
    "frontend-design@claude-plugins-official": true,
    "feature-dev@claude-plugins-official": true,
    "code-review@claude-plugins-official": true,
    "pr-review-toolkit@claude-plugins-official": true
  },
  "skipDangerousModePermissionPrompt": true,
  "allowedCommands": [
    "npm", "npm install", "npm run", "npm test", "npm build", "npm start",
    "npm run dev", "npm run build", "npm run lint", "npm run typecheck",
    "npm run test:ci", "npm run test:e2e", "npm run test:unit", "npm run test:integration",
    "npm run test:cov", "npm run test:security", "npm run security:scan", "npm run security:audit",
    "npm run performance:benchmark", "npm run build:dev", "npm run build:prod",
    "npm run test", "npm run test:watch", "npm run migrate", "npm run migrate:rollback",
    "npm run db:seed", "npm run db:reset",
    "node", "yarn", "pnpm", "npx", "npx tsc", "npx eslint", "npx prettier", "npx jest", "npx vitest",
    "git", "git add", "git commit", "git push", "git pull", "git status", "git diff", "git log",
    "git branch", "git checkout", "git merge", "git init", "git remote", "git fetch", "git reset",
    "git rebase", "git stash", "git tag", "git show", "git config",
    "gh", "gh issue", "gh pr", "gh repo", "gh api",
    "docker", "docker build", "docker run", "docker ps", "docker logs", "docker exec",
    "docker stop", "docker start", "docker pull", "docker push",
    "docker-compose", "docker-compose up", "docker-compose down", "docker-compose build",
    "docker-compose logs", "docker-compose ps", "docker-compose exec",
    "kubectl", "kubectl get", "kubectl describe", "kubectl logs", "kubectl apply",
    "kubectl delete", "kubectl port-forward",
    "mkdir", "touch", "chmod", "chown", "ls", "cd", "pwd", "cp", "mv", "rm", "cat", "echo",
    "head", "tail", "grep", "grep -E", "grep -r",
    "find", "find -name", "find -type", "find -path", "find -exec", "find . -type f", "find . -type d",
    "wc", "sort", "uniq", "curl", "wget", "ping", "netstat", "ss", "lsof",
    "psql", "pg_dump", "pg_restore", "sqlite3",
    "jest", "vitest", "playwright", "cypress", "artillery", "lighthouse",
    "tsc", "eslint", "prettier", "snyk", "semgrep",
    "tar", "gzip", "unzip", "zip",
    "which", "whoami", "id", "env", "export", "source", "sleep", "date", "uptime",
    "df", "du", "free", "top", "htop", "ps", "tree",
    "jq", "sed", "awk", "xargs", "tee", "test", "true", "false",
    "basename", "dirname", "realpath", "readlink", "stat", "file",
    "make", "cmake", "gcc", "g++", "clang",
    "python", "python3", "pip", "pip3", "pip install", "poetry", "pipenv",
    "go", "go build", "go test", "go run", "go mod",
    "cargo", "rustc", "ruby", "gem", "bundle", "rake",
    "java", "javac", "mvn", "gradle", "dotnet", "msbuild",
    "php", "composer", "perl", "cpan", "nohup"
  ],
  "enableAllMcpTools": true
}
```
## `skills-local/jarvis/SKILL.md` (new file, 206 lines)
---
name: jarvis
description: "Jarvis Platform development context. Use when working on the jetrich/jarvis repository. Provides architecture knowledge, coding patterns, and component locations."
---

# Jarvis Platform Development

## Project Overview

Jarvis is a self-hosted AI assistant platform built with:

- **Backend:** FastAPI (Python 3.11+)
- **Frontend:** Next.js 14+ (App Router)
- **Database:** PostgreSQL with pgvector
- **Plugins:** Modular LLM providers and integrations

Repository: `jetrich/jarvis`

---

## Architecture

```
jarvis/
├── apps/
│   ├── api/                 # FastAPI backend
│   │   └── src/
│   │       ├── routes/      # API endpoints
│   │       ├── services/    # Business logic
│   │       ├── models/      # SQLAlchemy models
│   │       └── core/        # Config, deps, security
│   └── web/                 # Next.js frontend
│       └── src/
│           ├── app/         # App router pages
│           ├── components/  # React components
│           └── lib/         # Utilities
├── packages/
│   └── plugins/             # jarvis_plugins package
│       └── jarvis_plugins/
│           ├── llm/         # LLM providers (ollama, claude, etc.)
│           └── integrations/# External integrations
├── docs/
│   └── scratchpads/         # Agent working docs
└── scripts/                 # Utility scripts
```

---

## Key Patterns

### LLM Provider Pattern

All LLM providers implement `BaseLLMProvider`:

```python
# packages/plugins/jarvis_plugins/llm/base.py
class BaseLLMProvider(ABC):
    @abstractmethod
    async def generate(self, prompt: str, **kwargs) -> str: ...

    @abstractmethod
    async def stream(self, prompt: str, **kwargs) -> AsyncIterator[str]: ...
```
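A concrete provider implementing this interface might look like the sketch below. `EchoProvider` is hypothetical (not a real Jarvis plugin), and the base class is repeated with its imports so the snippet runs standalone:

```python
from abc import ABC, abstractmethod
from typing import AsyncIterator
import asyncio

class BaseLLMProvider(ABC):  # mirrors jarvis_plugins/llm/base.py
    @abstractmethod
    async def generate(self, prompt: str, **kwargs) -> str: ...

    @abstractmethod
    async def stream(self, prompt: str, **kwargs) -> AsyncIterator[str]: ...

class EchoProvider(BaseLLMProvider):
    """Hypothetical provider that echoes the prompt -- useful for wiring tests."""

    async def generate(self, prompt: str, **kwargs) -> str:
        return f"echo: {prompt}"

    async def stream(self, prompt: str, **kwargs) -> AsyncIterator[str]:
        for word in prompt.split():
            await asyncio.sleep(0)  # yield control, as a network stream would
            yield word

async def main() -> None:
    provider = EchoProvider()
    print(await provider.generate("hello jarvis"))          # echo: hello jarvis
    print([chunk async for chunk in provider.stream("hello jarvis")])  # ['hello', 'jarvis']

asyncio.run(main())
```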
### Integration Pattern

External integrations (GitHub, Calendar, etc.) follow:

```python
# packages/plugins/jarvis_plugins/integrations/base.py
class BaseIntegration(ABC):
    @abstractmethod
    async def authenticate(self, credentials: dict) -> bool: ...

    @abstractmethod
    async def execute(self, action: str, params: dict) -> dict: ...
```

### API Route Pattern

FastAPI routes use dependency injection:

```python
@router.get("/items")
async def list_items(
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user),
    service: ItemService = Depends(get_item_service)
):
    return await service.list(db, current_user.id)
```

### Frontend Component Pattern

Use shadcn/ui + server components by default:

```tsx
// Server component (default)
export default async function DashboardPage() {
  const data = await fetchData();
  return <Dashboard data={data} />;
}

// Client component (when needed)
"use client"
export function InteractiveWidget() {
  const [state, setState] = useState();
  // ...
}
```

---

## Database

- **ORM:** SQLAlchemy 2.0+
- **Migrations:** Alembic
- **Vector Store:** pgvector extension

### Creating Migrations

```bash
cd apps/api
alembic revision --autogenerate -m "description"
alembic upgrade head
```

---

## Testing

### Backend

```bash
cd apps/api
pytest
pytest --cov=src
```

### Frontend

```bash
cd apps/web
npm test
npm run test:e2e
```

---

## Quality Commands

```bash
# Backend
cd apps/api
ruff check .
ruff format .
mypy src/

# Frontend
cd apps/web
npm run lint
npm run typecheck
npm run format
```

---

## Active Development Areas

| Issue | Feature | Priority |
|-------|---------|----------|
| #84 | Per-function LLM routing | High |
| #85 | Ralph autonomous coding loop | High |
| #86 | Thinking models (CoT UI) | Medium |
| #87 | Local image generation | Medium |
| #88 | Deep research mode | Medium |
| #89 | Uncensored models + alignment | Medium |
| #90 | OCR capabilities | Medium |
| #91 | Authentik SSO | Medium |
| #40 | Claude Max + Claude Code | High |

---

## Environment Setup

```bash
# Backend
cd apps/api
cp .env.example .env
pip install -e ".[dev]"

# Frontend
cd apps/web
cp .env.example .env.local
npm install

# Database
docker-compose up -d postgres
alembic upgrade head
```

---

## Commit Convention

```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`
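The convention above can be checked mechanically. This validator is an illustrative sketch, not a script that ships with the repo:

```python
import re

COMMIT_RE = re.compile(
    r"^(feat|fix|docs|test|refactor|chore)"  # allowed types
    r"\(#\d+\): "                            # issue reference, colon, space
    r"\S.*$"                                 # non-empty description
)

def valid_commit_subject(subject: str) -> bool:
    """Check a commit subject against `<type>(#issue): Brief description`."""
    return COMMIT_RE.match(subject) is not None

print(valid_commit_subject("feat(#84): Add per-function LLM routing"))  # True
print(valid_commit_subject("update stuff"))                             # False
```

Wired into a `commit-msg` git hook, it would reject subjects that drop the issue reference or use an unknown type.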
## `skills-local/mosaic-standards/SKILL.md` (new file, 32 lines)
---
name: mosaic-standards
description: Load machine-wide Mosaic standards and enforce the repository lifecycle contract. Use at session start for any coding runtime (Codex, Claude, OpenCode, etc.).
---

# Mosaic Standards

## Load Order

1. `~/.mosaic/STANDARDS.md`
2. Repository `AGENTS.md`
3. Repo-local `.mosaic/repo-hooks.sh` when present

## Session Lifecycle

- Start: `scripts/agent/session-start.sh`
- Priority scan: `scripts/agent/critical.sh`
- End: `scripts/agent/session-end.sh`

If wrappers are available, you may use:

- `mosaic-session-start`
- `mosaic-critical`
- `mosaic-session-end`

## Enforcement Rules

- Treat `~/.mosaic` as canonical for shared guides, rails, profiles, and skills.
- Do not edit generated project views directly when the repo defines canonical data sources.
- Pull/rebase before edits in shared repositories.
- Run project verification commands before claiming completion.
- Use a non-destructive git workflow unless explicitly instructed otherwise.
## `skills-local/prd/SKILL.md` (new file, 240 lines)
|
---
|
||||||
|
name: prd
|
||||||
|
description: "Generate a Product Requirements Document (PRD) for a new feature. Use when planning a feature, starting a new project, or when asked to create a PRD. Triggers on: create a prd, write prd for, plan this feature, requirements for, spec out."
|
||||||
|
---
|
||||||
|
|
||||||
|
# PRD Generator
|
||||||
|
|
||||||
|
Create detailed Product Requirements Documents that are clear, actionable, and suitable for implementation.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## The Job
|
||||||
|
|
||||||
|
1. Receive a feature description from the user
|
||||||
|
2. Ask 3-5 essential clarifying questions (with lettered options)
|
||||||
|
3. Generate a structured PRD based on answers
|
||||||
|
4. Save to `tasks/prd-[feature-name].md`
|
||||||
|
|
||||||
|
**Important:** Do NOT start implementing. Just create the PRD.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Step 1: Clarifying Questions
|
||||||
|
|
||||||
|
Ask only critical questions where the initial prompt is ambiguous. Focus on:
|
||||||
|
|
||||||
|
- **Problem/Goal:** What problem does this solve?
|
||||||
|
- **Core Functionality:** What are the key actions?
|
||||||
|
- **Scope/Boundaries:** What should it NOT do?
|
||||||
|
- **Success Criteria:** How do we know it's done?
|
||||||
|
|
||||||
|
### Format Questions Like This:
|
||||||
|
|
||||||
|
```
|
1. What is the primary goal of this feature?
A. Improve user onboarding experience
B. Increase user retention
C. Reduce support burden
D. Other: [please specify]

2. Who is the target user?
A. New users only
B. Existing users only
C. All users
D. Admin users only

3. What is the scope?
A. Minimal viable version
B. Full-featured implementation
C. Just the backend/API
D. Just the UI
```

This lets users respond with "1A, 2C, 3B" for quick iteration.

---

## Step 2: PRD Structure

Generate the PRD with these sections:

### 1. Introduction/Overview

Brief description of the feature and the problem it solves.

### 2. Goals

Specific, measurable objectives (bullet list).

### 3. User Stories

Each story needs:

- **Title:** Short descriptive name
- **Description:** "As a [user], I want [feature] so that [benefit]"
- **Acceptance Criteria:** Verifiable checklist of what "done" means

Each story should be small enough to implement in one focused session.

**Format:**

```markdown
### US-001: [Title]

**Description:** As a [user], I want [feature] so that [benefit].

**Acceptance Criteria:**

- [ ] Specific verifiable criterion
- [ ] Another criterion
- [ ] Typecheck/lint passes
- [ ] **[UI stories only]** Verify in browser using dev-browser skill
```

**Important:**

- Acceptance criteria must be verifiable, not vague. "Works correctly" is bad. "Button shows confirmation dialog before deleting" is good.
- **For any story with UI changes:** Always include "Verify in browser using dev-browser skill" as an acceptance criterion. This ensures visual verification of frontend work.

### 4. Functional Requirements

Numbered list of specific functionalities:

- "FR-1: The system must allow users to..."
- "FR-2: When a user clicks X, the system must..."

Be explicit and unambiguous.

### 5. Non-Goals (Out of Scope)

What this feature will NOT include. Critical for managing scope.

### 6. Design Considerations (Optional)

- UI/UX requirements
- Link to mockups if available
- Relevant existing components to reuse

### 7. Technical Considerations (Optional)

- Known constraints or dependencies
- Integration points with existing systems
- Performance requirements

### 8. Success Metrics

How will success be measured?

- "Reduce time to complete X by 50%"
- "Increase conversion rate by 10%"

### 9. Open Questions

Remaining questions or areas needing clarification.

---

## Writing for Junior Developers

The PRD reader may be a junior developer or an AI agent. Therefore:

- Be explicit and unambiguous
- Avoid jargon, or explain it
- Provide enough detail to understand purpose and core logic
- Number requirements for easy reference
- Use concrete examples where helpful

---

## Output

- **Format:** Markdown (`.md`)
- **Location:** `tasks/`
- **Filename:** `prd-[feature-name].md` (kebab-case)
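The kebab-case filename convention above can be sketched in a few lines (Python purely for illustration; `prd_filename` is a hypothetical helper, not part of the skill):

```python
import re

def prd_filename(feature: str) -> str:
    """Kebab-case a feature name into the tasks/prd-[feature-name].md path."""
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower()).strip("-")
    return f"tasks/prd-{slug}.md"

print(prd_filename("Task Priority System"))  # tasks/prd-task-priority-system.md
```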

---

## Example PRD

```markdown
# PRD: Task Priority System

## Introduction

Add priority levels to tasks so users can focus on what matters most. Tasks can be marked as high, medium, or low priority, with visual indicators and filtering to help users manage their workload effectively.

## Goals

- Allow assigning priority (high/medium/low) to any task
- Provide clear visual differentiation between priority levels
- Enable filtering and sorting by priority
- Default new tasks to medium priority

## User Stories

### US-001: Add priority field to database

**Description:** As a developer, I need to store task priority so it persists across sessions.

**Acceptance Criteria:**

- [ ] Add priority column to tasks table: 'high' | 'medium' | 'low' (default 'medium')
- [ ] Generate and run migration successfully
- [ ] Typecheck passes

### US-002: Display priority indicator on task cards

**Description:** As a user, I want to see task priority at a glance so I know what needs attention first.

**Acceptance Criteria:**

- [ ] Each task card shows colored priority badge (red=high, yellow=medium, gray=low)
- [ ] Priority visible without hovering or clicking
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

### US-003: Add priority selector to task edit

**Description:** As a user, I want to change a task's priority when editing it.

**Acceptance Criteria:**

- [ ] Priority dropdown in task edit modal
- [ ] Shows current priority as selected
- [ ] Saves immediately on selection change
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

### US-004: Filter tasks by priority

**Description:** As a user, I want to filter the task list to see only high-priority items when I'm focused.

**Acceptance Criteria:**

- [ ] Filter dropdown with options: All | High | Medium | Low
- [ ] Filter persists in URL params
- [ ] Empty state message when no tasks match filter
- [ ] Typecheck passes
- [ ] Verify in browser using dev-browser skill

## Functional Requirements

- FR-1: Add `priority` field to tasks table ('high' | 'medium' | 'low', default 'medium')
- FR-2: Display colored priority badge on each task card
- FR-3: Include priority selector in task edit modal
- FR-4: Add priority filter dropdown to task list header
- FR-5: Sort by priority within each status column (high to medium to low)

## Non-Goals

- No priority-based notifications or reminders
- No automatic priority assignment based on due date
- No priority inheritance for subtasks

## Technical Considerations

- Reuse existing badge component with color variants
- Filter state managed via URL search params
- Priority stored in database, not computed

## Success Metrics

- Users can change priority in under 2 clicks
- High-priority tasks immediately visible at top of lists
- No regression in task list performance

## Open Questions

- Should priority affect task ordering within a column?
- Should we add keyboard shortcuts for priority changes?
```

---

## Checklist

Before saving the PRD:

- [ ] Asked clarifying questions with lettered options
- [ ] Incorporated user's answers
- [ ] User stories are small and specific
- [ ] Functional requirements are numbered and unambiguous
- [ ] Non-goals section defines clear boundaries
- [ ] Saved to `tasks/prd-[feature-name].md`
skills-local/ralph/SKILL.md (new file, 257 lines)
@@ -0,0 +1,257 @@
---
name: ralph
description: "Convert PRDs to prd.json format for the Ralph autonomous agent system. Use when you have an existing PRD and need to convert it to Ralph's JSON format. Triggers on: convert this prd, turn this into ralph format, create prd.json from this, ralph json."
---

# Ralph PRD Converter

Converts existing PRDs to the prd.json format that Ralph uses for autonomous execution.

---

## The Job

Take a PRD (markdown file or text) and convert it to `prd.json` in your ralph directory.

---

## Output Format

```json
{
  "project": "[Project Name]",
  "branchName": "ralph/[feature-name-kebab-case]",
  "description": "[Feature description from PRD title/intro]",
  "userStories": [
    {
      "id": "US-001",
      "title": "[Story title]",
      "description": "As a [user], I want [feature] so that [benefit]",
      "acceptanceCriteria": [
        "Criterion 1",
        "Criterion 2",
        "Typecheck passes"
      ],
      "priority": 1,
      "passes": false,
      "notes": ""
    }
  ]
}
```
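Since every entry must follow this shape, a converter can sanity-check its output before writing the file. A minimal sketch (`check_prd` is a hypothetical helper, not part of Ralph itself):

```python
REQUIRED_STORY_KEYS = {"id", "title", "description", "acceptanceCriteria",
                       "priority", "passes", "notes"}

def check_prd(doc: dict) -> list[str]:
    """Return a list of problems; an empty list means the shape matches."""
    problems = []
    for key in ("project", "branchName", "description", "userStories"):
        if key not in doc:
            problems.append(f"missing top-level key: {key}")
    for story in doc.get("userStories", []):
        missing = REQUIRED_STORY_KEYS - story.keys()
        if missing:
            problems.append(f"{story.get('id', '?')}: missing {sorted(missing)}")
        if not story.get("acceptanceCriteria"):
            problems.append(f"{story.get('id', '?')}: no acceptance criteria")
    return problems
```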

---

## Story Size: The Number One Rule

**Each story must be completable in ONE Ralph iteration (one context window).**

Ralph spawns a fresh Amp instance per iteration with no memory of previous work. If a story is too big, the LLM runs out of context before finishing and produces broken code.

### Right-sized stories:

- Add a database column and migration
- Add a UI component to an existing page
- Update a server action with new logic
- Add a filter dropdown to a list

### Too big (split these):

- "Build the entire dashboard" - Split into: schema, queries, UI components, filters
- "Add authentication" - Split into: schema, middleware, login UI, session handling
- "Refactor the API" - Split into one story per endpoint or pattern

**Rule of thumb:** If you cannot describe the change in 2-3 sentences, it is too big.

---

## Story Ordering: Dependencies First

Stories execute in priority order. Earlier stories must not depend on later ones.

**Correct order:**

1. Schema/database changes (migrations)
2. Server actions / backend logic
3. UI components that use the backend
4. Dashboard/summary views that aggregate data

**Wrong order:**

1. UI component (depends on schema that does not exist yet)
2. Schema change

---

## Acceptance Criteria: Must Be Verifiable

Each criterion must be something Ralph can CHECK, not something vague.

### Good criteria (verifiable):

- "Add `status` column to tasks table with default 'pending'"
- "Filter dropdown has options: All, Active, Completed"
- "Clicking delete shows confirmation dialog"
- "Typecheck passes"
- "Tests pass"

### Bad criteria (vague):

- "Works correctly"
- "User can do X easily"
- "Good UX"
- "Handles edge cases"

### Always include as the final criterion:

```
"Typecheck passes"
```

### For stories with testable logic, also include:

```
"Tests pass"
```

### For stories that change UI, also include:

```
"Verify in browser using dev-browser skill"
```

Frontend stories are NOT complete until visually verified. Ralph will use the dev-browser skill to navigate to the page, interact with the UI, and confirm changes work.

---

## Conversion Rules

1. **Each user story becomes one JSON entry**
2. **IDs**: Sequential (US-001, US-002, etc.)
3. **Priority**: Based on dependency order, then document order
4. **All stories**: `passes: false` and empty `notes`
5. **branchName**: Derive from feature name, kebab-case, prefixed with `ralph/`
6. **Always add**: "Typecheck passes" to every story's acceptance criteria
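Rule 6 is mechanical, so it can be enforced after conversion rather than remembered during it. A sketch (a hypothetical post-processing helper, not part of Ralph):

```python
def ensure_typecheck(doc: dict) -> dict:
    """Append "Typecheck passes" to any story that is missing it (rule 6)."""
    for story in doc["userStories"]:
        if "Typecheck passes" not in story["acceptanceCriteria"]:
            story["acceptanceCriteria"].append("Typecheck passes")
    return doc
```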

---

## Splitting Large PRDs

If a PRD has big features, split them:

**Original:**

> "Add user notification system"

**Split into:**

1. US-001: Add notifications table to database
2. US-002: Create notification service for sending notifications
3. US-003: Add notification bell icon to header
4. US-004: Create notification dropdown panel
5. US-005: Add mark-as-read functionality
6. US-006: Add notification preferences page

Each is one focused change that can be completed and verified independently.

---

## Example

**Input PRD:**

```markdown
# Task Status Feature

Add ability to mark tasks with different statuses.

## Requirements

- Toggle between pending/in-progress/done on task list
- Filter list by status
- Show status badge on each task
- Persist status in database
```

**Output prd.json:**

```json
{
  "project": "TaskApp",
  "branchName": "ralph/task-status",
  "description": "Task Status Feature - Track task progress with status indicators",
  "userStories": [
    {
      "id": "US-001",
      "title": "Add status field to tasks table",
      "description": "As a developer, I need to store task status in the database.",
      "acceptanceCriteria": [
        "Add status column: 'pending' | 'in_progress' | 'done' (default 'pending')",
        "Generate and run migration successfully",
        "Typecheck passes"
      ],
      "priority": 1,
      "passes": false,
      "notes": ""
    },
    {
      "id": "US-002",
      "title": "Display status badge on task cards",
      "description": "As a user, I want to see task status at a glance.",
      "acceptanceCriteria": [
        "Each task card shows colored status badge",
        "Badge colors: gray=pending, blue=in_progress, green=done",
        "Typecheck passes",
        "Verify in browser using dev-browser skill"
      ],
      "priority": 2,
      "passes": false,
      "notes": ""
    },
    {
      "id": "US-003",
      "title": "Add status toggle to task list rows",
      "description": "As a user, I want to change task status directly from the list.",
      "acceptanceCriteria": [
        "Each row has status dropdown or toggle",
        "Changing status saves immediately",
        "UI updates without page refresh",
        "Typecheck passes",
        "Verify in browser using dev-browser skill"
      ],
      "priority": 3,
      "passes": false,
      "notes": ""
    },
    {
      "id": "US-004",
      "title": "Filter tasks by status",
      "description": "As a user, I want to filter the list to see only certain statuses.",
      "acceptanceCriteria": [
        "Filter dropdown: All | Pending | In Progress | Done",
        "Filter persists in URL params",
        "Typecheck passes",
        "Verify in browser using dev-browser skill"
      ],
      "priority": 4,
      "passes": false,
      "notes": ""
    }
  ]
}
```

---

## Archiving Previous Runs

**Before writing a new prd.json, check if there is an existing one from a different feature:**

1. Read the current `prd.json` if it exists
2. Check if `branchName` differs from the new feature's branch name
3. If different AND `progress.txt` has content beyond the header:
   - Create archive folder: `archive/YYYY-MM-DD-feature-name/`
   - Copy current `prd.json` and `progress.txt` to archive
   - Reset `progress.txt` with fresh header

**The ralph.sh script handles this automatically** when you run it, but if you are manually updating prd.json between runs, archive first.
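For manual runs, the archive step sketches out roughly like this. The archive layout follows the steps above, but the one-line "header only" heuristic and the reset header text are assumptions, not ralph.sh's exact logic:

```python
import json
import shutil
from datetime import date
from pathlib import Path

def archive_if_stale(new_branch: str, root: Path = Path(".")) -> bool:
    """Archive prd.json/progress.txt when they belong to a different feature."""
    prd, progress = root / "prd.json", root / "progress.txt"
    if not prd.exists():
        return False
    old_branch = json.loads(prd.read_text()).get("branchName", "")
    # Assumption: more than one line in progress.txt means work was started.
    started = progress.exists() and len(progress.read_text().splitlines()) > 1
    if old_branch and old_branch != new_branch and started:
        dest = root / "archive" / f"{date.today():%Y-%m-%d}-{old_branch.removeprefix('ralph/')}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(prd, dest / "prd.json")
        shutil.copy2(progress, dest / "progress.txt")
        progress.write_text("# Ralph progress log\n")  # assumed fresh header
        return True
    return False
```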

---

## Checklist Before Saving

Before writing prd.json, verify:

- [ ] **Previous run archived** (if prd.json exists with a different branchName, archive it first)
- [ ] Each story is completable in one iteration (small enough)
- [ ] Stories are ordered by dependency (schema to backend to UI)
- [ ] Every story has "Typecheck passes" as a criterion
- [ ] UI stories have "Verify in browser using dev-browser skill" as a criterion
- [ ] Acceptance criteria are verifiable (not vague)
- [ ] No story depends on a later story
skills-local/setup-cicd/SKILL.md (new file, 304 lines)
@@ -0,0 +1,304 @@
---
name: setup-cicd
description: "Configure CI/CD Docker build, push, and package linking for a project. Use when adding Docker builds to a Woodpecker pipeline, setting up Gitea container registry, or implementing CI/CD for deployment. Triggers on: setup cicd, add docker builds, configure pipeline, add ci/cd, setup ci."
---

# CI/CD Pipeline Setup

Configure Docker build, registry push, and package linking for a Woodpecker CI pipeline using Kaniko and Gitea's container registry.

**Before starting:** Read `~/.mosaic/guides/ci-cd-pipelines.md` for deep background on the patterns used here.

**Reference implementation:** `~/src/mosaic-stack/.woodpecker.yml`

---

## The Job

1. Scan the current project for services, Dockerfiles, and registry info
2. Ask clarifying questions about what to build and how to name images
3. Generate Woodpecker YAML for Docker build/push/link steps
4. Provide secrets configuration commands
5. Output a verification checklist

**Important:** This skill generates YAML to *append* to an existing `.woodpecker.yml`, not replace it. The project should already have quality gate steps (lint, test, typecheck, build).

---

## Step 1: Project Scan

Run these scans and present the results to the user:

### 1a. Detect registry info from git remote

```bash
# Extract Gitea host and org/repo from remote
REMOTE_URL=$(git remote get-url origin 2>/dev/null)
# Parse: https://git.example.com/org/repo.git -> host=git.example.com, org=org, repo=repo
```
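One way to implement that parse, shown in Python for clarity (a sketch covering the https and ssh remote forms, not the skill's mandated method):

```python
import re

def parse_remote(url: str) -> tuple[str, str, str]:
    """Split a Gitea remote URL (https or ssh form) into (host, org, repo)."""
    m = re.match(
        r"(?:[a-z+]+://)?(?:[^/@]+@)?([^/:]+)[/:](.+?)/([^/]+?)(?:\.git)?/?$",
        url,
    )
    if not m:
        raise ValueError(f"unrecognized remote: {url}")
    host, org, repo = m.groups()
    return host, org, repo

print(parse_remote("https://git.example.com/org/repo.git"))  # ('git.example.com', 'org', 'repo')
```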

Present:

- **Registry host:** (extracted from remote)
- **Organization:** (extracted from remote)
- **Repository:** (extracted from remote)

### 1b. Find all Dockerfiles

```bash
find . -name "Dockerfile" -o -name "Dockerfile.*" | grep -v node_modules | grep -v .git | sort
```

For each Dockerfile found, note:

- Path relative to project root
- Whether it's a dev variant (`Dockerfile.dev`) or production
- The service name (inferred from parent directory)

### 1c. Detect existing pipeline

```bash
cat .woodpecker.yml 2>/dev/null || cat .woodpecker/*.yml 2>/dev/null
```

Check:

- Does a `build` step exist? (Docker builds will depend on it)
- Are there already Docker build steps? (avoid duplicating)
- What's the existing dependency chain?

### 1d. Find publishable npm packages (if applicable)

```bash
# Find package.json files without "private": true
find . -name "package.json" -not -path "*/node_modules/*" -exec grep -L '"private": true' {} \;
```
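The grep above is literal: it misses formatting variants such as `"private":true` without a space. When that matters, the same scan can parse the JSON instead; a Python sketch (an alternative, not the skill's required method):

```python
import json
from pathlib import Path

def publishable_packages(root: Path = Path(".")) -> list[Path]:
    """Return package.json paths whose manifest is not marked private."""
    hits = []
    for pkg in root.rglob("package.json"):
        if "node_modules" in pkg.parts:
            continue
        try:
            data = json.loads(pkg.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        if not data.get("private", False):
            hits.append(pkg)
    return sorted(hits)
```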

### 1e. Present scan results

Show the user a summary table:

```
=== CI/CD Scan Results ===
Registry:     git.example.com
Organization: org-name
Repository:   repo-name

Dockerfiles Found:
  1. src/backend-api/Dockerfile     → backend-api
  2. src/web-portal/Dockerfile      → web-portal
  3. src/ingest-api/Dockerfile      → ingest-api
  4. src/backend-api/Dockerfile.dev → (dev variant, skip)

Existing Pipeline: .woodpecker.yml
  - Has build step: yes (build-all)
  - Has Docker steps: no

Publishable npm Packages:
  - @scope/schemas (src/schemas)
  - @scope/design-system (src/design-system)
```

---

## Step 2: Clarifying Questions

Ask these questions with lettered options (the user can respond "1A, 2B, 3C"):

```
1. Which Dockerfiles should be built in CI?
   (Select all that apply — list found Dockerfiles with letters)
   A. src/backend-api/Dockerfile (backend-api)
   B. src/web-portal/Dockerfile (web-portal)
   C. src/ingest-api/Dockerfile (ingest-api)
   D. All of the above
   E. Other: [specify]

2. Image naming convention?
   A. {org}/{service} (e.g., usc/backend-api) — Recommended
   B. {org}/{repo}-{service} (e.g., usc/uconnect-backend-api)
   C. Custom: [specify]

3. Do any services need build arguments?
   A. No build args needed
   B. Yes: [specify service:KEY=VALUE, e.g., web-portal:NEXT_PUBLIC_API_URL=https://api.example.com]

4. Which branches should trigger Docker builds?
   A. main and develop (Recommended)
   B. main only
   C. Custom: [specify]

5. Should npm packages be published? (only if publishable packages found)
   A. Yes, to Gitea npm registry
   B. Yes, to custom registry: [specify URL]
   C. No, skip npm publishing
```

---

## Step 3: Generate Pipeline YAML

### 3a. Add kaniko_setup anchor

If the project's `.woodpecker.yml` doesn't already have a `kaniko_setup` anchor in its `variables:` section, add it:

```bash
~/.mosaic/rails/cicd/generate-docker-steps.sh --kaniko-setup-only --registry REGISTRY_HOST
```

This outputs:

```yaml
# Kaniko base command setup
- &kaniko_setup |
  mkdir -p /kaniko/.docker
  echo "{\"auths\":{\"REGISTRY\":{\"username\":\"$GITEA_USER\",\"password\":\"$GITEA_TOKEN\"}}}" > /kaniko/.docker/config.json
```

Add this to the existing `variables:` block at the top of `.woodpecker.yml`.

### 3b. Generate Docker build/push/link steps

Use the generator script with the user's answers:

```bash
~/.mosaic/rails/cicd/generate-docker-steps.sh \
  --registry REGISTRY \
  --org ORG \
  --repo REPO \
  --service "SERVICE_NAME:DOCKERFILE_PATH" \
  --service "SERVICE_NAME:DOCKERFILE_PATH" \
  --branches "main,develop" \
  --depends-on "BUILD_STEP_NAME" \
  [--build-arg "SERVICE:KEY=VALUE"] \
  [--npm-package "@scope/pkg:path" --npm-registry "URL"]
```

### 3c. Present generated YAML

Show the full YAML output to the user and ask for confirmation before appending to `.woodpecker.yml`.

### 3d. Append to pipeline

Append the generated YAML to the end of `.woodpecker.yml`. The kaniko_setup anchor goes in the `variables:` section.

---

## Step 4: Secrets Checklist

Present the required Woodpecker secrets and the commands to configure them:

```
=== Required Woodpecker Secrets ===

Configure these at: https://WOODPECKER_HOST/repos/ORG/REPO/settings/secrets

1. gitea_username
   Value: Your Gitea username or service account
   Events: push, manual, tag

2. gitea_token
   Value: Gitea token with package:write scope
   Generate at: https://REGISTRY_HOST/user/settings/applications
   Events: push, manual, tag

CLI commands:
  woodpecker secret add ORG/REPO --name gitea_username --value "USERNAME"
  woodpecker secret add ORG/REPO --name gitea_token --value "TOKEN"
```

If npm publishing was selected, also list:

```
3. npm_token (if using separate npm registry)
   Value: npm registry auth token
   Events: push, manual, tag
```

---

## Step 5: Verification Checklist

Present this checklist for the user to follow after implementation:

```
=== Verification Checklist ===

□ 1. Secrets configured in Woodpecker UI
   → gitea_username and gitea_token set
   → Token has package:write scope

□ 2. Update docker-compose.yml to use registry images
   → Change: build: ./src/backend-api
   → To:     image: REGISTRY/ORG/SERVICE:${IMAGE_TAG:-dev}

□ 3. Push to develop branch and verify pipeline
   → Quality gates pass
   → Docker build steps run (only on main/develop)
   → Images appear in Gitea Packages

□ 4. Verify package linking
   → Images linked to repository on Gitea Packages tab
   → If linking fails (404), link manually via Gitea web UI

□ 5. Test image pull
   → docker login REGISTRY
   → docker pull REGISTRY/ORG/SERVICE:dev

□ 6. Commit the pipeline changes
   → git add .woodpecker.yml
   → git commit -m "feat: Add Docker build/push/link to CI pipeline"
```

---

## Example Session

```
User: /setup-cicd

Agent: Let me scan this project...

=== CI/CD Scan Results ===
Registry:     git.uscllc.com
Organization: usc
Repository:   uconnect

Dockerfiles Found:
  1. src/backend-api/Dockerfile         → backend-api
  2. src/web-portal/Dockerfile          → web-portal
  3. src/ingest-api/Dockerfile          → ingest-api
  4. src/heartbeat-collector/Dockerfile → heartbeat-collector
  5. src/field-agent/Dockerfile         → field-agent
  6. src/egress-api/Dockerfile          → egress-api

Existing Pipeline: .woodpecker.yml
  - Has build step: yes (build-all)
  - Has Docker steps: no

I have a few questions:

1. Which Dockerfiles should be built in CI?
   A. backend-api
   B. web-portal
   C. ingest-api
   ...
   G. All of the above

User: 1G, 2A, 3A, 4A

Agent: Generating pipeline YAML...
[shows YAML]
Shall I append this to .woodpecker.yml?

User: Yes

Agent: Done. Here are the secrets you need to configure:
[shows secrets checklist]
[shows verification checklist]
```

---

## Notes

- The generator script handles `$$` escaping for Woodpecker shell variables automatically
- Package linking requires Gitea 1.24.0+ (the API endpoint was added in that version)
- If the project has no existing `.woodpecker.yml`, suggest running `init-project.sh` first to set up quality gates
- For the kaniko_setup anchor, the registry hostname must not include `https://` — just the bare hostname
- Build context defaults to `.` (project root) for Dockerfiles under `apps/`, `src/`, or `packages/`. For other locations (like `docker/postgres/`), the context is the Dockerfile's parent directory.