fix: rename all packages from @mosaic/* to @mosaicstack/*

- Updated all package.json name fields and dependency references
- Updated all TypeScript/JavaScript imports
- Updated .woodpecker/publish.yml filters and registry paths
- Updated tools/install.sh scope default
- Updated .npmrc registry paths (worktree + host)
- Enhanced update-checker.ts with checkForAllUpdates() multi-package support
- Updated CLI update command to show table of all packages
- Added KNOWN_PACKAGES, formatAllPackagesTable, getInstallAllCommand
- Marked checkForUpdate() with @deprecated JSDoc
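The multi-package support described above might look roughly like the sketch below. `KNOWN_PACKAGES`, `formatAllPackagesTable`, and `getInstallAllCommand` are named in this commit, but the signatures, package list, and table format here are assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical shape of the new update-checker API (names from this
// commit; signatures and formatting are illustrative assumptions).
interface PackageUpdate {
  name: string;
  current: string;
  latest: string;
  hasUpdate: boolean;
}

// Illustrative package list — the real KNOWN_PACKAGES may differ
const KNOWN_PACKAGES = [
  "@mosaicstack/cli",
  "@mosaicstack/db",
  "@mosaicstack/queue",
] as const;

function formatAllPackagesTable(updates: PackageUpdate[]): string {
  // One row per package; stale packages flagged with "*"
  return updates
    .map((u) => `${u.name}  ${u.current} -> ${u.latest}${u.hasUpdate ? " *" : ""}`)
    .join("\n");
}

function getInstallAllCommand(updates: PackageUpdate[]): string {
  // Single install command covering every package with an update
  const stale = updates.filter((u) => u.hasUpdate).map((u) => `${u.name}@latest`);
  return stale.length ? `npm install -g ${stale.join(" ")}` : "";
}
```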

Closes #391
commit 774b76447d
parent 80994bdc8e
Author: Jarvis
Date: 2026-04-04 21:43:23 -05:00
200 changed files with 828 additions and 641 deletions


@@ -26,7 +26,7 @@ This plan establishes the foundational architecture for these systems.
4. **Hot reload** — soft-restart the gateway to load new plugins/skills/commands without dropping connections
5. **Local primitives** — baseline commands that work even when disconnected from the gateway
6. **Workspaces** — structured, git-backed, per-user/per-project filesystem layout with chroot isolation
-7. **Task orchestration** — unified `@mosaic/queue` layer bridging PG, workspace files, and Valkey for agent task assignment
+7. **Task orchestration** — unified `@mosaicstack/queue` layer bridging PG, workspace files, and Valkey for agent task assignment
8. **Session garbage collection** — three-tier GC (session, sweep, full cold-start) across Valkey, PG, and filesystem
---
@@ -220,7 +220,7 @@ Without `onUnload`, hot-reload is impossible — would leak listeners, duplicate
---
-## Type Contracts (`@mosaic/types`)
+## Type Contracts (`@mosaicstack/types`)
### CommandDef — Gateway Command Manifest Entry
@@ -407,7 +407,7 @@ The condensation step is a lightweight LLM call (cheap model, small context) tha
### Using Existing Queue Package
-The `@mosaic/queue` package already provides `createQueue()` returning an ioredis handle on `redis://localhost:6380`. The `/system` storage will use the same Valkey instance directly via the redis handle — no queue semantics needed, just `SET`/`GET`/`DEL`/`EXPIRE`.
+The `@mosaicstack/queue` package already provides `createQueue()` returning an ioredis handle on `redis://localhost:6380`. The `/system` storage will use the same Valkey instance directly via the redis handle — no queue semantics needed, just `SET`/`GET`/`DEL`/`EXPIRE`.
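The direct key-value usage this paragraph describes can be sketched as follows. Only "createQueue() returns an ioredis handle" is stated by the plan; the `RedisLike` shape, key prefix, and TTL below are assumptions for illustration:

```typescript
// Sketch of session-scoped /system storage on the shared Valkey handle.
// The handle is assumed to be ioredis-compatible; key layout and TTL
// are hypothetical, not taken from the plan.
type RedisLike = {
  set(key: string, value: string): Promise<unknown>;
  get(key: string): Promise<string | null>;
  del(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<number>;
};

const SYSTEM_TTL_SECONDS = 60 * 60; // hypothetical session lifetime

async function setSystemOverride(redis: RedisLike, sessionId: string, value: string): Promise<string> {
  const key = `mosaic:system:${sessionId}`; // illustrative key scheme
  await redis.set(key, value);
  await redis.expire(key, SYSTEM_TTL_SECONDS); // plain KV — no queue semantics
  return key;
}
```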
---
@@ -415,7 +415,7 @@ The `@mosaic/queue` package already provides `createQueue()` returning an ioredi
### Storage
-Postgres via `@mosaic/db`. The `preferences` table already exists in `packages/db/src/schema.ts` with the right shape:
+Postgres via `@mosaicstack/db`. The `preferences` table already exists in `packages/db/src/schema.ts` with the right shape:
```typescript
// Existing schema — already has category + key + value JSONB
@@ -620,7 +620,7 @@ Aliases are resolved in `findCommand()` before manifest lookup.
### Phase 1: Types + Local Command Parsing (no gateway changes)
-1. Add `CommandDef`, `CommandManifest`, new socket events to `@mosaic/types`
+1. Add `CommandDef`, `CommandManifest`, new socket events to `@mosaicstack/types`
2. Add `parseSlashCommand()` utility to `packages/cli`
3. Add `role: 'system'` to `Message` type, render system messages in `MessageList`
4. Implement local-only commands: `/help`, `/stop`, `/cost`, `/status` (local state only)
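The `parseSlashCommand()` utility in step 2 is only named in the plan; a minimal sketch of what it might do follows, with signature and return shape as assumptions:

```typescript
// Hypothetical parseSlashCommand() for packages/cli — the real signature
// is not shown in this diff; this is an illustrative sketch.
interface ParsedCommand {
  name: string;     // command name, lowercased, without leading "/"
  args: string[];   // whitespace-separated arguments
  raw: string;      // original trimmed input
}

function parseSlashCommand(input: string): ParsedCommand | null {
  const trimmed = input.trim();
  if (!trimmed.startsWith("/")) return null; // plain chat text, not a command
  const [name, ...args] = trimmed.slice(1).split(/\s+/);
  if (!name) return null; // bare "/" is not a command
  return { name: name.toLowerCase(), args, raw: trimmed };
}
```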
@@ -639,7 +639,7 @@ Aliases are resolved in `findCommand()` before manifest lookup.
### Phase 3: Preferences & System Overrides
-1. Create `user_preferences` table in `@mosaic/db`, Drizzle schema + migration
+1. Create `user_preferences` table in `@mosaicstack/db`, Drizzle schema + migration
2. Create `PreferencesService` in gateway — CRUD + defaults + enforcement logic
3. Implement `/preferences` command (REST-executed)
4. Implement `/system` command — Valkey storage, session-scoped
@@ -1159,7 +1159,7 @@ Additional tool sets needed for workspace workflows:
- **Docker/Portainer tools** — container management, deployment
- These are registered as additional `ToolDefinition[]` sets, same pattern as existing tools
-`@mosaic/prdy` already provides the PRD wizard tooling — the workspace structure gives it a canonical output location (`docs/PRD-<name>.md`).
+`@mosaicstack/prdy` already provides the PRD wizard tooling — the workspace structure gives it a canonical output location (`docs/PRD-<name>.md`).
### Task Queue & Orchestration
@@ -1167,19 +1167,19 @@ Additional tool sets needed for workspace workflows:
There are currently two parallel systems for task management:
-1. **`@mosaic/coord`** (file-based) — missions stored as `mission.json`, tasks in `TASKS.md`, file locks, session tracking, subprocess spawning. Built for single-machine orchestrator pattern.
+1. **`@mosaicstack/coord`** (file-based) — missions stored as `mission.json`, tasks in `TASKS.md`, file locks, session tracking, subprocess spawning. Built for single-machine orchestrator pattern.
2. **PG tables** (`tasks`, `mission_tasks`, `missions`) — DB-backed CRUD with status, priority, assignee, project/mission FKs. Exposed via REST API and Brain repos.
-These are not connected. `@mosaic/coord` reads/writes files. The DB tables are managed via MissionsController. An agent using `coord_mission_status` gets file-based data; the dashboard shows DB data.
+These are not connected. `@mosaicstack/coord` reads/writes files. The DB tables are managed via MissionsController. An agent using `coord_mission_status` gets file-based data; the dashboard shows DB data.
-#### Vision: `@mosaic/queue` as the Unified Task Layer
+#### Vision: `@mosaicstack/queue` as the Unified Task Layer
-`@mosaic/queue` becomes the task orchestration service — not just a Valkey queue primitive, but the coordinator between agents, DB, and workspace files:
+`@mosaicstack/queue` becomes the task orchestration service — not just a Valkey queue primitive, but the coordinator between agents, DB, and workspace files:
```
┌──────────────────────────────────────────────┐
│ @mosaic/queue │
│ @mosaicstack/queue │
│ (Task Orchestration Service) │
│ │
│ ┌─────────────────┐ ┌──────────────────┐ │
@@ -1217,21 +1217,21 @@ These are not connected. `@mosaic/coord` reads/writes files. The DB tables are m
6. Agent completes → updates status via queue service → PG updated + file synced + lock released
7. Gateway/orchestrator monitors progress, assigns next based on dependencies
-**Flatfile fallback:** If no PG configured, queue service writes to flatfiles in workspace (JSON task manifests). Preserves the `@mosaic/coord` file-based pattern for single-machine, no-DB deployments.
+**Flatfile fallback:** If no PG configured, queue service writes to flatfiles in workspace (JSON task manifests). Preserves the `@mosaicstack/coord` file-based pattern for single-machine, no-DB deployments.
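A minimal sketch of what such a JSON task manifest could look like — the field names and versioned envelope below are assumptions for illustration, not part of the plan:

```typescript
// Hypothetical flatfile fallback: the queue service persists a JSON task
// manifest in the workspace when no PG is configured. Shape is illustrative.
interface FlatfileTask {
  id: string;
  title: string;
  status: "pending" | "in_progress" | "done";
  assignee: string | null;
}

function serializeTaskManifest(tasks: FlatfileTask[]): string {
  // Versioned envelope so later schema changes can migrate old files
  return JSON.stringify(
    { version: 1, updatedAt: new Date().toISOString(), tasks },
    null,
    2,
  );
}

function parseTaskManifest(json: string): FlatfileTask[] {
  const doc = JSON.parse(json);
  if (doc.version !== 1) throw new Error(`unsupported manifest version: ${doc.version}`);
  return doc.tasks;
}
```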
**What this replaces:**
-- `@mosaic/coord`'s file-only task tracking → unified DB+file via queue service
+- `@mosaicstack/coord`'s file-only task tracking → unified DB+file via queue service
- Direct PG CRUD for task status → routed through queue service for consistency
- Manual task assignment → queue-based distribution with agent claiming
**What this preserves:**
- `TASKS.md` file format — still the agent-readable working copy
-- Mission structure from `@mosaic/coord` — creation, milestones, sessions
-- `@mosaic/prdy` PRD workflow — writes to `docs/`, syncs metadata to DB
+- Mission structure from `@mosaicstack/coord` — creation, milestones, sessions
+- `@mosaicstack/prdy` PRD workflow — writes to `docs/`, syncs metadata to DB
-> **Note:** This is a significant refactor of `@mosaic/coord` + `@mosaic/queue`. Warrants its own dedicated plan alongside the Gatekeeper plan.
+> **Note:** This is a significant refactor of `@mosaicstack/coord` + `@mosaicstack/queue`. Warrants its own dedicated plan alongside the Gatekeeper plan.
### Chroot Agent Sandboxing
@@ -1271,7 +1271,7 @@ The following topics are significant enough to warrant their own dedicated plan
| Plan | Stub File | Scope |
| -------------------------- | -------------------------------------- | -------------------------------------------------------------------------------------------------- |
| **Gatekeeper Service** | `docs/plans/gatekeeper-service.md` | PR review/merge agent, quality gates, CI integration, trust boundary design |
-| **Task Queue Unification** | `docs/plans/task-queue-unification.md` | `@mosaic/queue` refactor, `@mosaic/coord` consolidation, DB+file sync, flatfile fallback |
+| **Task Queue Unification** | `docs/plans/task-queue-unification.md` | `@mosaicstack/queue` refactor, `@mosaicstack/coord` consolidation, DB+file sync, flatfile fallback |
| **Chroot Sandboxing** | `docs/plans/chroot-sandboxing.md` | Chroot environment provisioning, capability management, Docker integration, namespace alternatives |
---
@@ -1447,7 +1447,7 @@ TUI requests fresh commands:manifest (reflects new provider availability)
### Implementation Notes
- Gateway stores poll state in Valkey: `mosaic:auth:poll:<pollToken>` with 5-min TTL
-- `clipboardy` used for clipboard write in TUI (add as dep to `@mosaic/cli` if not already present)
+- `clipboardy` used for clipboard write in TUI (add as dep to `@mosaicstack/cli` if not already present)
- On success, gateway emits a fresh `commands:manifest` via socket (reflects provider now connected)
---
@@ -1463,8 +1463,8 @@ mutable: boolean('mutable').notNull().default(true),
Generate and apply:
```bash
-pnpm --filter @mosaic/db db:generate # generates migration SQL
-pnpm --filter @mosaic/db db:migrate # applies to PG
+pnpm --filter @mosaicstack/db db:generate # generates migration SQL
+pnpm --filter @mosaicstack/db db:migrate # applies to PG
```
Platform enforcement keys (seeded with `mutable = false` by gateway `PreferencesService.onModuleInit()`):