Merge pull request 'fix(#411): complete auth/frontend remediation and review hardening' (#421) from fix/auth-frontend-remediation into develop
All checks were successful
ci/woodpecker/push/infra Pipeline was successful
ci/woodpecker/push/coordinator Pipeline was successful
ci/woodpecker/push/orchestrator Pipeline was successful
ci/woodpecker/push/web Pipeline was successful
ci/woodpecker/push/api Pipeline was successful

Reviewed-on: #421
This commit was merged in pull request #421.
2026-02-17 21:24:13 +00:00
45 changed files with 949 additions and 1229 deletions

.mosaic/README.md (new file, +15)

@@ -0,0 +1,15 @@
# Repo Mosaic Linkage
This repository is attached to the machine-wide Mosaic framework.
## Load Order for Agents
1. `~/.mosaic/STANDARDS.md`
2. `AGENTS.md` (this repository)
3. `.mosaic/repo-hooks.sh` (repo-specific automation hooks)
## Purpose
- Keep universal standards in `~/.mosaic`
- Keep repo-specific behavior in this repo
- Avoid copying large runtime configs into each project

.mosaic/repo-hooks.sh (new executable file, +29)

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Repo-specific hooks used by scripts/agent/*.sh for Mosaic Stack.
mosaic_hook_session_start() {
echo "[mosaic-stack] Branch: $(git rev-parse --abbrev-ref HEAD)"
echo "[mosaic-stack] Remotes:"
git remote -v | sed 's/^/[mosaic-stack] /'
if command -v node >/dev/null 2>&1; then
echo "[mosaic-stack] Node: $(node -v)"
fi
if command -v pnpm >/dev/null 2>&1; then
echo "[mosaic-stack] pnpm: $(pnpm -v)"
fi
}
mosaic_hook_critical() {
echo "[mosaic-stack] Recent commits:"
git log --oneline --decorate -n 5 | sed 's/^/[mosaic-stack] /'
echo "[mosaic-stack] Open TODO/FIXME markers (top 20):"
rg -n "(TODO|FIXME|HACK|SECURITY)" apps packages plugins docs --glob '!**/node_modules/**' -S \
| head -n 20 \
| sed 's/^/[mosaic-stack] /' \
|| true
}
mosaic_hook_session_end() {
echo "[mosaic-stack] Working tree summary:"
git status --short | sed 's/^/[mosaic-stack] /' || true
}

.nvmrc (new file, +1)

@@ -0,0 +1 @@
24


@@ -12,7 +12,7 @@ when:
event: pull_request
variables:
- &node_image "node:22-slim"
- &node_image "node:24-slim"
- &install_codex "npm i -g @openai/codex"
steps:


@@ -1,37 +1,65 @@
 # Mosaic Stack — Agent Guidelines
-> **Any AI model, coding assistant, or framework working in this codebase MUST read and follow `CLAUDE.md` in the project root.**
-`CLAUDE.md` is the authoritative source for:
-- Technology stack and versions
-- TypeScript strict mode requirements
-- ESLint Quality Rails (error-level enforcement)
-- Prettier formatting rules
-- Testing requirements (85% coverage, TDD)
-- API conventions and database patterns
-- Commit format and branch strategy
-- PDA-friendly design principles
-## Quick Rules (Read CLAUDE.md for Details)
-- **No `any` types** — use `unknown`, generics, or proper types
-- **Explicit return types** on all functions
-- **Type-only imports** — `import type { Foo }` for types
-- **Double quotes**, semicolons, 2-space indent, 100 char width
-- **`??` not `||`** for defaults, **`?.`** not `&&` chains
-- **All promises** must be awaited or returned
-- **85% test coverage** minimum, tests before implementation
-## Updating Conventions
-If you discover new patterns, gotchas, or conventions while working in this codebase, **update `CLAUDE.md`** — not this file. This file exists solely to redirect agents that look for `AGENTS.md` to the canonical source.
-## Per-App Context
-Each app directory has its own `AGENTS.md` for app-specific patterns:
-- `apps/api/AGENTS.md`
-- `apps/web/AGENTS.md`
-- `apps/coordinator/AGENTS.md`
-- `apps/orchestrator/AGENTS.md`
+## Load Order
+1. `SOUL.md` (repo identity + behavior invariants)
+2. `~/.config/mosaic/STANDARDS.md` (machine-wide standards rails)
+3. `AGENTS.md` (repo-specific overlay)
+4. `.mosaic/repo-hooks.sh` (repo lifecycle hooks)
+## Runtime Contract
+- This file is authoritative for repo-local operations.
+- `CLAUDE.md` is a compatibility pointer to `AGENTS.md`.
+- Follow universal rails from `~/.config/mosaic/guides/` and `~/.config/mosaic/rails/`.
+## Session Lifecycle
+```bash
+bash scripts/agent/session-start.sh
+bash scripts/agent/critical.sh
+bash scripts/agent/session-end.sh
+```
+Optional:
+```bash
+bash scripts/agent/log-limitation.sh "Short Name"
+```
+## Repo Context
+- Platform: multi-tenant personal assistant stack
+- Monorepo: `pnpm` workspaces + Turborepo
+- Core apps: `apps/api` (NestJS), `apps/web` (Next.js), orchestrator/coordinator services
+- Infrastructure: Docker Compose + PostgreSQL + Valkey + Authentik
+## Quick Command Set
+```bash
+pnpm install
+pnpm dev
+pnpm test
+pnpm lint
+pnpm build
+```
+## Standards and Quality
+- Enforce strict typing and no unsafe shortcuts.
+- Keep lint/typecheck/tests green before completion.
+- Prefer small, focused commits and clear change descriptions.
+## App-Specific Overlays
+- `apps/api/AGENTS.md`
+- `apps/web/AGENTS.md`
+- `apps/coordinator/AGENTS.md`
+- `apps/orchestrator/AGENTS.md`
+## Additional Guidance
+- Orchestrator guidance: `docs/claude/orchestrator.md`
+- Security remediation context: `docs/reports/codebase-review-2026-02-05/01-security-review.md`
+- Code quality context: `docs/reports/codebase-review-2026-02-05/02-code-quality-review.md`
+- QA context: `docs/reports/codebase-review-2026-02-05/03-qa-test-coverage.md`
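One of the repo's quick rules (`??` over `||`, `?.` over `&&` chains) is easy to illustrate: `||` treats every falsy value (`0`, `""`, `false`) as missing, while `??` only falls back on `null`/`undefined`. A small sketch; the `pageSize` and `themeOf` examples are illustrative, not code from this repository:

```typescript
// `||` falls back on ANY falsy value; `??` only on null/undefined.
function pageSizeWithOr(requested: number | undefined): number {
  return requested || 50; // BUG: a requested size of 0 becomes 50
}

function pageSizeWithNullish(requested: number | undefined): number {
  return requested ?? 50; // 0 is preserved; only undefined falls back
}

// Optional chaining replaces `&&` guard chains:
interface Profile {
  settings?: { theme?: string };
}

function themeOf(p: Profile): string {
  return p.settings?.theme ?? "light";
}
```

This is why the rail exists: with `||`, a legitimate zero or empty string silently turns into the default.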

CLAUDE.md (481 lines changed)

@@ -1,477 +1,14 @@
+# Compatibility Pointer
+This repository uses an agent-neutral Mosaic standards model.
+Authoritative repo guidance is in `AGENTS.md`.
+Load order for Claude sessions:
+1. `SOUL.md`
+2. `~/.mosaic/STANDARDS.md`
+3. `AGENTS.md`
+4. `.mosaic/repo-hooks.sh`
**Multi-tenant personal assistant platform with PostgreSQL backend, Authentik SSO, and MoltBot integration.**
## Conditional Documentation Loading
| When working on... | Load this guide |
| ---------------------------------------- | ------------------------------------------------------------------- |
| Orchestrating autonomous task completion | `docs/claude/orchestrator.md` |
| Security remediation (review findings) | `docs/reports/codebase-review-2026-02-05/01-security-review.md` |
| Code quality fixes | `docs/reports/codebase-review-2026-02-05/02-code-quality-review.md` |
| Test coverage gaps | `docs/reports/codebase-review-2026-02-05/03-qa-test-coverage.md` |
## Platform Templates
Bootstrap templates are at `docs/templates/`. See `docs/templates/README.md` for usage.
## Project Overview
Mosaic Stack is a standalone platform that provides:
- Multi-user workspaces with team sharing
- Task, event, and project management
- Gantt charts and Kanban boards
- MoltBot integration via plugins (stock MoltBot + mosaic-plugin-\*)
- PDA-friendly design throughout
**Repository:** git.mosaicstack.dev/mosaic/stack
**Versioning:** Start at 0.0.1, MVP = 0.1.0
## Technology Stack
| Layer | Technology |
| ---------- | -------------------------------------------- |
| Frontend | Next.js 16 + React + TailwindCSS + Shadcn/ui |
| Backend | NestJS + Prisma ORM |
| Database | PostgreSQL 17 + pgvector |
| Cache | Valkey (Redis-compatible) |
| Auth | Authentik (OIDC) |
| AI | Ollama (configurable: local or remote) |
| Messaging | MoltBot (stock + Mosaic plugins) |
| Real-time | WebSockets (Socket.io) |
| Monorepo | pnpm workspaces + TurboRepo |
| Testing | Vitest + Playwright |
| Deployment | Docker + docker-compose |
## Repository Structure
```
mosaic-stack/
├── apps/
│   ├── api/                        # mosaic-api (NestJS)
│   │   ├── src/
│   │   │   ├── auth/               # Authentik OIDC
│   │   │   ├── tasks/              # Task management
│   │   │   ├── events/             # Calendar/events
│   │   │   ├── projects/           # Project management
│   │   │   ├── brain/              # MoltBot integration
│   │   │   └── activity/           # Activity logging
│   │   ├── prisma/
│   │   │   └── schema.prisma
│   │   └── Dockerfile
│   └── web/                        # mosaic-web (Next.js 16)
│       ├── app/
│       ├── components/
│       └── Dockerfile
├── packages/
│   ├── shared/                     # Shared types, utilities
│   ├── ui/                         # Shared UI components
│   └── config/                     # Shared configuration
├── plugins/
│   ├── mosaic-plugin-brain/        # MoltBot skill: API queries
│   ├── mosaic-plugin-calendar/     # MoltBot skill: Calendar
│   ├── mosaic-plugin-tasks/        # MoltBot skill: Tasks
│   └── mosaic-plugin-gantt/        # MoltBot skill: Gantt
├── docker/
│   ├── docker-compose.yml          # Turnkey deployment
│   └── init-scripts/               # PostgreSQL init
├── docs/
│   ├── SETUP.md
│   ├── CONFIGURATION.md
│   └── DESIGN-PRINCIPLES.md
├── .env.example
├── turbo.json
├── pnpm-workspace.yaml
└── README.md
```
## Development Workflow
### Branch Strategy
- `main` — stable releases only
- `develop` — active development (default working branch)
- `feature/*` — feature branches from develop
- `fix/*` — bug fix branches
### Starting Work
```bash
git checkout develop
git pull --rebase
pnpm install
```
### Running Locally
```bash
# Start all services (Docker)
docker compose up -d
# Or run individually for development
pnpm dev        # All apps
pnpm dev:api    # API only
pnpm dev:web    # Web only
```
### Testing
```bash
pnpm test       # Run all tests
pnpm test:api   # API tests only
pnpm test:web   # Web tests only
pnpm test:e2e   # Playwright E2E
```
### Building
```bash
pnpm build      # Build all
pnpm build:api  # Build API
pnpm build:web  # Build Web
```
## Design Principles (NON-NEGOTIABLE)
### PDA-Friendly Language
**NEVER use demanding language. This is critical.**
| ❌ NEVER | ✅ ALWAYS |
| ----------- | -------------------- |
| OVERDUE | Target passed |
| URGENT | Approaching target |
| MUST DO | Scheduled for |
| CRITICAL | High priority |
| YOU NEED TO | Consider / Option to |
| REQUIRED | Recommended |
### Visual Indicators
Use status indicators consistently:
- 🟢 On track / Active
- 🔵 Upcoming / Scheduled
- ⏸️ Paused / On hold
- 💤 Dormant / Inactive
- ⚪ Not started
### Display Principles
1. **10-second scannability** — Key info visible immediately
2. **Visual chunking** — Clear sections with headers
3. **Single-line items** — Compact, scannable lists
4. **Date grouping** — Today, Tomorrow, This Week headers
5. **Progressive disclosure** — Details on click, not upfront
6. **Calm colors** — No aggressive reds for status
### Reference
See `docs/DESIGN-PRINCIPLES.md` for complete guidelines.
For original patterns, see: `jarvis-brain/docs/DESIGN-PRINCIPLES.md`
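The language mapping above is mechanical enough to enforce in code. A hypothetical helper sketch; `softenLabel` and its table are illustrative, not part of the codebase:

```typescript
// Hypothetical mapping of demanding labels to PDA-friendly phrasing,
// taken directly from the table above.
const PDA_FRIENDLY: Record<string, string> = {
  OVERDUE: "Target passed",
  URGENT: "Approaching target",
  "MUST DO": "Scheduled for",
  CRITICAL: "High priority",
  "YOU NEED TO": "Consider / Option to",
  REQUIRED: "Recommended",
};

function softenLabel(label: string): string {
  // Fall back to the original label when no calmer phrasing is defined.
  return PDA_FRIENDLY[label.toUpperCase()] ?? label;
}
```

A helper like this would let UI components and tests assert that no demanding phrasing reaches the user.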
## API Conventions
### Endpoints
```
GET    /api/{resource}       # List (with pagination, filters)
GET    /api/{resource}/:id   # Get single
POST   /api/{resource}       # Create
PATCH  /api/{resource}/:id   # Update
DELETE /api/{resource}/:id   # Delete
```
### Response Format
```typescript
// Success
{
  data: T | T[],
  meta?: { total, page, limit }
}
// Error
{
  error: {
    code: string,
    message: string,
    details?: any
  }
}
```
### Brain Query API
```
POST /api/brain/query
{
  query: "what's on my calendar",
  context?: { view: "dashboard", workspace_id: "..." }
}
```
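The response envelope can be captured as shared types so handlers and clients agree on the shape. A sketch of what such types might look like; the `isErrorResponse` guard is an assumption, not an existing export (note `details` is typed `unknown` here, since the repo's rails forbid `any`):

```typescript
// Success and error envelope shapes, following the API conventions above.
interface SuccessResponse<T> {
  data: T | T[];
  meta?: { total: number; page: number; limit: number };
}

interface ErrorResponse {
  error: { code: string; message: string; details?: unknown };
}

type ApiResponse<T> = SuccessResponse<T> | ErrorResponse;

// Narrowing guard so callers can branch without casts.
function isErrorResponse<T>(res: ApiResponse<T>): res is ErrorResponse {
  return "error" in res;
}
```

With the guard, a caller can write `if (isErrorResponse(res)) { … } else { res.data }` and keep full type safety on both branches.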
## Database Conventions
### Multi-Tenant (RLS)
All workspace-scoped tables use Row-Level Security:
- Always include `workspace_id` in queries
- RLS policies enforce isolation
- Set session context for current user
### Prisma Commands
```bash
pnpm prisma:generate  # Generate client
pnpm prisma:migrate   # Run migrations
pnpm prisma:studio    # Open Prisma Studio
pnpm prisma:seed      # Seed development data
```
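The "set session context" step is typically a wrapper that writes the RLS session variables before running workspace-scoped work. The following is an illustrative sketch, not the repository's actual `withWorkspaceContext` implementation; the variable names (`app.current_user_id`, `app.current_workspace_id`) and the `SqlClient` interface are assumptions:

```typescript
// Minimal client surface for the sketch; a Prisma transaction client
// would satisfy this via $executeRaw in a real codebase.
interface SqlClient {
  execute(sql: string, ...params: string[]): Promise<void>;
}

// Set the RLS session variables, then run the workspace-scoped callback.
async function withWorkspaceContext<T>(
  client: SqlClient,
  userId: string,
  workspaceId: string,
  fn: (client: SqlClient) => Promise<T>
): Promise<T> {
  // `set_config(..., true)` scopes the setting to the current transaction.
  await client.execute("SELECT set_config('app.current_user_id', $1, true)", userId);
  await client.execute("SELECT set_config('app.current_workspace_id', $1, true)", workspaceId);
  return fn(client);
}
```

RLS policies can then reference `current_setting('app.current_workspace_id')` to filter rows, so forgetting the wrapper fails closed rather than leaking tenant data.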
## MoltBot Plugin Development
Plugins live in `plugins/mosaic-plugin-*/` and follow the MoltBot skill format:
```markdown
# plugins/mosaic-plugin-brain/SKILL.md
---
name: mosaic-plugin-brain
description: Query Mosaic Stack for tasks, events, projects
version: 0.0.1
triggers:
  - "what's on my calendar"
  - "show my tasks"
  - "morning briefing"
tools:
  - mosaic_api
---
# Plugin instructions here...
```
**Key principle:** MoltBot remains stock. All customization via plugins only.
## Environment Variables
See `.env.example` for all variables. Key ones:
```bash
# Database
DATABASE_URL=postgresql://mosaic:password@localhost:5432/mosaic
# Auth
AUTHENTIK_URL=https://auth.example.com
AUTHENTIK_CLIENT_ID=mosaic-stack
AUTHENTIK_CLIENT_SECRET=...
# Ollama
OLLAMA_MODE=local|remote
OLLAMA_ENDPOINT=http://localhost:11434
# MoltBot
MOSAIC_API_TOKEN=...
```
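A common companion to an `.env.example` is a fail-fast check at boot. A minimal sketch, assuming a hypothetical `assertRequiredEnv` helper (the variable list and error wording are illustrative):

```typescript
// Fail fast when required configuration is missing, rather than
// surfacing a confusing error deep inside a request handler.
function assertRequiredEnv(
  env: Record<string, string | undefined>,
  required: string[]
): void {
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```

At startup this might be called as `assertRequiredEnv(process.env, ["DATABASE_URL", "AUTHENTIK_URL", "AUTHENTIK_CLIENT_ID"])`.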
## Issue Tracking
Issues are tracked at: https://git.mosaicstack.dev/mosaic/stack/issues
### Labels
- Priority: `p0` (critical), `p1` (high), `p2` (medium), `p3` (low)
- Type: `api`, `web`, `database`, `auth`, `plugin`, `ai`, `devops`, `docs`, `migration`, `security`, `testing`, `performance`, `setup`
### Milestones
- M1-Foundation (0.0.x)
- M2-MultiTenant (0.0.x)
- M3-Features (0.0.x)
- M4-MoltBot (0.0.x)
- M5-Migration (0.1.0 MVP)
## Commit Format
```
<type>(#issue): Brief description

Detailed explanation if needed.

Fixes #123
```
Types: `feat`, `fix`, `docs`, `test`, `refactor`, `chore`
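The subject-line format is regular enough to lint mechanically, for example in a commit-msg hook. A hypothetical checker sketch, not an existing repo script:

```typescript
// Matches "<type>(#issue): Brief description" with the allowed types above.
const COMMIT_TYPES = ["feat", "fix", "docs", "test", "refactor", "chore"] as const;
const COMMIT_RE = new RegExp(`^(${COMMIT_TYPES.join("|")})\\(#\\d+\\): .+`);

function isValidCommitSubject(subject: string): boolean {
  return COMMIT_RE.test(subject);
}
```

A commit-msg hook could reject any subject for which `isValidCommitSubject` returns `false`.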
## Test-Driven Development (TDD) - REQUIRED
**All code must follow TDD principles. This is non-negotiable.**
### TDD Workflow (Red-Green-Refactor)
1. **RED** — Write a failing test first
- Write the test for new functionality BEFORE writing any implementation code
- Run the test to verify it fails (proves the test works)
- Commit message: `test(#issue): add test for [feature]`
2. **GREEN** — Write minimal code to make the test pass
- Implement only enough code to pass the test
- Run tests to verify they pass
- Commit message: `feat(#issue): implement [feature]`
3. **REFACTOR** — Clean up the code while keeping tests green
- Improve code quality, remove duplication, enhance readability
- Ensure all tests still pass after refactoring
- Commit message: `refactor(#issue): improve [component]`
### Testing Requirements
- **Minimum 85% code coverage** for all new code
- **Write tests BEFORE implementation** — no exceptions
- Test files must be co-located with source files:
- `feature.service.ts` → `feature.service.spec.ts`
- `component.tsx` → `component.test.tsx`
- All tests must pass before creating a PR
- Use descriptive test names: `it("should return user when valid token provided")`
- Group related tests with `describe()` blocks
- Mock external dependencies (database, APIs, file system)
### Test Types
- **Unit Tests** — Test individual functions/methods in isolation
- **Integration Tests** — Test module interactions (e.g., service + database)
- **E2E Tests** — Test complete user workflows with Playwright
### Running Tests
```bash
pnpm test # Run all tests
pnpm test:watch # Watch mode for active development
pnpm test:coverage # Generate coverage report
pnpm test:api # API tests only
pnpm test:web # Web tests only
pnpm test:e2e # Playwright E2E tests
````
### Coverage Verification
After implementing a feature, verify coverage meets requirements:
```bash
pnpm test:coverage
# Check the coverage report in coverage/index.html
# Ensure your files show ≥85% coverage
```
### TDD Anti-Patterns to Avoid
❌ Writing implementation code before tests
❌ Writing tests after implementation is complete
❌ Skipping tests for "simple" code
❌ Testing implementation details instead of behavior
❌ Writing tests that don't fail when they should
❌ Committing code with failing tests
## Quality Rails - Mechanical Code Quality Enforcement
**Status:** ACTIVE (2026-01-30) - Strict enforcement enabled ✅
Quality Rails provides mechanical enforcement of code quality standards through pre-commit hooks
and CI/CD pipelines. See `docs/quality-rails-status.md` for full details.
What's Enforced (NOW ACTIVE):
- ✅ **Type Safety** - Blocks explicit `any` types (@typescript-eslint/no-explicit-any: error)
- ✅ **Return Types** - Requires explicit return types on exported functions
- ✅ **Security** - Detects SQL injection, XSS, unsafe regex (eslint-plugin-security)
- ✅ **Promise Safety** - Blocks floating promises and misused promises
- ✅ **Code Formatting** - Auto-formats with Prettier on commit
- ✅ **Build Verification** - Type-checks before allowing commit
- ✅ **Secret Scanning** - Blocks hardcoded passwords/API keys (git-secrets)
Current Status:
- ✅ **Pre-commit hooks**: ACTIVE - Blocks commits with violations
- ✅ **Strict enforcement**: ENABLED - Package-level enforcement
- 🟡 **CI/CD pipeline**: Ready (.woodpecker.yml created, not yet configured)
How It Works:
**Package-Level Enforcement** - If you touch ANY file in a package with violations,
you must fix ALL violations in that package before committing. This forces incremental
cleanup while preventing new violations.
Example:
- Edit `apps/api/src/tasks/tasks.service.ts`
- Pre-commit hook runs lint on ENTIRE `@mosaic/api` package
- If `@mosaic/api` has violations → Commit BLOCKED
- Fix all violations in `@mosaic/api` → Commit allowed
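Package-level scoping boils down to mapping each changed file to its workspace package so the hook can lint that whole package. A hypothetical sketch of that mapping (the real pre-commit hook may resolve packages differently, e.g. via `pnpm-workspace.yaml`):

```typescript
// Map a changed file to its workspace package root so the pre-commit
// hook can lint the entire package. Hypothetical helper.
function packageRootFor(filePath: string): string | null {
  const match = /^(apps|packages|plugins)\/([^/]+)/.exec(filePath);
  // Files outside a workspace package (e.g. root README.md) map to null.
  return match ? `${match[1]}/${match[2]}` : null;
}
```

Deduplicating the results over all staged files yields the set of packages whose full lint run gates the commit.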
Next Steps:
1. Fix violations package-by-package as you work in them
2. Priority: Fix explicit `any` types and type safety issues first
3. Configure Woodpecker CI to run quality gates on all PRs
Why This Matters:
Based on validation of 50 real production issues, Quality Rails mechanically prevents ~70%
of quality issues including:
- Hardcoded passwords
- Type safety violations
- SQL injection vulnerabilities
- Build failures
- Test coverage gaps
**Mechanical enforcement works. Process compliance doesn't.**
See `docs/quality-rails-status.md` for detailed roadmap and violation breakdown.
### Example TDD Session
```bash
# 1. RED - Write failing test
# Edit: feature.service.spec.ts
# Add test for getUserById()
pnpm test:watch # Watch it fail
git add feature.service.spec.ts
git commit -m "test(#42): add test for getUserById"
# 2. GREEN - Implement minimal code
# Edit: feature.service.ts
# Add getUserById() method
pnpm test:watch # Watch it pass
git add feature.service.ts
git commit -m "feat(#42): implement getUserById"
# 3. REFACTOR - Improve code quality
# Edit: feature.service.ts
# Extract helper, improve naming
pnpm test:watch # Ensure still passing
git add feature.service.ts
git commit -m "refactor(#42): extract user mapping logic"
```
## Docker Deployment
### Turnkey (includes everything)
```bash
docker compose up -d
```
### Customized (external services)
Create `docker-compose.override.yml` to:
- Point to external PostgreSQL/Valkey/Ollama
- Disable bundled services
See `docs/DOCKER.md` for details.
## Key Documentation
| Document | Purpose |
| ------------------------- | --------------------- |
| docs/SETUP.md | Installation guide |
| docs/CONFIGURATION.md | All config options |
| docs/DESIGN-PRINCIPLES.md | PDA-friendly patterns |
| docs/DOCKER.md | Docker deployment |
| docs/API.md | API documentation |
## Related Repositories
| Repo | Purpose |
| ------------ | -------------------------------------------- |
| jarvis-brain | Original JSON-based brain (migration source) |
| MoltBot | Stock messaging gateway |
---
Mosaic Stack v0.0.x — Building the future of personal assistants.
If you were started from `CLAUDE.md`, continue by reading `AGENTS.md` now.


@@ -90,7 +90,7 @@ docker compose down
If you prefer manual installation, you'll need:
- **Docker mode:** Docker 24+ and Docker Compose
- **Native mode:** Node.js 22+, pnpm 10+, PostgreSQL 17+
- **Native mode:** Node.js 24+, pnpm 10+, PostgreSQL 17+
The installer handles these automatically.

SOUL.md (new file, +20)

@@ -0,0 +1,20 @@
# Mosaic Stack Soul
You are Jarvis for the Mosaic Stack repository, running on the current agent runtime.
## Behavioral Invariants
- Identity first: answer identity prompts as Jarvis for this repository.
- Implementation detail second: runtime (Codex/Claude/OpenCode/etc.) is secondary metadata.
- Be proactive: surface risks, blockers, and next actions without waiting.
- Be calm and clear: keep responses concise, chunked, and PDA-friendly.
- Respect canonical sources:
- Repo operations and conventions: `AGENTS.md`
- Machine-wide rails: `~/.mosaic/STANDARDS.md`
- Repo lifecycle hooks: `.mosaic/repo-hooks.sh`
## Guardrails
- Do not claim completion without verification evidence.
- Do not bypass lint/type/test quality gates.
- Prefer explicit assumptions and concrete file/command references.


@@ -12,7 +12,10 @@ import { PrismaClient, Prisma } from "@prisma/client";
import { randomUUID as uuid } from "crypto";
import { runWithRlsClient, getRlsClient } from "../prisma/rls-context.provider";
describe.skipIf(!process.env.DATABASE_URL)(
const shouldRunDbIntegrationTests =
process.env.RUN_DB_TESTS === "true" && Boolean(process.env.DATABASE_URL);
describe.skipIf(!shouldRunDbIntegrationTests)(
"Auth Tables RLS Policies (requires DATABASE_URL)",
() => {
let prisma: PrismaClient;
@@ -28,7 +31,7 @@ describe.skipIf(!process.env.DATABASE_URL)(
beforeAll(async () => {
// Skip setup if DATABASE_URL is not available
if (!process.env.DATABASE_URL) {
if (!shouldRunDbIntegrationTests) {
return;
}
@@ -49,7 +52,7 @@ describe.skipIf(!process.env.DATABASE_URL)(
afterAll(async () => {
// Skip cleanup if DATABASE_URL is not available or prisma not initialized
if (!process.env.DATABASE_URL || !prisma) {
if (!shouldRunDbIntegrationTests || !prisma) {
return;
}
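The gating condition this PR repeats across several spec files can live in one shared helper. A sketch of the pattern the diffs introduce; the helper name and its location are assumptions, not repository code:

```typescript
// Opt-in gate for DB integration tests: both the explicit flag and a
// connection string must be present, mirroring the diffs above.
function shouldRunDbIntegrationTests(env: Record<string, string | undefined>): boolean {
  return env.RUN_DB_TESTS === "true" && Boolean(env.DATABASE_URL);
}
```

Each spec could then use `describe.skipIf(!shouldRunDbIntegrationTests(process.env))(…)` instead of duplicating the boolean, so a merely-present `DATABASE_URL` no longer accidentally runs integration suites.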


@@ -93,7 +93,10 @@ export class MatrixRoomService {
select: { matrixRoomId: true },
});
return workspace?.matrixRoomId ?? null;
if (!workspace) {
return null;
}
return workspace.matrixRoomId ?? null;
}
/**


@@ -15,7 +15,12 @@
import { describe, it, expect, beforeAll, afterAll } from "vitest";
import { PrismaClient, CredentialType, CredentialScope } from "@prisma/client";
describe("UserCredential Model", () => {
const shouldRunDbIntegrationTests =
process.env.RUN_DB_TESTS === "true" && Boolean(process.env.DATABASE_URL);
const describeFn = shouldRunDbIntegrationTests ? describe : describe.skip;
describeFn("UserCredential Model", () => {
let prisma: PrismaClient;
let testUserId: string;
let testWorkspaceId: string;
@@ -23,8 +28,8 @@ describe("UserCredential Model", () => {
beforeAll(async () => {
// Note: These tests require a running database
// They will be skipped in CI if DATABASE_URL is not set
if (!process.env.DATABASE_URL) {
console.warn("DATABASE_URL not set, skipping UserCredential model tests");
if (!shouldRunDbIntegrationTests) {
console.warn("Skipping UserCredential model tests (set RUN_DB_TESTS=true and DATABASE_URL)");
return;
}


@@ -16,7 +16,9 @@ import { JOB_CREATED, JOB_STARTED, STEP_STARTED } from "./event-types";
* NOTE: These tests require a real database connection with realistic data volume.
* Run with: pnpm test:api -- job-events.performance.spec.ts
*/
const describeFn = process.env.DATABASE_URL ? describe : describe.skip;
const shouldRunDbIntegrationTests =
process.env.RUN_DB_TESTS === "true" && Boolean(process.env.DATABASE_URL);
const describeFn = shouldRunDbIntegrationTests ? describe : describe.skip;
describeFn("JobEventsService Performance", () => {
let service: JobEventsService;


@@ -27,7 +27,9 @@ async function isFulltextSearchConfigured(prisma: PrismaClient): Promise<boolean
* Skip when DATABASE_URL is not set. Tests that require the trigger/index
* will be skipped if the database migration hasn't been applied.
*/
const describeFn = process.env.DATABASE_URL ? describe : describe.skip;
const shouldRunDbIntegrationTests =
process.env.RUN_DB_TESTS === "true" && Boolean(process.env.DATABASE_URL);
const describeFn = shouldRunDbIntegrationTests ? describe : describe.skip;
describeFn("Full-Text Search Setup (Integration)", () => {
let prisma: PrismaClient;


@@ -3,6 +3,7 @@ import { Test, TestingModule } from "@nestjs/testing";
import { ConfigModule } from "@nestjs/config";
import { MosaicTelemetryModule } from "./mosaic-telemetry.module";
import { MosaicTelemetryService } from "./mosaic-telemetry.service";
import { PrismaService } from "../prisma/prisma.service";
// Mock the telemetry client to avoid real HTTP calls
vi.mock("@mosaicstack/telemetry-client", async (importOriginal) => {
@@ -56,6 +57,30 @@ vi.mock("@mosaicstack/telemetry-client", async (importOriginal) => {
describe("MosaicTelemetryModule", () => {
let module: TestingModule;
const sharedTestEnv = {
ENCRYPTION_KEY: "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
};
const mockPrismaService = {
onModuleInit: vi.fn(),
onModuleDestroy: vi.fn(),
$connect: vi.fn(),
$disconnect: vi.fn(),
};
const buildTestModule = async (env: Record<string, string>): Promise<TestingModule> =>
Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [() => ({ ...env, ...sharedTestEnv })],
}),
MosaicTelemetryModule,
],
})
.overrideProvider(PrismaService)
.useValue(mockPrismaService)
.compile();
beforeEach(() => {
vi.clearAllMocks();
@@ -63,40 +88,18 @@ describe("MosaicTelemetryModule", () => {
describe("module initialization", () => {
it("should compile the module successfully", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "false",
});
expect(module).toBeDefined();
await module.close();
});
it("should provide MosaicTelemetryService", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "false",
});
const service = module.get<MosaicTelemetryService>(MosaicTelemetryService);
expect(service).toBeDefined();
@@ -106,20 +109,9 @@ describe("MosaicTelemetryModule", () => {
});
it("should export MosaicTelemetryService for injection in other modules", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "false",
});
const service = module.get(MosaicTelemetryService);
expect(service).toBeDefined();
@@ -130,24 +122,13 @@ describe("MosaicTelemetryModule", () => {
describe("lifecycle integration", () => {
it("should initialize service on module init when enabled", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "true",
MOSAIC_TELEMETRY_SERVER_URL: "https://tel.test.local",
MOSAIC_TELEMETRY_API_KEY: "a".repeat(64),
MOSAIC_TELEMETRY_INSTANCE_ID: "550e8400-e29b-41d4-a716-446655440000",
MOSAIC_TELEMETRY_DRY_RUN: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "true",
MOSAIC_TELEMETRY_SERVER_URL: "https://tel.test.local",
MOSAIC_TELEMETRY_API_KEY: "a".repeat(64),
MOSAIC_TELEMETRY_INSTANCE_ID: "550e8400-e29b-41d4-a716-446655440000",
MOSAIC_TELEMETRY_DRY_RUN: "false",
});
await module.init();
@@ -158,20 +139,9 @@ describe("MosaicTelemetryModule", () => {
});
it("should not start client when disabled via env", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "false",
});
await module.init();
@@ -182,24 +152,13 @@ describe("MosaicTelemetryModule", () => {
});
it("should cleanly shut down on module destroy", async () => {
module = await Test.createTestingModule({
imports: [
ConfigModule.forRoot({
isGlobal: true,
envFilePath: [],
load: [
() => ({
MOSAIC_TELEMETRY_ENABLED: "true",
MOSAIC_TELEMETRY_SERVER_URL: "https://tel.test.local",
MOSAIC_TELEMETRY_API_KEY: "a".repeat(64),
MOSAIC_TELEMETRY_INSTANCE_ID: "550e8400-e29b-41d4-a716-446655440000",
MOSAIC_TELEMETRY_DRY_RUN: "false",
}),
],
}),
MosaicTelemetryModule,
],
}).compile();
module = await buildTestModule({
MOSAIC_TELEMETRY_ENABLED: "true",
MOSAIC_TELEMETRY_SERVER_URL: "https://tel.test.local",
MOSAIC_TELEMETRY_API_KEY: "a".repeat(64),
MOSAIC_TELEMETRY_INSTANCE_ID: "550e8400-e29b-41d4-a716-446655440000",
MOSAIC_TELEMETRY_DRY_RUN: "false",
});
await module.init();


@@ -156,7 +156,7 @@ describe("PrismaService", () => {
it("should set workspace context variables in transaction", async () => {
const userId = "user-123";
const workspaceId = "workspace-456";
const executeRawSpy = vi.spyOn(service, "$executeRaw").mockResolvedValue(0);
vi.spyOn(service, "$executeRaw").mockResolvedValue(0);
// Mock $transaction to execute the callback with a mock tx client
const mockTx = {
@@ -195,7 +195,6 @@ describe("PrismaService", () => {
};
// Mock both methods at the same time to avoid spy issues
const originalSetContext = service.setWorkspaceContext.bind(service);
const setContextCalls: [string, string, unknown][] = [];
service.setWorkspaceContext = vi.fn().mockImplementation((uid, wid, tx) => {
setContextCalls.push([uid, wid, tx]);


@@ -3,6 +3,7 @@ import { PrismaClient } from "@prisma/client";
import { VaultService } from "../vault/vault.service";
import { createAccountEncryptionExtension } from "./account-encryption.extension";
import { createLlmEncryptionExtension } from "./llm-encryption.extension";
import { getRlsClient } from "./rls-context.provider";
/**
* Prisma service that manages database connection lifecycle
@@ -177,6 +178,13 @@ export class PrismaService extends PrismaClient implements OnModuleInit, OnModul
workspaceId: string,
fn: (tx: PrismaClient) => Promise<T>
): Promise<T> {
const rlsClient = getRlsClient();
if (rlsClient) {
await this.setWorkspaceContext(userId, workspaceId, rlsClient as unknown as PrismaClient);
return fn(rlsClient as unknown as PrismaClient);
}
return this.$transaction(async (tx) => {
await this.setWorkspaceContext(userId, workspaceId, tx as PrismaClient);
return fn(tx as PrismaClient);


@@ -25,6 +25,8 @@ describe("TasksController", () => {
const request = context.switchToHttp().getRequest();
request.user = {
id: "550e8400-e29b-41d4-a716-446655440002",
email: "test@example.com",
name: "Test User",
workspaceId: "550e8400-e29b-41d4-a716-446655440001",
};
return true;
@@ -46,6 +48,8 @@ describe("TasksController", () => {
const mockRequest = {
user: {
id: mockUserId,
email: "test@example.com",
name: "Test User",
workspaceId: mockWorkspaceId,
},
};
@@ -132,13 +136,16 @@ describe("TasksController", () => {
mockTasksService.findAll.mockResolvedValue(paginatedResult);
const result = await controller.findAll(query, mockWorkspaceId);
const result = await controller.findAll(query, mockWorkspaceId, mockRequest.user);
expect(result).toEqual(paginatedResult);
expect(service.findAll).toHaveBeenCalledWith({
...query,
workspaceId: mockWorkspaceId,
});
expect(service.findAll).toHaveBeenCalledWith(
{
...query,
workspaceId: mockWorkspaceId,
},
mockUserId
);
});
it("should extract workspaceId from request.user if not in query", async () => {
@@ -149,12 +156,13 @@ describe("TasksController", () => {
meta: { total: 0, page: 1, limit: 50, totalPages: 0 },
});
await controller.findAll(query as any, mockWorkspaceId);
await controller.findAll(query as any, mockWorkspaceId, mockRequest.user);
expect(service.findAll).toHaveBeenCalledWith(
expect.objectContaining({
workspaceId: mockWorkspaceId,
})
}),
mockUserId
);
});
});
@@ -163,10 +171,10 @@ describe("TasksController", () => {
it("should return a task by id", async () => {
mockTasksService.findOne.mockResolvedValue(mockTask);
const result = await controller.findOne(mockTaskId, mockWorkspaceId);
const result = await controller.findOne(mockTaskId, mockWorkspaceId, mockRequest.user);
expect(result).toEqual(mockTask);
expect(service.findOne).toHaveBeenCalledWith(mockTaskId, mockWorkspaceId);
expect(service.findOne).toHaveBeenCalledWith(mockTaskId, mockWorkspaceId, mockUserId);
});
it("should throw error if workspaceId not found", async () => {
@@ -175,10 +183,10 @@ describe("TasksController", () => {
// We can test that the controller properly uses the provided workspaceId instead
mockTasksService.findOne.mockResolvedValue(mockTask);
const result = await controller.findOne(mockTaskId, mockWorkspaceId);
const result = await controller.findOne(mockTaskId, mockWorkspaceId, mockRequest.user);
expect(result).toEqual(mockTask);
expect(service.findOne).toHaveBeenCalledWith(mockTaskId, mockWorkspaceId);
expect(service.findOne).toHaveBeenCalledWith(mockTaskId, mockWorkspaceId, mockUserId);
});
});

View File

@@ -53,8 +53,12 @@ export class TasksController {
*/
@Get()
@RequirePermission(Permission.WORKSPACE_ANY)
async findAll(@Query() query: QueryTasksDto, @Workspace() workspaceId: string) {
return this.tasksService.findAll(Object.assign({}, query, { workspaceId }));
async findAll(
@Query() query: QueryTasksDto,
@Workspace() workspaceId: string,
@CurrentUser() user: AuthenticatedUser
) {
return this.tasksService.findAll(Object.assign({}, query, { workspaceId }), user.id);
}
/**
@@ -64,8 +68,12 @@ export class TasksController {
*/
@Get(":id")
@RequirePermission(Permission.WORKSPACE_ANY)
async findOne(@Param("id") id: string, @Workspace() workspaceId: string) {
return this.tasksService.findOne(id, workspaceId);
async findOne(
@Param("id") id: string,
@Workspace() workspaceId: string,
@CurrentUser() user: AuthenticatedUser
) {
return this.tasksService.findOne(id, workspaceId, user.id);
}
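For context on the new controller signature: the `@CurrentUser()` decorator is assumed from the auth module (it is not part of this diff). A framework-free sketch of what it extracts — the user object the authentication guard attaches to the request, with the `AuthenticatedUser` shape mirroring the mock in the controller tests:

```typescript
// Hypothetical, framework-free sketch; names are illustrative, not from this diff.
interface AuthenticatedUser {
  id: string;
  email: string;
  name: string;
  workspaceId: string;
}

// Extract the user the auth guard placed on the request, failing loudly if absent.
function currentUserFromRequest(request: { user?: AuthenticatedUser }): AuthenticatedUser {
  if (!request.user) {
    throw new Error("No authenticated user on request; is the auth guard applied?");
  }
  return request.user;
}
```

In NestJS this logic would typically be wrapped in `createParamDecorator`, which is how parameter decorators like the assumed `@CurrentUser()` are built.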
/**

View File

@@ -21,6 +21,7 @@ describe("TasksService", () => {
update: vi.fn(),
delete: vi.fn(),
},
withWorkspaceContext: vi.fn(),
};
const mockActivityService = {
@@ -75,6 +76,9 @@ describe("TasksService", () => {
// Clear all mocks before each test
vi.clearAllMocks();
mockPrismaService.withWorkspaceContext.mockImplementation(async (_userId, _workspaceId, fn) => {
return fn(mockPrismaService as unknown as PrismaService);
});
});
it("should be defined", () => {
@@ -95,6 +99,11 @@ describe("TasksService", () => {
const result = await service.create(mockWorkspaceId, mockUserId, createDto);
expect(result).toEqual(mockTask);
expect(prisma.withWorkspaceContext).toHaveBeenCalledWith(
mockUserId,
mockWorkspaceId,
expect.any(Function)
);
expect(prisma.task.create).toHaveBeenCalledWith({
data: {
title: createDto.title,
@@ -177,6 +186,29 @@ describe("TasksService", () => {
});
});
it("should use workspace context when userId is provided", async () => {
mockPrismaService.task.findMany.mockResolvedValue([mockTask]);
mockPrismaService.task.count.mockResolvedValue(1);
await service.findAll({ workspaceId: mockWorkspaceId }, mockUserId);
expect(prisma.withWorkspaceContext).toHaveBeenCalledWith(
mockUserId,
mockWorkspaceId,
expect.any(Function)
);
});
it("should fall back to direct Prisma access when userId is missing", async () => {
mockPrismaService.task.findMany.mockResolvedValue([mockTask]);
mockPrismaService.task.count.mockResolvedValue(1);
await service.findAll({ workspaceId: mockWorkspaceId });
expect(prisma.withWorkspaceContext).not.toHaveBeenCalled();
expect(prisma.task.findMany).toHaveBeenCalled();
});
it("should filter by status", async () => {
mockPrismaService.task.findMany.mockResolvedValue([mockTask]);
mockPrismaService.task.count.mockResolvedValue(1);

View File

@@ -1,8 +1,7 @@
import { Injectable, NotFoundException } from "@nestjs/common";
import { Prisma, Task } from "@prisma/client";
import { Prisma, Task, TaskStatus, TaskPriority, type PrismaClient } from "@prisma/client";
import { PrismaService } from "../prisma/prisma.service";
import { ActivityService } from "../activity/activity.service";
import { TaskStatus, TaskPriority } from "@prisma/client";
import type { CreateTaskDto, UpdateTaskDto, QueryTasksDto } from "./dto";
type TaskWithRelations = Task & {
@@ -24,6 +23,18 @@ export class TasksService {
private readonly activityService: ActivityService
) {}
private async withWorkspaceContextIfAvailable<T>(
workspaceId: string | undefined,
userId: string | undefined,
fn: (client: PrismaClient) => Promise<T>
): Promise<T> {
if (workspaceId && userId && typeof this.prisma.withWorkspaceContext === "function") {
return this.prisma.withWorkspaceContext(userId, workspaceId, fn);
}
return fn(this.prisma);
}
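The `withWorkspaceContext` method this helper delegates to is defined on `PrismaService`, outside this diff. A hypothetical sketch of the contract the service relies on — run `fn` inside one transaction whose session variables identify the caller, so Postgres row-level-security policies can scope every query in it:

```typescript
// Hypothetical sketch; the real PrismaService implementation is not in this diff.
interface ContextClient {
  $executeRawUnsafe(sql: string, ...values: unknown[]): Promise<number>;
  $transaction<R>(fn: (tx: ContextClient) => Promise<R>): Promise<R>;
}

async function withWorkspaceContext<T>(
  client: ContextClient,
  userId: string,
  workspaceId: string,
  fn: (tx: ContextClient) => Promise<T>
): Promise<T> {
  return client.$transaction(async (tx) => {
    // set_config(..., true) is transaction-local: it vanishes on commit/rollback,
    // so RLS policies reading app.user_id/app.workspace_id only see this caller.
    await tx.$executeRawUnsafe("SELECT set_config('app.user_id', $1, true)", userId);
    await tx.$executeRawUnsafe("SELECT set_config('app.workspace_id', $1, true)", workspaceId);
    return fn(tx);
  });
}
```

This also explains the `fn(this.prisma)` fallback above: without a `userId` there is nothing to bind to the session, so the helper degrades to direct client access.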
/**
* Create a new task
*/
@@ -66,19 +77,21 @@ export class TasksService {
data.completedAt = new Date();
}
const task = await this.prisma.task.create({
data,
include: {
assignee: {
select: { id: true, name: true, email: true },
const task = await this.withWorkspaceContextIfAvailable(workspaceId, userId, async (client) => {
return client.task.create({
data,
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
});
});
// Log activity
@@ -92,7 +105,10 @@ export class TasksService {
/**
* Get paginated tasks with filters
*/
async findAll(query: QueryTasksDto): Promise<{
async findAll(
query: QueryTasksDto,
userId?: string
): Promise<{
data: Omit<TaskWithRelations, "subtasks">[];
meta: {
total: number;
@@ -143,28 +159,34 @@ export class TasksService {
}
// Execute queries in parallel
const [data, total] = await Promise.all([
this.prisma.task.findMany({
where,
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
orderBy: {
createdAt: "desc",
},
skip,
take: limit,
}),
this.prisma.task.count({ where }),
]);
const [data, total] = await this.withWorkspaceContextIfAvailable(
query.workspaceId,
userId,
async (client) => {
return Promise.all([
client.task.findMany({
where,
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
orderBy: {
createdAt: "desc",
},
skip,
take: limit,
}),
client.task.count({ where }),
]);
}
);
return {
data,
@@ -180,30 +202,32 @@ export class TasksService {
/**
* Get a single task by ID
*/
async findOne(id: string, workspaceId: string): Promise<TaskWithRelations> {
const task = await this.prisma.task.findUnique({
where: {
id,
workspaceId,
},
include: {
assignee: {
select: { id: true, name: true, email: true },
async findOne(id: string, workspaceId: string, userId?: string): Promise<TaskWithRelations> {
const task = await this.withWorkspaceContextIfAvailable(workspaceId, userId, async (client) => {
return client.task.findUnique({
where: {
id,
workspaceId,
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
subtasks: {
include: {
assignee: {
select: { id: true, name: true, email: true },
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
subtasks: {
include: {
assignee: {
select: { id: true, name: true, email: true },
},
},
},
},
},
});
});
if (!task) {
@@ -222,82 +246,89 @@ export class TasksService {
userId: string,
updateTaskDto: UpdateTaskDto
): Promise<Omit<TaskWithRelations, "subtasks">> {
// Verify task exists
const existingTask = await this.prisma.task.findUnique({
where: { id, workspaceId },
});
const { task, existingTask } = await this.withWorkspaceContextIfAvailable(
workspaceId,
userId,
async (client) => {
const existingTask = await client.task.findUnique({
where: { id, workspaceId },
});
if (!existingTask) {
throw new NotFoundException(`Task with ID ${id} not found`);
}
if (!existingTask) {
throw new NotFoundException(`Task with ID ${id} not found`);
}
// Build update data - only include defined fields
const data: Prisma.TaskUpdateInput = {};
// Build update data - only include defined fields
const data: Prisma.TaskUpdateInput = {};
if (updateTaskDto.title !== undefined) {
data.title = updateTaskDto.title;
}
if (updateTaskDto.description !== undefined) {
data.description = updateTaskDto.description;
}
if (updateTaskDto.status !== undefined) {
data.status = updateTaskDto.status;
}
if (updateTaskDto.priority !== undefined) {
data.priority = updateTaskDto.priority;
}
if (updateTaskDto.dueDate !== undefined) {
data.dueDate = updateTaskDto.dueDate;
}
if (updateTaskDto.sortOrder !== undefined) {
data.sortOrder = updateTaskDto.sortOrder;
}
if (updateTaskDto.metadata !== undefined) {
data.metadata = updateTaskDto.metadata as unknown as Prisma.InputJsonValue;
}
if (updateTaskDto.assigneeId !== undefined && updateTaskDto.assigneeId !== null) {
data.assignee = { connect: { id: updateTaskDto.assigneeId } };
}
if (updateTaskDto.projectId !== undefined && updateTaskDto.projectId !== null) {
data.project = { connect: { id: updateTaskDto.projectId } };
}
if (updateTaskDto.parentId !== undefined && updateTaskDto.parentId !== null) {
data.parent = { connect: { id: updateTaskDto.parentId } };
}
if (updateTaskDto.title !== undefined) {
data.title = updateTaskDto.title;
}
if (updateTaskDto.description !== undefined) {
data.description = updateTaskDto.description;
}
if (updateTaskDto.status !== undefined) {
data.status = updateTaskDto.status;
}
if (updateTaskDto.priority !== undefined) {
data.priority = updateTaskDto.priority;
}
if (updateTaskDto.dueDate !== undefined) {
data.dueDate = updateTaskDto.dueDate;
}
if (updateTaskDto.sortOrder !== undefined) {
data.sortOrder = updateTaskDto.sortOrder;
}
if (updateTaskDto.metadata !== undefined) {
data.metadata = updateTaskDto.metadata as unknown as Prisma.InputJsonValue;
}
if (updateTaskDto.assigneeId !== undefined && updateTaskDto.assigneeId !== null) {
data.assignee = { connect: { id: updateTaskDto.assigneeId } };
}
if (updateTaskDto.projectId !== undefined && updateTaskDto.projectId !== null) {
data.project = { connect: { id: updateTaskDto.projectId } };
}
if (updateTaskDto.parentId !== undefined && updateTaskDto.parentId !== null) {
data.parent = { connect: { id: updateTaskDto.parentId } };
}
// Handle completedAt based on status changes
if (updateTaskDto.status) {
if (
updateTaskDto.status === TaskStatus.COMPLETED &&
existingTask.status !== TaskStatus.COMPLETED
) {
data.completedAt = new Date();
} else if (
updateTaskDto.status !== TaskStatus.COMPLETED &&
existingTask.status === TaskStatus.COMPLETED
) {
data.completedAt = null;
// Handle completedAt based on status changes
if (updateTaskDto.status) {
if (
updateTaskDto.status === TaskStatus.COMPLETED &&
existingTask.status !== TaskStatus.COMPLETED
) {
data.completedAt = new Date();
} else if (
updateTaskDto.status !== TaskStatus.COMPLETED &&
existingTask.status === TaskStatus.COMPLETED
) {
data.completedAt = null;
}
}
const task = await client.task.update({
where: {
id,
workspaceId,
},
data,
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
});
return { task, existingTask };
}
}
const task = await this.prisma.task.update({
where: {
id,
workspaceId,
},
data,
include: {
assignee: {
select: { id: true, name: true, email: true },
},
creator: {
select: { id: true, name: true, email: true },
},
project: {
select: { id: true, name: true, color: true },
},
},
});
);
// Log activities
await this.activityService.logTaskUpdated(workspaceId, userId, id, {
@@ -332,20 +363,23 @@ export class TasksService {
* Delete a task
*/
async remove(id: string, workspaceId: string, userId: string): Promise<void> {
// Verify task exists
const task = await this.prisma.task.findUnique({
where: { id, workspaceId },
});
const task = await this.withWorkspaceContextIfAvailable(workspaceId, userId, async (client) => {
const task = await client.task.findUnique({
where: { id, workspaceId },
});
if (!task) {
throw new NotFoundException(`Task with ID ${id} not found`);
}
if (!task) {
throw new NotFoundException(`Task with ID ${id} not found`);
}
await this.prisma.task.delete({
where: {
id,
workspaceId,
},
await client.task.delete({
where: {
id,
workspaceId,
},
});
return task;
});
// Log activity

View File

@@ -29,7 +29,7 @@ export const orchestratorConfig = registerAs("orchestrator", () => ({
defaultImage: process.env.SANDBOX_DEFAULT_IMAGE ?? "node:20-alpine",
defaultMemoryMB: parseInt(process.env.SANDBOX_DEFAULT_MEMORY_MB ?? "512", 10),
defaultCpuLimit: parseFloat(process.env.SANDBOX_DEFAULT_CPU_LIMIT ?? "1.0"),
networkMode: process.env.SANDBOX_NETWORK_MODE ?? "bridge",
networkMode: process.env.SANDBOX_NETWORK_MODE ?? "none",
},
coordinator: {
url: process.env.COORDINATOR_URL ?? "http://localhost:8000",
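The default flip from `"bridge"` to `"none"` is defense-in-depth: a sandbox container gets no network access unless an operator opts in. A minimal sketch of the resolution behavior, using a hypothetical helper name:

```typescript
// Hypothetical helper mirroring the registerAs default above: sandboxes run
// with no network ("none") unless SANDBOX_NETWORK_MODE explicitly opts in.
function resolveSandboxNetworkMode(env: Record<string, string | undefined>): string {
  return env.SANDBOX_NETWORK_MODE ?? "none";
}
```

Operators who need the previous behavior can still set `SANDBOX_NETWORK_MODE=bridge` in the environment.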

View File

@@ -104,19 +104,28 @@ describe("LoginPage", (): void => {
expect(screen.getByText("Loading authentication options")).toBeInTheDocument();
});
it("renders the page heading and description", (): void => {
it("renders the page heading and description", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
expect(screen.getByRole("heading", { level: 1 })).toHaveTextContent("Welcome to Mosaic Stack");
expect(screen.getByText(/Your personal assistant platform/i)).toBeInTheDocument();
});
it("has proper layout styling", (): void => {
it("has proper layout styling", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
const { container } = render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
const main = container.querySelector("main");
expect(main).toHaveClass("flex", "min-h-screen");
});
@@ -430,37 +439,56 @@ describe("LoginPage", (): void => {
/* ------------------------------------------------------------------ */
describe("responsive layout", (): void => {
it("applies mobile-first padding to main element", (): void => {
it("applies mobile-first padding to main element", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
const { container } = render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
const main = container.querySelector("main");
expect(main).toHaveClass("p-4", "sm:p-8");
});
it("applies responsive text size to heading", (): void => {
it("applies responsive text size to heading", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
const heading = screen.getByRole("heading", { level: 1 });
expect(heading).toHaveClass("text-2xl", "sm:text-4xl");
});
it("applies responsive padding to card container", (): void => {
it("applies responsive padding to card container", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
const { container } = render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
const card = container.querySelector(".bg-white");
expect(card).toHaveClass("p-4", "sm:p-8");
});
it("card container has full width with max-width constraint", (): void => {
it("card container has full width with max-width constraint", async (): Promise<void> => {
mockFetchConfig(EMAIL_ONLY_CONFIG);
const { container } = render(<LoginPage />);
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
});
const wrapper = container.querySelector(".max-w-md");
expect(wrapper).toHaveClass("w-full", "max-w-md");
@@ -539,7 +567,9 @@ describe("LoginPage", (): void => {
});
// LoginForm auto-focuses the email input on mount
expect(screen.getByLabelText(/email/i)).toHaveFocus();
await waitFor((): void => {
expect(screen.getByLabelText(/email/i)).toHaveFocus();
});
// Tab forward through form: email -> password -> submit
await user.tab();

View File

@@ -1,5 +1,6 @@
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, waitFor, fireEvent } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import type { ReactNode } from "react";
import UsagePage from "./page";
@@ -113,6 +114,15 @@ function setupMocks(overrides?: { empty?: boolean; error?: boolean }): void {
vi.mocked(fetchTaskOutcomes).mockResolvedValue(mockTaskOutcomes);
}
function setupPendingMocks(): void {
// eslint-disable-next-line @typescript-eslint/no-empty-function -- intentionally unresolved for loading-state test
const pending = new Promise<never>(() => {});
vi.mocked(fetchUsageSummary).mockReturnValue(pending);
vi.mocked(fetchTokenUsage).mockReturnValue(pending);
vi.mocked(fetchCostBreakdown).mockReturnValue(pending);
vi.mocked(fetchTaskOutcomes).mockReturnValue(pending);
}
// ─── Tests ───────────────────────────────────────────────────────────
describe("UsagePage", (): void => {
@@ -120,23 +130,32 @@ describe("UsagePage", (): void => {
vi.clearAllMocks();
});
it("should render the page title and subtitle", (): void => {
it("should render the page title and subtitle", async (): Promise<void> => {
setupMocks();
render(<UsagePage />);
await waitFor((): void => {
expect(screen.getByTestId("summary-cards")).toBeInTheDocument();
});
expect(screen.getByRole("heading", { level: 1 })).toHaveTextContent("Usage");
expect(screen.getByText("Token usage and cost overview")).toBeInTheDocument();
});
it("should have proper layout structure", (): void => {
it("should have proper layout structure", async (): Promise<void> => {
setupMocks();
const { container } = render(<UsagePage />);
await waitFor((): void => {
expect(screen.getByTestId("summary-cards")).toBeInTheDocument();
});
const main = container.querySelector("main");
expect(main).toBeInTheDocument();
});
it("should show loading skeleton initially", (): void => {
setupMocks();
setupPendingMocks();
render(<UsagePage />);
expect(screen.getByTestId("loading-skeleton")).toBeInTheDocument();
});
@@ -171,25 +190,34 @@ describe("UsagePage", (): void => {
});
});
it("should render the time range selector with three options", (): void => {
it("should render the time range selector with three options", async (): Promise<void> => {
setupMocks();
render(<UsagePage />);
await waitFor((): void => {
expect(screen.getByTestId("summary-cards")).toBeInTheDocument();
});
expect(screen.getByText("7 Days")).toBeInTheDocument();
expect(screen.getByText("30 Days")).toBeInTheDocument();
expect(screen.getByText("90 Days")).toBeInTheDocument();
});
it("should have 30 Days selected by default", (): void => {
it("should have 30 Days selected by default", async (): Promise<void> => {
setupMocks();
render(<UsagePage />);
await waitFor((): void => {
expect(screen.getByTestId("summary-cards")).toBeInTheDocument();
});
const button30d = screen.getByText("30 Days");
expect(button30d).toHaveAttribute("aria-pressed", "true");
});
it("should change time range when a different option is clicked", async (): Promise<void> => {
setupMocks();
const user = userEvent.setup();
render(<UsagePage />);
// Wait for initial load
@@ -199,7 +227,11 @@ describe("UsagePage", (): void => {
// Click 7 Days
const button7d = screen.getByText("7 Days");
fireEvent.click(button7d);
await user.click(button7d);
await waitFor((): void => {
expect(fetchUsageSummary).toHaveBeenCalledWith("7d");
});
expect(button7d).toHaveAttribute("aria-pressed", "true");
expect(screen.getByText("30 Days")).toHaveAttribute("aria-pressed", "false");

View File

@@ -0,0 +1,59 @@
import { NextResponse } from "next/server";
const DEFAULT_ORCHESTRATOR_URL = "http://localhost:3001";
function getOrchestratorUrl(): string {
return (
process.env.ORCHESTRATOR_URL ??
process.env.NEXT_PUBLIC_ORCHESTRATOR_URL ??
process.env.NEXT_PUBLIC_API_URL ??
DEFAULT_ORCHESTRATOR_URL
);
}
/**
* Server-side proxy for orchestrator agent status.
* Keeps ORCHESTRATOR_API_KEY out of browser code.
*/
export async function GET(): Promise<NextResponse> {
const orchestratorApiKey = process.env.ORCHESTRATOR_API_KEY;
if (!orchestratorApiKey) {
return NextResponse.json(
{ error: "ORCHESTRATOR_API_KEY is not configured on the web server." },
{ status: 503 }
);
}
const controller = new AbortController();
const timeout = setTimeout(() => {
controller.abort();
}, 10_000);
try {
const response = await fetch(`${getOrchestratorUrl()}/agents`, {
method: "GET",
headers: {
"Content-Type": "application/json",
"X-API-Key": orchestratorApiKey,
},
cache: "no-store",
signal: controller.signal,
});
const text = await response.text();
return new NextResponse(text, {
status: response.status,
headers: {
"Content-Type": response.headers.get("Content-Type") ?? "application/json",
},
});
} catch (error) {
const message =
error instanceof Error && error.name === "AbortError"
? "Orchestrator request timed out."
: "Unable to reach orchestrator.";
return NextResponse.json({ error: message }, { status: 502 });
} finally {
clearTimeout(timeout);
}
}
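Client code consumes this proxy with no credentials at all, since the route injects `X-API-Key` server-side. A sketch of the browser-side call (the `Agent` shape is assumed from the widget's interface, not defined by the route; `fetchFn` is injectable purely for testability):

```typescript
// Sketch of consuming the proxy route from browser code. No API key appears
// here: the Next.js route handler adds it server-side.
interface Agent {
  agentId: string;
  status: string;
}

async function fetchAgents(fetchFn: typeof fetch = fetch): Promise<Agent[]> {
  const response = await fetchFn("/api/orchestrator/agents", {
    headers: { "Content-Type": "application/json" },
  });
  if (!response.ok) {
    // Covers both the 503 (missing key) and 502 (unreachable/timeout) cases above.
    throw new Error(`Orchestrator proxy returned HTTP ${String(response.status)}`);
  }
  return (await response.json()) as Agent[];
}
```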

View File

@@ -1,4 +1,3 @@
/* eslint-disable @typescript-eslint/no-non-null-assertion */
/* eslint-disable @typescript-eslint/no-unnecessary-condition */
import React from "react";
import { render, screen, waitFor, fireEvent, act } from "@testing-library/react";
@@ -352,10 +351,7 @@ describe("LinkAutocomplete", (): void => {
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should perform debounced search when typing query", async (): Promise<void> => {
vi.useFakeTimers();
it("should perform debounced search when typing query", async (): Promise<void> => {
const mockResults = {
data: [
{
@@ -395,11 +391,6 @@ describe("LinkAutocomplete", (): void => {
// Should not call API immediately
expect(mockApiRequest).not.toHaveBeenCalled();
// Fast-forward 300ms and let promises resolve
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(mockApiRequest).toHaveBeenCalledWith(
"/api/knowledge/search?q=test&limit=10",
@@ -411,14 +402,9 @@ describe("LinkAutocomplete", (): void => {
await waitFor(() => {
expect(screen.getByText("Test Entry")).toBeInTheDocument();
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should navigate results with arrow keys", async (): Promise<void> => {
vi.useFakeTimers();
it("should navigate results with arrow keys", async (): Promise<void> => {
const mockResults = {
data: [
{
@@ -471,10 +457,6 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("Entry One")).toBeInTheDocument();
});
@@ -500,14 +482,9 @@ describe("LinkAutocomplete", (): void => {
const firstItem = screen.getByText("Entry One").closest("li");
expect(firstItem).toHaveClass("bg-blue-50");
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should insert link on Enter key", async (): Promise<void> => {
vi.useFakeTimers();
it("should insert link on Enter key", async (): Promise<void> => {
const mockResults = {
data: [
{
@@ -544,10 +521,6 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("Test Entry")).toBeInTheDocument();
});
@@ -558,14 +531,9 @@ describe("LinkAutocomplete", (): void => {
await waitFor(() => {
expect(onInsertMock).toHaveBeenCalledWith("[[test-entry|Test Entry]]");
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should insert link on click", async (): Promise<void> => {
vi.useFakeTimers();
it("should insert link on click", async (): Promise<void> => {
const mockResults = {
data: [
{
@@ -602,10 +570,6 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("Test Entry")).toBeInTheDocument();
});
@@ -616,14 +580,9 @@ describe("LinkAutocomplete", (): void => {
await waitFor(() => {
expect(onInsertMock).toHaveBeenCalledWith("[[test-entry|Test Entry]]");
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should close dropdown on Escape key", async (): Promise<void> => {
vi.useFakeTimers();
it("should close dropdown on Escape key", async (): Promise<void> => {
render(<LinkAutocomplete textareaRef={textareaRef} onInsert={onInsertMock} />);
const textarea = textareaRef.current;
@@ -636,28 +595,19 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText(/Start typing to search/)).toBeInTheDocument();
expect(screen.getByText("↑↓ Navigate • Enter Select • Esc Cancel")).toBeInTheDocument();
});
// Press Escape
fireEvent.keyDown(textarea, { key: "Escape" });
await waitFor(() => {
expect(screen.queryByText(/Start typing to search/)).not.toBeInTheDocument();
expect(screen.queryByText("↑↓ Navigate • Enter Select • Esc Cancel")).not.toBeInTheDocument();
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should close dropdown when closing brackets are typed", async (): Promise<void> => {
vi.useFakeTimers();
it("should close dropdown when closing brackets are typed", async (): Promise<void> => {
render(<LinkAutocomplete textareaRef={textareaRef} onInsert={onInsertMock} />);
const textarea = textareaRef.current;
@@ -670,12 +620,8 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText(/Start typing to search/)).toBeInTheDocument();
expect(screen.getByText("↑↓ Navigate • Enter Select • Esc Cancel")).toBeInTheDocument();
});
// Type closing brackets
@@ -686,16 +632,11 @@ describe("LinkAutocomplete", (): void => {
});
await waitFor(() => {
expect(screen.queryByText(/Start typing to search/)).not.toBeInTheDocument();
expect(screen.queryByText("↑↓ Navigate • Enter Select • Esc Cancel")).not.toBeInTheDocument();
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should show 'No entries found' when search returns no results", async (): Promise<void> => {
vi.useFakeTimers();
it("should show 'No entries found' when search returns no results", async (): Promise<void> => {
mockApiRequest.mockResolvedValue({
data: [],
meta: { total: 0, page: 1, limit: 10, totalPages: 0 },
@@ -713,32 +654,24 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("No entries found")).toBeInTheDocument();
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should show loading state while searching", async (): Promise<void> => {
vi.useFakeTimers();
it("should show loading state while searching", async (): Promise<void> => {
// Mock a slow API response
let resolveSearch: (value: unknown) => void;
const searchPromise = new Promise((resolve) => {
let resolveSearch: (value: {
data: unknown[];
meta: { total: number; page: number; limit: number; totalPages: number };
}) => void = () => undefined;
const searchPromise = new Promise<{
data: unknown[];
meta: { total: number; page: number; limit: number; totalPages: number };
}>((resolve) => {
resolveSearch = resolve;
});
mockApiRequest.mockReturnValue(
searchPromise as Promise<{
data: unknown[];
meta: { total: number; page: number; limit: number; totalPages: number };
}>
);
mockApiRequest.mockReturnValue(searchPromise);
render(<LinkAutocomplete textareaRef={textareaRef} onInsert={onInsertMock} />);
@@ -752,16 +685,12 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("Searching...")).toBeInTheDocument();
});
// Resolve the search
resolveSearch!({
resolveSearch({
data: [],
meta: { total: 0, page: 1, limit: 10, totalPages: 0 },
});
@@ -769,14 +698,9 @@ describe("LinkAutocomplete", (): void => {
await waitFor(() => {
expect(screen.queryByText("Searching...")).not.toBeInTheDocument();
});
vi.useRealTimers();
});
// TODO: Fix async/timer interaction - component works but test has timing issues with fake timers
it.skip("should display summary preview for entries", async (): Promise<void> => {
vi.useFakeTimers();
it("should display summary preview for entries", async (): Promise<void> => {
const mockResults = {
data: [
{
@@ -813,14 +737,8 @@ describe("LinkAutocomplete", (): void => {
fireEvent.input(textarea);
});
await act(async () => {
await vi.runAllTimersAsync();
});
await waitFor(() => {
expect(screen.getByText("This is a helpful summary")).toBeInTheDocument();
});
vi.useRealTimers();
});
});

View File

@@ -5,7 +5,6 @@
import { useState, useEffect } from "react";
import { Bot, Activity, AlertCircle, CheckCircle, Clock } from "lucide-react";
import type { WidgetProps } from "@mosaic/shared";
import { ORCHESTRATOR_URL } from "@/lib/config";
interface Agent {
agentId: string;
@@ -29,7 +28,7 @@ export function AgentStatusWidget({ id: _id, config: _config }: WidgetProps): Re
setError(null);
try {
const response = await fetch(`${ORCHESTRATOR_URL}/agents`, {
const response = await fetch("/api/orchestrator/agents", {
headers: {
"Content-Type": "application/json",
},

View File

@@ -8,7 +8,6 @@
import { useState, useEffect } from "react";
import { Activity, CheckCircle, XCircle, Clock, Loader2 } from "lucide-react";
import type { WidgetProps } from "@mosaic/shared";
import { ORCHESTRATOR_URL } from "@/lib/config";
interface AgentTask {
agentId: string;
@@ -100,7 +99,7 @@ export function TaskProgressWidget({ id: _id, config: _config }: WidgetProps): R
useEffect(() => {
const fetchTasks = (): void => {
fetch(`${ORCHESTRATOR_URL}/agents`)
fetch("/api/orchestrator/agents")
.then((res) => {
if (!res.ok) throw new Error(`HTTP ${String(res.status)}`);
return res.json() as Promise<AgentTask[]>;

View File

@@ -1,126 +1,55 @@
/**
* CalendarWidget Component Tests
* Following TDD principles
*/
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, waitFor } from "@testing-library/react";
import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
import { act, render, screen } from "@testing-library/react";
import { CalendarWidget } from "../CalendarWidget";
global.fetch = vi.fn() as typeof global.fetch;
async function finishWidgetLoad(): Promise<void> {
await act(async () => {
await vi.advanceTimersByTimeAsync(500);
});
}
describe("CalendarWidget", (): void => {
beforeEach((): void => {
vi.clearAllMocks();
vi.useFakeTimers();
vi.setSystemTime(new Date("2026-02-01T08:00:00Z"));
});
it("should render loading state initially", (): void => {
vi.mocked(global.fetch).mockImplementation(
() =>
new Promise(() => {
// Intentionally never resolves to keep loading state
})
);
render(<CalendarWidget id="calendar-1" />);
expect(screen.getByText(/loading/i)).toBeInTheDocument();
afterEach((): void => {
vi.useRealTimers();
});
// TODO: Re-enable when CalendarWidget uses fetch API instead of setTimeout mock data
it.skip("should render upcoming events", async (): Promise<void> => {
const mockEvents = [
{
id: "1",
title: "Team Meeting",
startTime: new Date(Date.now() + 3600000).toISOString(),
endTime: new Date(Date.now() + 7200000).toISOString(),
},
{
id: "2",
title: "Project Review",
startTime: new Date(Date.now() + 86400000).toISOString(),
endTime: new Date(Date.now() + 90000000).toISOString(),
},
];
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockEvents),
} as unknown as Response);
it("renders loading state initially", (): void => {
render(<CalendarWidget id="calendar-1" />);
await waitFor(() => {
expect(screen.getByText("Team Meeting")).toBeInTheDocument();
expect(screen.getByText("Project Review")).toBeInTheDocument();
});
expect(screen.getByText("Loading events...")).toBeInTheDocument();
});
// TODO: Re-enable when CalendarWidget uses fetch API instead of setTimeout mock data
it.skip("should handle empty event list", async (): Promise<void> => {
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve([]),
} as unknown as Response);
it("renders upcoming events after loading", async (): Promise<void> => {
render(<CalendarWidget id="calendar-1" />);
await waitFor(() => {
expect(screen.getByText(/no upcoming events/i)).toBeInTheDocument();
});
await finishWidgetLoad();
expect(screen.getByText("Upcoming Events")).toBeInTheDocument();
expect(screen.getByText("Team Standup")).toBeInTheDocument();
expect(screen.getByText("Project Review")).toBeInTheDocument();
expect(screen.getByText("Sprint Planning")).toBeInTheDocument();
});
// TODO: Re-enable when CalendarWidget uses fetch API instead of setTimeout mock data
it.skip("should handle API errors gracefully", async (): Promise<void> => {
vi.mocked(global.fetch).mockRejectedValueOnce(new Error("API Error"));
it("shows relative day labels", async (): Promise<void> => {
render(<CalendarWidget id="calendar-1" />);
await waitFor(() => {
expect(screen.getByText(/error/i)).toBeInTheDocument();
});
await finishWidgetLoad();
expect(screen.getAllByText("Today").length).toBeGreaterThan(0);
expect(screen.getByText("Tomorrow")).toBeInTheDocument();
});
// TODO: Re-enable when CalendarWidget uses fetch API instead of setTimeout mock data
it.skip("should format event times correctly", async (): Promise<void> => {
const now = new Date();
const startTime = new Date(now.getTime() + 3600000); // 1 hour from now
const mockEvents = [
{
id: "1",
title: "Meeting",
startTime: startTime.toISOString(),
endTime: new Date(startTime.getTime() + 3600000).toISOString(),
},
];
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockEvents),
} as unknown as Response);
it("shows event locations when present", async (): Promise<void> => {
render(<CalendarWidget id="calendar-1" />);
await waitFor(() => {
expect(screen.getByText("Meeting")).toBeInTheDocument();
// Should show time in readable format
});
});
await finishWidgetLoad();
// TODO: Re-enable when CalendarWidget uses fetch API and adds calendar-header test id
it.skip("should display current date", async (): Promise<void> => {
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve([]),
} as unknown as Response);
render(<CalendarWidget id="calendar-1" />);
await waitFor(() => {
// Widget should display current date or month
expect(screen.getByTestId("calendar-header")).toBeInTheDocument();
});
expect(screen.getByText("Zoom")).toBeInTheDocument();
expect(screen.getByText("Conference Room A")).toBeInTheDocument();
});
});

View File

@@ -1,138 +1,54 @@
/**
* TasksWidget Component Tests
* Following TDD principles
*/
import { describe, it, expect, vi, beforeEach } from "vitest";
import { render, screen, waitFor } from "@testing-library/react";
import { describe, it, expect, beforeEach, afterEach, vi } from "vitest";
import { act, render, screen } from "@testing-library/react";
import { TasksWidget } from "../TasksWidget";
// Mock fetch for API calls
global.fetch = vi.fn() as typeof global.fetch;
async function finishWidgetLoad(): Promise<void> {
await act(async () => {
await vi.advanceTimersByTimeAsync(500);
});
}
describe("TasksWidget", (): void => {
beforeEach((): void => {
vi.clearAllMocks();
vi.useFakeTimers();
});
it("should render loading state initially", (): void => {
vi.mocked(global.fetch).mockImplementation(
() =>
new Promise(() => {
// Intentionally empty - creates a never-resolving promise for loading state
})
);
render(<TasksWidget id="tasks-1" />);
expect(screen.getByText(/loading/i)).toBeInTheDocument();
afterEach((): void => {
vi.useRealTimers();
});
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should render task statistics", async (): Promise<void> => {
const mockTasks = [
{ id: "1", title: "Task 1", status: "IN_PROGRESS", priority: "HIGH" },
{ id: "2", title: "Task 2", status: "COMPLETED", priority: "MEDIUM" },
{ id: "3", title: "Task 3", status: "NOT_STARTED", priority: "LOW" },
];
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockTasks),
} as unknown as Response);
it("renders loading state initially", (): void => {
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
expect(screen.getByText("3")).toBeInTheDocument(); // Total
expect(screen.getByText("1")).toBeInTheDocument(); // In Progress
expect(screen.getByText("1")).toBeInTheDocument(); // Completed
});
expect(screen.getByText("Loading tasks...")).toBeInTheDocument();
});
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should render task list", async (): Promise<void> => {
const mockTasks = [
{ id: "1", title: "Complete documentation", status: "IN_PROGRESS", priority: "HIGH" },
{ id: "2", title: "Review PRs", status: "NOT_STARTED", priority: "MEDIUM" },
];
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockTasks),
} as unknown as Response);
it("renders default summary stats", async (): Promise<void> => {
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
expect(screen.getByText("Complete documentation")).toBeInTheDocument();
expect(screen.getByText("Review PRs")).toBeInTheDocument();
});
await finishWidgetLoad();
expect(screen.getByText("Total")).toBeInTheDocument();
expect(screen.getByText("In Progress")).toBeInTheDocument();
expect(screen.getByText("Done")).toBeInTheDocument();
expect(screen.getByText("3")).toBeInTheDocument();
});
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should handle empty task list", async (): Promise<void> => {
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve([]),
} as unknown as Response);
it("renders default task rows", async (): Promise<void> => {
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
expect(screen.getByText(/no tasks/i)).toBeInTheDocument();
});
await finishWidgetLoad();
expect(screen.getByText("Complete project documentation")).toBeInTheDocument();
expect(screen.getByText("Review pull requests")).toBeInTheDocument();
expect(screen.getByText("Update dependencies")).toBeInTheDocument();
});
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should handle API errors gracefully", async (): Promise<void> => {
vi.mocked(global.fetch).mockRejectedValueOnce(new Error("API Error"));
it("shows due date labels for each task", async (): Promise<void> => {
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
expect(screen.getByText(/error/i)).toBeInTheDocument();
});
});
await finishWidgetLoad();
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should display priority indicators", async (): Promise<void> => {
const mockTasks = [
{ id: "1", title: "High priority task", status: "IN_PROGRESS", priority: "HIGH" },
];
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockTasks),
} as unknown as Response);
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
expect(screen.getByText("High priority task")).toBeInTheDocument();
// Priority icon should be rendered (high priority = red)
});
});
// TODO: Re-enable when TasksWidget uses fetch API instead of setTimeout mock data
it.skip("should limit displayed tasks to 5", async (): Promise<void> => {
const mockTasks = Array.from({ length: 10 }, (_, i) => ({
id: String(i + 1),
title: `Task ${String(i + 1)}`,
status: "NOT_STARTED",
priority: "MEDIUM",
}));
vi.mocked(global.fetch).mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve(mockTasks),
} as unknown as Response);
render(<TasksWidget id="tasks-1" />);
await waitFor(() => {
const taskElements = screen.getAllByText(/Task \d+/);
expect(taskElements.length).toBeLessThanOrEqual(5);
});
expect(screen.getAllByText(/Due:/).length).toBe(3);
});
});

View File

@@ -4,26 +4,26 @@ Required and optional software for Mosaic Stack development.
## Required
### Node.js 20+
### Node.js 24+
```bash
# Install using nvm (recommended)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install 20
nvm use 20
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash
nvm install 24
nvm use 24
# Verify installation
node --version # Should be v20.x.x
node --version # Should be v24.x.x
```
### pnpm 9+
### pnpm 10+
```bash
# Install globally
npm install -g pnpm@9
npm install -g pnpm@10
# Verify installation
pnpm --version # Should be 9.x.x
pnpm --version # Should be 10.x.x
```
### PostgreSQL 17+
@@ -158,8 +158,8 @@ Configure `OLLAMA_MODE=remote` and `OLLAMA_ENDPOINT` in `.env`.
Check all required tools are installed:
```bash
node --version # v20.x.x or higher
pnpm --version # 9.x.x or higher
node --version # v24.x.x or higher
pnpm --version # 10.x.x or higher
git --version # 2.x.x or higher
docker --version # 24.x.x or higher (if using Docker)
psql --version # 17.x.x or higher (if using native PostgreSQL)

View File

@@ -16,8 +16,8 @@ Thank you for your interest in contributing to Mosaic Stack! This document provi
### Prerequisites
- **Node.js:** 20.0.0 or higher
- **pnpm:** 10.19.0 or higher (package manager)
- **Node.js:** 24.0.0 or higher
- **pnpm:** 10.0.0 or higher (package manager)
- **Docker:** 20.10+ and Docker Compose 2.x+ (for database services)
- **Git:** 2.30+ for version control

View File

@@ -314,3 +314,31 @@
| 12 - QA: Test Coverage | #411 | 4 | 35K |
| 13 - QA R2: Hardening + Tests | #411 | 7 | 57K |
| **Total** | | **64** | **605K** |
---
## 2026-02-17 Full Code/Security/QA Review
**Reviewer:** Jarvis (Codex runtime)
**Scope:** Monorepo code review + security review + QA verification
**Branch:** `fix/auth-frontend-remediation`
### Verification Snapshot
- `pnpm lint`: pass
- `pnpm typecheck`: pass
- `pnpm --filter @mosaic/api test -- src/mosaic-telemetry/mosaic-telemetry.module.spec.ts src/auth/auth-rls.integration.spec.ts src/credentials/user-credential.model.spec.ts src/job-events/job-events.performance.spec.ts src/knowledge/services/fulltext-search.spec.ts`: pass (DB-bound suites intentionally skipped unless `RUN_DB_TESTS=true`)
- `pnpm audit --prod`: pass (0 vulnerabilities after overrides + lock refresh)
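The `RUN_DB_TESTS` gate mentioned above reduces to a small predicate; this is a sketch of the pattern, not the repository's exact helper:

```typescript
// Sketch: DB-bound suites opt in explicitly via RUN_DB_TESTS instead of
// keying off whether DATABASE_URL happens to be set (REV-2026-005).
export function shouldRunDbSuite(env: Record<string, string | undefined>): boolean {
  return env.RUN_DB_TESTS === "true";
}

// In a vitest spec this would select the runner, e.g.:
//   const describeDb = shouldRunDbSuite(process.env) ? describe : describe.skip;
```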
### Remediation Tasks
| id | status | severity | category | description | evidence |
| ------------ | ------ | -------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| REV-2026-001 | done | high | security+functional | Web dashboard widgets call orchestrator `GET /agents` directly without `X-API-Key`, but the orchestrator protects all `/agents` routes with `OrchestratorApiKeyGuard`. This leaves a broken production path or pressure to expose a sensitive API key client-side. Add a server-side proxy/BFF route and remove direct browser calls. | `apps/web/src/app/api/orchestrator/agents/route.ts:1`, `apps/web/src/components/widgets/AgentStatusWidget.tsx:32`, `apps/web/src/components/widgets/TaskProgressWidget.tsx:103` |
| REV-2026-002 | done | high | security | RLS context helpers are now applied in `TasksService` service boundaries (`create`, `findAll`, `findOne`, `update`, `remove`) with safe fallback behavior for test doubles; controller now passes user context for list/detail paths, and regression tests assert context usage. | `apps/api/src/tasks/tasks.service.ts:27`, `apps/api/src/tasks/tasks.controller.ts:54`, `apps/api/src/tasks/tasks.service.spec.ts:15` |
| REV-2026-003 | done | medium | security | Docker sandbox defaults still use `bridge` networking; isolation hardening is incomplete by default. Move default to `none` and explicitly opt in to egress where required. | `apps/orchestrator/src/config/orchestrator.config.ts:32`, `apps/orchestrator/src/spawner/docker-sandbox.service.ts:115`, `apps/orchestrator/src/spawner/docker-sandbox.service.ts:265` |
| REV-2026-004 | done | high | security | Production dependency chain hardened via root overrides: replaced legacy `request` with `@cypress/request`, pinned `tough-cookie` and `qs` to patched ranges, and forced patched `ajv`; lockfile updated and production audit now reports zero vulnerabilities. | `package.json:68`, `pnpm-lock.yaml:1`, `pnpm audit --prod --json` (0 vulnerabilities) |
| REV-2026-005 | done | high | qa | API test suite is not hermetic for default `pnpm test`: database-backed tests run when `DATABASE_URL` exists but credentials are invalid, causing hard failures. Gate integration/perf suites behind explicit integration flag and connectivity preflight, or split commands in turbo pipeline. | `apps/api/src/credentials/user-credential.model.spec.ts:18`, `apps/api/src/knowledge/services/fulltext-search.spec.ts:30`, `apps/api/src/job-events/job-events.performance.spec.ts:19`, `apps/api/src/auth/auth-rls.integration.spec.ts:10` |
| REV-2026-006 | done | medium | qa+architecture | `MosaicTelemetryModule` imports `AuthModule`, causing telemetry module tests to fail on unrelated `ENCRYPTION_KEY` auth config requirements. Decouple telemetry module dependencies or provide test-safe module overrides. | `apps/api/src/mosaic-telemetry/mosaic-telemetry.module.ts:36`, `apps/api/src/mosaic-telemetry/mosaic-telemetry.module.spec.ts:1` |
| REV-2026-007 | done | medium | qa | Frontend skip cleanup completed for scoped findings: `TasksWidget`, `CalendarWidget`, and `LinkAutocomplete` coverage now runs with deterministic assertions and no stale `it.skip` markers in those suites. | `apps/web/src/components/widgets/__tests__/TasksWidget.test.tsx:1`, `apps/web/src/components/widgets/__tests__/CalendarWidget.test.tsx:1`, `apps/web/src/components/knowledge/__tests__/LinkAutocomplete.test.tsx:1` |
| REV-2026-008 | done | low | tooling | Repo session bootstrap reliability issue: `scripts/agent/session-start.sh` fails due to a stale branch tracking ref, which can silently block required lifecycle checks. Update the script to tolerate a missing remote branch or self-heal the branch config. | `scripts/agent/session-start.sh:10`, `scripts/agent/session-start.sh:16`, `scripts/agent/session-start.sh:34` |
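The RLS remediation in REV-2026-002 amounts to running every service-boundary query inside the caller's context. A hedged sketch of the shape, with `setContext` as a hypothetical stand-in for whatever the service actually executes (e.g. a `SET LOCAL` statement):

```typescript
// Sketch: wrap a unit of work in the caller's RLS context so
// row-level-security policies evaluate against the right user.
// setContext is a hypothetical stand-in for the real context setter.
export async function withUserContext<T>(
  userId: string,
  setContext: (userId: string) => Promise<void>,
  run: () => Promise<T>
): Promise<T> {
  await setContext(userId); // e.g. SET LOCAL app.current_user_id = ...
  return run();
}
```

Keeping the wrapper at the service boundary is what lets the regression tests assert context usage without touching the query internals.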

View File

@@ -66,7 +66,9 @@
"form-data": ">=2.5.4",
"lodash": ">=4.17.23",
"lodash-es": ">=4.17.23",
"qs": ">=6.14.1",
"request": "npm:@cypress/request@3.0.10",
"qs": ">=6.15.0",
"tough-cookie": ">=4.1.3",
"undici": ">=6.23.0"
}
}

View File

@@ -38,7 +38,7 @@
"orchestration"
],
"engines": {
"node": ">=18"
"node": ">=24.0.0"
},
"os": [
"linux",

View File

@@ -13,7 +13,7 @@
"./eslint/nestjs": "./eslint/nestjs.js",
"./prettier": "./prettier/index.js"
},
"dependencies": {
"devDependencies": {
"@eslint/js": "^9.21.0",
"@typescript-eslint/eslint-plugin": "^8.26.0",
"@typescript-eslint/parser": "^8.26.0",
@@ -22,9 +22,7 @@
"eslint-plugin-prettier": "^5.2.3",
"eslint-plugin-security": "^3.0.1",
"prettier": "^3.5.3",
"typescript": "^5.8.2",
"typescript-eslint": "^8.26.0"
},
"devDependencies": {
"typescript": "^5.8.2"
}
}

pnpm-lock.yaml generated
View File

@@ -9,7 +9,9 @@ overrides:
form-data: '>=2.5.4'
lodash: '>=4.17.23'
lodash-es: '>=4.17.23'
qs: '>=6.14.1'
request: npm:@cypress/request@3.0.10
qs: '>=6.15.0'
tough-cookie: '>=4.1.3'
undici: '>=6.23.0'
importers:
@@ -477,7 +479,7 @@ importers:
packages/cli-tools: {}
packages/config:
dependencies:
devDependencies:
'@eslint/js':
specifier: ^9.21.0
version: 9.39.2
@@ -502,13 +504,12 @@ importers:
prettier:
specifier: ^3.5.3
version: 3.8.1
typescript-eslint:
specifier: ^8.26.0
version: 8.54.0(eslint@9.39.2(jiti@2.6.1))(typescript@5.9.3)
devDependencies:
typescript:
specifier: ^5.8.2
version: 5.9.3
typescript-eslint:
specifier: ^8.26.0
version: 8.54.0(eslint@9.39.2(jiti@2.6.1))(typescript@5.9.3)
packages/shared:
devDependencies:
@@ -891,6 +892,10 @@ packages:
resolution: {integrity: sha512-Vd/9EVDiu6PPJt9yAh6roZP6El1xHrdvIVGjyBsHR0RYwNHgL7FJPyIIW4fANJNG6FtyZfvlRPpFI4ZM/lubvw==}
engines: {node: '>=18'}
'@cypress/request@3.0.10':
resolution: {integrity: sha512-hauBrOdvu08vOsagkZ/Aju5XuiZx6ldsLfByg1htFeldhex+PeMrYauANzFsMJeAA0+dyPLbDoX2OYuvVoLDkQ==}
engines: {node: '>= 6'}
'@discordjs/builders@1.13.1':
resolution: {integrity: sha512-cOU0UDHc3lp/5nKByDxkmRiNZBpdp0kx55aarbiAfakfKJHlxv/yFW1zmIqCAmwH5CRlrH9iMFKJMpvW4DPB+w==}
engines: {node: '>=16.11.0'}
@@ -3315,6 +3320,9 @@ packages:
ajv@8.17.1:
resolution: {integrity: sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==}
ajv@8.18.0:
resolution: {integrity: sha512-PlXPeEWMXMZ7sPYOHqmDyCJzcfNrUr3fGNKtezX14ykXOEIvyK81d+qydx89KY5O71FKMPaQ2vBfBFI5NHR63A==}
another-json@0.2.0:
resolution: {integrity: sha512-/Ndrl68UQLhnCdsAzEXLMFuOR546o2qbYRqCglaNHbjXrwG1ayTcdwr3zkSGOGtGXDyR5X9nCFfnyG2AFJIsqg==}
@@ -4776,15 +4784,6 @@ packages:
hachure-fill@0.5.2:
resolution: {integrity: sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==}
har-schema@2.0.0:
resolution: {integrity: sha512-Oqluz6zhGX8cyRaTQlFMPw80bSJVG2x/cFb8ZPhUILGgHka9SsokCCOQgpveePerqidZOrT14ipqfJb7ILcW5Q==}
engines: {node: '>=4'}
har-validator@5.1.5:
resolution: {integrity: sha512-nmT2T0lljbxdQZfspsno9hgrG3Uir6Ks5afism62poxqBM6sDnMEuPmzTq8XN0OEwqKLLdh1jQI3qyE66Nzb3w==}
engines: {node: '>=6'}
deprecated: this library is no longer supported
has-flag@4.0.0:
resolution: {integrity: sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==}
engines: {node: '>=8'}
@@ -4833,9 +4832,9 @@ packages:
resolution: {integrity: sha512-T1gkAiYYDWYx3V5Bmyu7HcfcvL7mUrTWiM6yOfa3PIphViJ/gFPbvidQ+veqSOHci/PxBcDabeUNCzpOODJZig==}
engines: {node: '>= 14'}
http-signature@1.2.0:
resolution: {integrity: sha512-CAbnr6Rz4CYQkLYUtSNXxQPUH2gK8f3iWexVlsnMeD+GjlsQ0Xsy1cOX+mN3dtxYomRy21CiOzU8Uhw6OwncEQ==}
engines: {node: '>=0.8', npm: '>=1.3.7'}
http-signature@1.4.0:
resolution: {integrity: sha512-G5akfn7eKbpDN+8nPS/cb57YeA1jLTVxjpCj7tmm3QKPdyDy7T+qSC40e9ptydSWvkwjSXw1VbkpyEm39ukeAg==}
engines: {node: '>=0.10'}
https-proxy-agent@7.0.6:
resolution: {integrity: sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==}
@@ -5097,9 +5096,9 @@ packages:
jsonfile@6.2.0:
resolution: {integrity: sha512-FGuPw30AdOIUTRMC2OMRtQV+jkVj2cfPqSeWXv1NEAJ1qZ5zb1X6z1mFhbfOB/iy3ssJCD+3KuZ8r8C3uVFlAg==}
jsprim@1.4.2:
resolution: {integrity: sha512-P2bSOMAc/ciLz6DzgjVlGJP9+BrJWu5UDGK70C2iweC5QBIeFf0ZXRvGjEj2uYgrY2MkAAhsSWHDWlFtEroZWw==}
engines: {node: '>=0.6.0'}
jsprim@2.0.2:
resolution: {integrity: sha512-gqXddjPqQ6G40VdnI6T6yObEC+pDNvyP95wdQhkWkg7crHH3km5qP1FsOXEkzEQwnz6gz5qGTn1c2Y52wP3OyQ==}
engines: {'0': node >=0.6.0}
katex@0.16.28:
resolution: {integrity: sha512-YHzO7721WbmAL6Ov1uzN/l5mY5WWWhJBSW+jq4tkfZfsxmo1hu6frS0EOswvjBUnWE6NtjEs48SFn5CQESRLZg==}
@@ -5538,9 +5537,6 @@ packages:
engines: {node: '>=18'}
hasBin: true
oauth-sign@0.9.0:
resolution: {integrity: sha512-fexhUFFPTGV8ybAtSIGbV6gOkSv8UtRbDBnAyLQw4QPKkgNlsH2ByPGtMUqdWkos6YCRmAqViwgZrJc/mRDzZQ==}
object-assign@4.1.1:
resolution: {integrity: sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==}
engines: {node: '>=0.10.0'}
@@ -5854,9 +5850,6 @@ packages:
proxy-from-env@1.1.0:
resolution: {integrity: sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==}
psl@1.15.0:
resolution: {integrity: sha512-JZd3gMVBAVQkSs6HdNZo9Sdo0LNcQeMNP3CozBJb3JYC/QUYZTnKxP+f8oWRX4rHP5EurWxqAHTSwUCjlNKa1w==}
pump@3.0.3:
resolution: {integrity: sha512-todwxLMY7/heScKmntwQG8CXVkWUOdYxIvY2s0VWAAMh/nd8SoYiRaKjlr7+iCs984f2P8zvrfWcDDYVb73NfA==}
@@ -5867,8 +5860,8 @@ packages:
pure-rand@6.1.0:
resolution: {integrity: sha512-bVWawvoZoBYpp6yIoQtQXHZjmz35RSVHnUOTefl8Vcjr8snTPY1wnpSPMWekcFwbxI6gtmT7rSYPFvz71ldiOA==}
qs@6.14.1:
resolution: {integrity: sha512-4EK3+xJl8Ts67nLYNwqw/dsFVnCf+qR7RgXSK9jEEm9unao3njwMDdmsdvoKBKHzxd7tCYz5e5M+SnMjdtXGQQ==}
qs@6.15.0:
resolution: {integrity: sha512-mAZTtNCeetKMH+pSjrb76NAM8V9a05I9aBZOHztWy/UqcJdQYNsf59vrRKWnojAT9Y+GbIvoTBC++CPHqpDBhQ==}
engines: {node: '>=0.6'}
randombytes@2.1.0:
@@ -6015,11 +6008,6 @@ packages:
peerDependencies:
request: ^2.34
request@2.88.2:
resolution: {integrity: sha512-MsvtOrfG9ZcrOwAW+Qi+F6HbD0CWXEh9ou77uOb7FM2WPhwT7smM833PzanhJLsgXjN89Ir6V2PczXNnMpwKhw==}
engines: {node: '>= 6'}
deprecated: request has been deprecated, see https://github.com/request/request/issues/3142
require-directory@2.1.1:
resolution: {integrity: sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==}
engines: {node: '>=0.10.0'}
@@ -6505,10 +6493,6 @@ packages:
resolution: {integrity: sha512-dRXchy+C0IgK8WPC6xvCHFRIWYUbqqdEIKPaKo/AcTUNzwLTK6AH7RjdLWsEZcAN/TBdtfUw3PYEgPr5VPr6ww==}
engines: {node: '>=14.16'}
tough-cookie@2.5.0:
resolution: {integrity: sha512-nlLsUzgm1kfLXSXfRZMc1KLAugd4hqJHDTvc2hDIwS3mZAfMEuMbc03SujMF+GEcpaX/qboeycw6iO8JwVv2+g==}
engines: {node: '>=0.8'}
tough-cookie@5.1.2:
resolution: {integrity: sha512-FVDYdxtnj0G6Qm/DhNPSb8Ju59ULcup3tuJxkFb5K8Bv2pUXILbf0xZWU8PX8Ov19OXljbUyveOFwRMwkXzO+A==}
engines: {node: '>=16'}
@@ -6700,9 +6684,8 @@ packages:
resolution: {integrity: sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A==}
hasBin: true
uuid@3.4.0:
resolution: {integrity: sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==}
deprecated: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
uuid@8.3.2:
resolution: {integrity: sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==}
hasBin: true
uuid@9.0.1:
@@ -7408,7 +7391,7 @@ snapshots:
chalk: 5.6.2
commander: 12.1.0
dotenv: 17.2.4
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@6.19.2(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))(typescript@5.9.3))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
open: 10.2.0
pg: 8.17.2
prettier: 3.8.1
@@ -7557,6 +7540,27 @@ snapshots:
'@csstools/css-tokenizer@3.0.4': {}
'@cypress/request@3.0.10':
dependencies:
aws-sign2: 0.7.0
aws4: 1.13.2
caseless: 0.12.0
combined-stream: 1.0.8
extend: 3.0.2
forever-agent: 0.6.1
form-data: 4.0.5
http-signature: 1.4.0
is-typedarray: 1.0.0
isstream: 0.1.2
json-stringify-safe: 5.0.1
mime-types: 2.1.35
performance-now: 2.1.0
qs: 6.15.0
safe-buffer: 5.2.1
tough-cookie: 5.1.2
tunnel-agent: 0.6.0
uuid: 8.3.2
'@discordjs/builders@1.13.1':
dependencies:
'@discordjs/formatters': 0.6.2
@@ -10243,9 +10247,9 @@ snapshots:
agent-base@7.1.4: {}
ajv-formats@2.1.1(ajv@8.17.1):
ajv-formats@2.1.1(ajv@8.18.0):
optionalDependencies:
ajv: 8.17.1
ajv: 8.18.0
ajv-formats@3.0.1(ajv@8.17.1):
optionalDependencies:
@@ -10255,9 +10259,9 @@ snapshots:
dependencies:
ajv: 6.12.6
ajv-keywords@5.1.0(ajv@8.17.1):
ajv-keywords@5.1.0(ajv@8.18.0):
dependencies:
ajv: 8.17.1
ajv: 8.18.0
fast-deep-equal: 3.1.3
ajv@6.12.6:
@@ -10274,6 +10278,13 @@ snapshots:
json-schema-traverse: 1.0.0
require-from-string: 2.0.2
ajv@8.18.0:
dependencies:
fast-deep-equal: 3.1.3
fast-uri: 3.1.0
json-schema-traverse: 1.0.0
require-from-string: 2.0.2
another-json@0.2.0: {}
ansi-colors@4.1.3: {}
@@ -10410,7 +10421,7 @@ snapshots:
optionalDependencies:
'@prisma/client': 5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
better-sqlite3: 12.6.2
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@6.19.2(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))(typescript@5.9.3))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
next: 16.1.6(@babel/core@7.28.6)(@opentelemetry/api@1.9.0)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)
pg: 8.17.2
prisma: 6.19.2(magicast@0.3.5)(typescript@5.9.3)
@@ -10435,7 +10446,7 @@ snapshots:
optionalDependencies:
'@prisma/client': 6.19.2(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))(typescript@5.9.3)
better-sqlite3: 12.6.2
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
drizzle-orm: 0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@6.19.2(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))(typescript@5.9.3))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
next: 16.1.6(@babel/core@7.28.6)(@opentelemetry/api@1.9.0)(react-dom@19.2.4(react@19.2.4))(react@19.2.4)
pg: 8.17.2
prisma: 6.19.2(magicast@0.3.5)(typescript@5.9.3)
@@ -10506,7 +10517,7 @@ snapshots:
http-errors: 2.0.1
iconv-lite: 0.4.24
on-finished: 2.4.1
qs: 6.14.1
qs: 6.15.0
raw-body: 2.5.3
type-is: 1.6.18
unpipe: 1.0.0
@@ -10521,7 +10532,7 @@ snapshots:
http-errors: 2.0.1
iconv-lite: 0.7.2
on-finished: 2.4.1
qs: 6.14.1
qs: 6.15.0
raw-body: 3.0.2
type-is: 2.0.1
transitivePeerDependencies:
@@ -11229,17 +11240,6 @@ snapshots:
dotenv@17.2.4: {}
drizzle-orm@0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)):
optionalDependencies:
'@opentelemetry/api': 1.9.0
'@prisma/client': 5.22.0(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))
'@types/pg': 8.16.0
better-sqlite3: 12.6.2
kysely: 0.28.10
pg: 8.17.2
postgres: 3.4.8
prisma: 6.19.2(magicast@0.3.5)(typescript@5.9.3)
drizzle-orm@0.41.0(@opentelemetry/api@1.9.0)(@prisma/client@6.19.2(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3))(typescript@5.9.3))(@types/pg@8.16.0)(better-sqlite3@12.6.2)(kysely@0.28.10)(pg@8.17.2)(postgres@3.4.8)(prisma@6.19.2(magicast@0.3.5)(typescript@5.9.3)):
optionalDependencies:
'@opentelemetry/api': 1.9.0
@@ -11250,7 +11250,6 @@ snapshots:
pg: 8.17.2
postgres: 3.4.8
prisma: 6.19.2(magicast@0.3.5)(typescript@5.9.3)
optional: true
dunder-proto@1.0.1:
dependencies:
@@ -11533,7 +11532,7 @@ snapshots:
parseurl: 1.3.3
path-to-regexp: 0.1.12
proxy-addr: 2.0.7
qs: 6.14.1
qs: 6.15.0
range-parser: 1.2.1
safe-buffer: 5.2.1
send: 0.19.2
@@ -11568,7 +11567,7 @@ snapshots:
once: 1.4.0
parseurl: 1.3.3
proxy-addr: 2.0.7
qs: 6.14.1
qs: 6.15.0
range-parser: 1.2.1
router: 2.2.0
send: 1.2.1
@@ -11833,13 +11832,6 @@ snapshots:
hachure-fill@0.5.2: {}
har-schema@2.0.0: {}
har-validator@5.1.5:
dependencies:
ajv: 6.12.6
har-schema: 2.0.0
has-flag@4.0.0: {}
has-symbols@1.1.0: {}
@@ -11897,10 +11889,10 @@ snapshots:
transitivePeerDependencies:
- supports-color
http-signature@1.2.0:
http-signature@1.4.0:
dependencies:
assert-plus: 1.0.0
jsprim: 1.4.2
jsprim: 2.0.2
sshpk: 1.18.0
https-proxy-agent@7.0.6:
@@ -12144,7 +12136,7 @@ snapshots:
optionalDependencies:
graceful-fs: 4.2.11
jsprim@1.4.2:
jsprim@2.0.2:
dependencies:
assert-plus: 1.0.0
extsprintf: 1.3.0
@@ -12344,8 +12336,8 @@ snapshots:
mkdirp: 3.0.1
morgan: 1.10.1
postgres: 3.4.8
request: 2.88.2
request-promise: 4.2.6(request@2.88.2)
request: '@cypress/request@3.0.10'
request-promise: 4.2.6(@cypress/request@3.0.10)
sanitize-html: 2.17.0
transitivePeerDependencies:
- supports-color
@@ -12578,8 +12570,6 @@ snapshots:
pathe: 2.0.3
tinyexec: 1.0.2
oauth-sign@0.9.0: {}
object-assign@4.1.1: {}
object-hash@3.0.0: {}
@@ -12888,10 +12878,6 @@ snapshots:
proxy-from-env@1.1.0: {}
psl@1.15.0:
dependencies:
punycode: 2.3.1
pump@3.0.3:
dependencies:
end-of-stream: 1.4.5
@@ -12901,7 +12887,7 @@ snapshots:
pure-rand@6.1.0: {}
qs@6.14.1:
qs@6.15.0:
dependencies:
side-channel: 1.1.0
@@ -13059,41 +13045,18 @@ snapshots:
regexp-tree@0.1.27: {}
request-promise-core@1.1.4(request@2.88.2):
request-promise-core@1.1.4(@cypress/request@3.0.10):
dependencies:
lodash: 4.17.23
request: 2.88.2
request: '@cypress/request@3.0.10'
request-promise@4.2.6(request@2.88.2):
request-promise@4.2.6(@cypress/request@3.0.10):
dependencies:
bluebird: 3.7.2
request: 2.88.2
request-promise-core: 1.1.4(request@2.88.2)
request: '@cypress/request@3.0.10'
request-promise-core: 1.1.4(@cypress/request@3.0.10)
stealthy-require: 1.1.1
tough-cookie: 2.5.0
request@2.88.2:
dependencies:
aws-sign2: 0.7.0
aws4: 1.13.2
caseless: 0.12.0
combined-stream: 1.0.8
extend: 3.0.2
forever-agent: 0.6.1
form-data: 4.0.5
har-validator: 5.1.5
http-signature: 1.2.0
is-typedarray: 1.0.0
isstream: 0.1.2
json-stringify-safe: 5.0.1
mime-types: 2.1.35
oauth-sign: 0.9.0
performance-now: 2.1.0
qs: 6.14.1
safe-buffer: 5.2.1
tough-cookie: 2.5.0
tunnel-agent: 0.6.0
uuid: 3.4.0
tough-cookie: 5.1.2
require-directory@2.1.1: {}
@@ -13233,9 +13196,9 @@ snapshots:
schema-utils@4.3.3:
dependencies:
'@types/json-schema': 7.0.15
ajv: 8.17.1
ajv-formats: 2.1.1(ajv@8.17.1)
ajv-keywords: 5.1.0(ajv@8.17.1)
ajv: 8.18.0
ajv-formats: 2.1.1(ajv@8.18.0)
ajv-keywords: 5.1.0(ajv@8.18.0)
section-matter@1.0.0:
dependencies:
@@ -13592,7 +13555,7 @@ snapshots:
formidable: 3.5.4
methods: 1.1.2
mime: 2.6.0
qs: 6.14.1
qs: 6.15.0
transitivePeerDependencies:
- supports-color
@@ -13717,11 +13680,6 @@ snapshots:
'@tokenizer/token': 0.3.0
ieee754: 1.2.1
tough-cookie@2.5.0:
dependencies:
psl: 1.15.0
punycode: 2.3.1
tough-cookie@5.1.2:
dependencies:
tldts: 6.1.86
@@ -13903,7 +13861,7 @@ snapshots:
uuid@11.1.0: {}
uuid@3.4.0: {}
uuid@8.3.2: {}
uuid@9.0.1: {}

scripts/agent/common.sh Executable file
View File

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -euo pipefail
repo_root() {
git rev-parse --show-toplevel 2>/dev/null || pwd
}
ensure_repo_root() {
cd "$(repo_root)"
}
has_remote() {
git remote get-url origin >/dev/null 2>&1
}
run_step() {
local label="$1"
shift
echo "[agent-framework] $label"
"$@"
}
load_repo_hooks() {
local hooks_file=".mosaic/repo-hooks.sh"
if [[ -f "$hooks_file" ]]; then
# shellcheck disable=SC1090
source "$hooks_file"
fi
}

scripts/agent/critical.sh Executable file
View File

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=./common.sh
source "$SCRIPT_DIR/common.sh"
ensure_repo_root
load_repo_hooks
if declare -F mosaic_hook_critical >/dev/null 2>&1; then
run_step "Run repo critical hook" mosaic_hook_critical
else
echo "[agent-framework] No repo critical hook configured (.mosaic/repo-hooks.sh)"
echo "[agent-framework] Define mosaic_hook_critical() for project-specific priority scans"
fi

scripts/agent/log-limitation.sh Executable file
View File

@@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -euo pipefail
TITLE="${1:-}"
if [[ -z "$TITLE" ]]; then
echo "Usage: $0 \"Short limitation title\"" >&2
exit 1
fi
FILE="EVOLUTION.md"
if [[ ! -f "$FILE" ]]; then
echo "[agent-framework] $FILE not found. Create project-specific limitations log if needed."
exit 0
fi
if command -v rg >/dev/null 2>&1; then
last_num=$(rg -o "^### L-[0-9]{3}" "$FILE" | sed 's/^### L-//' | sort -n | tail -1 || true)
else
last_num=$(grep -Eo "^### L-[0-9]{3}" "$FILE" | sed 's/^### L-//' | sort -n | tail -1 || true)
fi
if [[ -z "$last_num" ]]; then
next_num="001"
else
next_num=$(printf "%03d" $((10#$last_num + 1)))
fi
entry_id="L-$next_num"
cat <<EOF2
### $entry_id: $TITLE
| Aspect | Details |
|--------|---------|
| **Pain** | TODO |
| **Impact** | TODO |
| **Frequency** | TODO |
| **Current Workaround** | TODO |
| **Proposed Solution** | TODO |
| **Platform Implication** | TODO |
EOF2
echo "[agent-framework] Suggested limitation ID: $entry_id"
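The ID-numbering pipeline can be exercised in isolation. A minimal sketch (the `next_limitation_id` helper is hypothetical, reading a limitations log on stdin rather than from `EVOLUTION.md`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Next zero-padded limitation ID from a log read on stdin.
# Mirrors the grep/sed/sort/tail pipeline in log-limitation.sh;
# `|| true` keeps `set -e -o pipefail` happy when no entry exists yet.
next_limitation_id() {
  local last_num
  last_num=$(grep -Eo '^### L-[0-9]{3}' | sed 's/^### L-//' | sort -n | tail -1 || true)
  if [[ -z "$last_num" ]]; then
    printf 'L-001\n'
  else
    printf 'L-%03d\n' $((10#$last_num + 1))
  fi
}

printf '### L-007: first\n### L-012: second\n' | next_limitation_id   # prints L-013
printf '' | next_limitation_id                                        # prints L-001
```

The `10#` prefix forces base-10 arithmetic so an ID like `008` is not misread as invalid octal.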

20
scripts/agent/session-end.sh Executable file

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=./common.sh
source "$SCRIPT_DIR/common.sh"
ensure_repo_root
load_repo_hooks
if declare -F mosaic_hook_session_end >/dev/null 2>&1; then
run_step "Run repo end hook" mosaic_hook_session_end
else
echo "[agent-framework] No repo end hook configured (.mosaic/repo-hooks.sh)"
fi
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
run_step "Show status" git status --short
run_step "Show diff summary" git diff --stat
fi

50
scripts/agent/session-start.sh Executable file

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=./common.sh
source "$SCRIPT_DIR/common.sh"
ensure_repo_root
load_repo_hooks
if git rev-parse --is-inside-work-tree >/dev/null 2>&1 && has_remote; then
current_branch="$(git rev-parse --abbrev-ref HEAD)"
upstream_ref="$(git rev-parse --abbrev-ref --symbolic-full-name "@{upstream}" 2>/dev/null || true)"
if [[ -n "$upstream_ref" ]] && ! git show-ref --verify --quiet "refs/remotes/$upstream_ref"; then
echo "[agent-framework] Upstream ref '$upstream_ref' is missing; attempting to self-heal branch tracking"
fallback_upstream=""
if git show-ref --verify --quiet "refs/remotes/origin/develop"; then
fallback_upstream="origin/develop"
elif git show-ref --verify --quiet "refs/remotes/origin/main"; then
fallback_upstream="origin/main"
fi
if [[ -n "$fallback_upstream" ]] && [[ "$current_branch" != "HEAD" ]]; then
git branch --set-upstream-to="$fallback_upstream" "$current_branch" >/dev/null
upstream_ref="$fallback_upstream"
echo "[agent-framework] Set upstream for '$current_branch' to '$fallback_upstream'"
else
echo "[agent-framework] No fallback upstream found; skipping pull"
upstream_ref=""
fi
fi
if git diff --quiet && git diff --cached --quiet; then
if [[ -n "$upstream_ref" ]]; then
run_step "Pull latest changes" git pull --rebase
else
echo "[agent-framework] Skip pull: no valid upstream configured"
fi
else
echo "[agent-framework] Skip pull: working tree has local changes"
fi
fi
if declare -F mosaic_hook_session_start >/dev/null 2>&1; then
run_step "Run repo start hook" mosaic_hook_session_start
else
echo "[agent-framework] No repo start hook configured (.mosaic/repo-hooks.sh)"
fi
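The pull gate above hinges on one check: both the unstaged and the staged diff must be empty. A throwaway-repo sketch of that gate (`tree_is_clean` and the temp-repo setup are illustrative, not part of the script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# The same clean-tree test session-start.sh uses before pulling:
# unstaged diff empty AND staged diff empty (untracked files don't count).
tree_is_clean() {
  git diff --quiet && git diff --cached --quiet
}

# Exercise the gate in a temporary repo (demo setup only).
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo base > f.txt
git add f.txt
git commit -qm init

tree_is_clean && echo "clean: would pull"
echo change >> f.txt
tree_is_clean || echo "dirty: would skip pull"
```

`git diff --quiet` exits nonzero as soon as any change exists, so the gate short-circuits without producing output.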