fix: address review findings — backward compat, ACP safety, result timing, security
- Fix 1: tasks_md_sync only sets MACP fields when columns exist in table headers
- Fix 2: ACP dispatch now escalates instead of falsely completing
- Fix 3: Removed premature collect_result() from dispatch_task()
- Fix 4: Yolo brief staged via temp file (0600) instead of process args
- Fix 5: cleanup_worktree validates path against configured worktree base
@@ -12,8 +12,8 @@ MACP Phase 1 extends `tools/orchestrator-matrix/` without replacing the existing
 
 ## Dispatch Modes
 
 1. `exec`: runs the task's `command` directly inside the task worktree.
-2. `yolo`: launches `mosaic yolo <runtime>` with the task brief content via a PTY wrapper.
+2. `yolo`: launches `mosaic yolo <runtime>` via a PTY wrapper and stages the brief in a temporary file so the brief body is not exposed in process arguments.
-3. `acp`: emits the config payload a caller can hand to an ACP/OpenClaw session spawner.
+3. `acp`: escalates immediately with `ACP dispatch requires OpenClaw integration (Phase 2)` until real ACP/OpenClaw spawning exists.
 
 ## Result Contract
||||||
@@ -28,4 +28,4 @@ MACP writes task result JSON under `.mosaic/orchestrator/results/` by default. R
 
 ## Compatibility
 
-Legacy tasks that omit `dispatch` still behave like the original matrix controller. This keeps existing `tasks.json` workflows functional while allowing orchestrators to opt into MACP incrementally.
+Legacy tasks that omit `dispatch` still behave like the original matrix controller. `tasks_md_sync.py` only injects MACP fields when the corresponding markdown headers exist, which keeps existing `tasks.json` workflows functional while allowing orchestrators to opt into MACP incrementally.
@@ -48,7 +48,7 @@ The current orchestrator-matrix rail can queue shell-based worker tasks, but it
 2. Event schema must recognize `task.gated`, `task.escalated`, and `task.retry.scheduled`, plus a `dispatcher` source.
 3. Dispatcher functions must set up worktrees, build commands, execute tasks, collect results, and clean up worktrees.
 4. Controller `run_single_task()` must route MACP-aware tasks through the dispatcher and emit the correct lifecycle events/status transitions.
-5. `tasks_md_sync.py` must map optional MACP table columns when present and otherwise apply config defaults.
+5. `tasks_md_sync.py` must map optional MACP table columns only when those headers are present in `docs/TASKS.md`; absent MACP headers must not inject MACP fields into legacy tasks.
 6. `bin/mosaic` must route `mosaic macp ...` to a new `bin/mosaic-macp` script.
 
 ## Non-Functional Requirements
@@ -94,5 +94,5 @@ The current orchestrator-matrix rail can queue shell-based worker tasks, but it
 ## Assumptions
 
 1. ASSUMPTION: A single issue can track the full Phase 1 implementation because the user requested one bounded feature delivery rather than separate independent tickets.
-2. ASSUMPTION: For `acp` dispatch, generating the config/payload and returning it as dispatcher output is sufficient for Phase 1 because the brief explicitly says the caller will use it with OpenClaw.
+2. ASSUMPTION: For `acp` dispatch in Phase 1, the controller must escalate the task immediately with a clear reason instead of pretending work ran before OpenClaw integration exists.
 3. ASSUMPTION: `task.gated` should be emitted by the controller as the transition into quality-gate execution, which keeps gate-state ownership in one place alongside the existing gate loop.
@@ -14,4 +14,4 @@ Canonical tracking for active work. Keep this file current.
 | id | status | description | issue | repo | branch | depends_on | blocks | agent | started_at | completed_at | estimate | used | notes |
 |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
-| MACP-PHASE1 | blocked | Implement MACP Phase 1 across orchestrator schemas, dispatcher, controller, CLI, config, and task sync while preserving legacy queue behavior. | #8 | bootstrap | feat/macp-phase1 | | | Jarvis | 2026-03-27T23:00:00Z | 2026-03-27T23:45:00Z | medium | completed | Implementation, verification, review, commit, and push completed. Blocked on PR creation: `~/.config/mosaic/tools/git/pr-create.sh -t 'feat: implement MACP phase 1 core protocol' -b ... -B main -H feat/macp-phase1` failed with `Remote repository required: Specify ID via --repo or execute from a local git repo.` |
+| MACP-PHASE1 | in-progress | Implement MACP Phase 1 across orchestrator schemas, dispatcher, controller, CLI, config, and task sync while preserving legacy queue behavior. | #8 | bootstrap | feat/macp-phase1 | | | Jarvis | 2026-03-27T23:00:00Z | | medium | in-progress | Review-fix pass started 2026-03-28T00:38:01Z to address backward-compatibility, ACP safety, result timing, and worktree/brief security findings on top of the blocked PR-create state. Prior blocker remains: `~/.config/mosaic/tools/git/pr-create.sh -t 'feat: implement MACP phase 1 core protocol' -b ... -B main -H feat/macp-phase1` failed with `Remote repository required: Specify ID via --repo or execute from a local git repo.` |
@@ -56,6 +56,32 @@ Implement MACP Phase 1 in `mosaic-bootstrap` by extending the orchestrator-matri
 - 2026-03-27: Added explicit worker escalation handling via the `MACP_ESCALATE:` stdout marker.
 - 2026-03-27: Committed and pushed branch `feat/macp-phase1` (`7ef49a3`, `fd6274f`).
 - 2026-03-27: Blocked in PR workflow when `~/.config/mosaic/tools/git/pr-create.sh` failed to resolve the remote repository from this worktree.
+- 2026-03-28: Resumed from blocked state for a review-fix pass covering 5 findings in `docs/tasks/MACP-PHASE1-fixes.md`.
+
+## Review Fix Pass
+
+### Scope
+
+1. Restore legacy `tasks_md_sync.py` behavior so rows without MACP headers do not become MACP tasks.
+2. Make ACP dispatch fail-safe via escalation instead of a no-op success path.
+3. Move MACP result writes to the controller after quality gates determine the final task status.
+4. Remove brief text from yolo command arguments by switching to file-based brief handoff.
+5. Restrict worktree cleanup to validated paths under the configured worktree base.
+
+### TDD / Test-First Decision
+
+1. This is a bug-fix and security-hardening pass, so targeted reproducer verification is required.
+2. The repo appears to use focused script-level verification rather than a Python test suite for this surface, so reproducer checks will be command-driven and recorded as evidence.
+
+### Planned Verification Additions
+
+| Finding | Verification |
+|---|---|
+| Legacy task reclassification | Sync `docs/TASKS.md` without MACP headers into `tasks.json` and confirm `dispatch` is absent so the controller stays on `run_shell()` |
+| ACP no-op success | Run the controller/dispatcher with `dispatch=acp` and confirm `status=escalated`, the exit path is non-zero, and `task.escalated` is emitted |
+| Premature result write | Inspect the result JSON after the final controller state only; confirm gate results are present and no dispatcher pre-write remains |
+| Brief exposure | Build the yolo command and confirm the brief body is absent from the command text |
+| Unsafe cleanup | Call cleanup against a path outside the configured base and confirm it is refused |
+
 ## Tests Run
docs/tasks/MACP-PHASE1-fixes.md (new file, 64 lines)
@@ -0,0 +1,64 @@
+# MACP Phase 1 — Review Fixes
+
+**Branch:** `feat/macp-phase1` (amend/fix commits on top of existing)
+**Repo worktree:** `~/src/mosaic-bootstrap-worktrees/macp-phase1`
+
+These are 5 findings from code review and security review. Fix all of them.
+
+---
+## Fix 1 (BLOCKER): Legacy task reclassification in tasks_md_sync.py
+
+**File:** `tools/orchestrator-matrix/controller/tasks_md_sync.py`
+
+**Problem:** `build_task()` now always sets `task["dispatch"]` from `macp_defaults` even when the `docs/TASKS.md` table has no `dispatch` column. The controller's `is_macp_task()` check is simply `"dispatch" in task`, so ALL synced tasks get reclassified as MACP tasks and routed through the dispatcher instead of the legacy `run_shell()` path. This breaks backward compatibility.
+
+**Fix:** Only populate `dispatch`, `type`, `runtime`, and `branch` from MACP defaults when the markdown table row **explicitly contains that column**. If the column is absent from the table header, do NOT inject MACP fields. Check whether the parsed headers include these column names before adding them.
+
+---
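A minimal sketch of the header-gated merge this fix describes (`merge_macp_fields` and its signature are illustrative stand-ins, not the repo's actual helper):

```python
# Hypothetical sketch: inject MACP fields only for columns present in the header.
MACP_COLUMNS = ("type", "dispatch", "runtime", "branch")

def merge_macp_fields(task: dict, row: dict, headers: set, macp_defaults: dict) -> dict:
    """Only inject MACP fields whose column exists in the parsed table header."""
    for col in MACP_COLUMNS:
        if col in headers:
            # Row value wins, then any existing value, then the config default.
            task[col] = row.get(col) or task.get(col) or macp_defaults.get(col, "")
        else:
            # Absent header: strip the field so `"dispatch" in task` stays False
            # and the controller keeps the legacy run_shell() path.
            task.pop(col, None)
    return task

legacy = merge_macp_fields({}, {"id": "T1"}, {"id", "status"}, {"dispatch": "yolo"})
macp = merge_macp_fields({}, {"id": "T2"}, {"id", "dispatch"}, {"dispatch": "yolo"})
print("dispatch" in legacy)  # False: legacy row stays a shell task
print(macp["dispatch"])      # yolo: default applied because the column exists
```

The key property is that the default is never consulted unless the header opts the table into that column.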
+## Fix 2 (BLOCKER): ACP dispatch is a no-op that falsely reports success
+
+**File:** `tools/orchestrator-matrix/dispatcher/macp_dispatcher.py`
+
+**Problem:** For `dispatch == "acp"`, `build_dispatch_command()` returns a `python3 -c ...` command that only prints JSON but doesn't actually spawn an ACP session. The controller then marks the task completed even though no work ran.
+
+**Fix:** For `acp` dispatch, do NOT generate a command that runs and exits 0. Instead, set the task status to `"escalated"` with `escalation_reason = "ACP dispatch requires OpenClaw integration (Phase 2)"` and return exit code 1. This makes it fail-safe — ACP tasks won't silently succeed. The controller should emit a `task.escalated` event for these.
+
+---
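The fail-safe branch can be sketched roughly as follows (field names follow the text above; `now_iso` is a stand-in for the dispatcher's timestamp helper):

```python
import datetime

ACP_REASON = "ACP dispatch requires OpenClaw integration (Phase 2)"

def now_iso() -> str:
    # Stand-in for the dispatcher's timestamp helper (assumed name).
    return datetime.datetime.now(datetime.timezone.utc).isoformat()

def dispatch_acp(task: dict) -> tuple[int, str]:
    """Escalate instead of emitting a no-op command that would exit 0."""
    task["status"] = "escalated"
    task["failed_at"] = now_iso()
    task["escalation_reason"] = ACP_REASON
    task["error"] = ACP_REASON
    # Non-zero exit keeps the controller off the completed path.
    return 1, ACP_REASON

task = {"id": "T1", "dispatch": "acp"}
code, reason = dispatch_acp(task)
print(code, task["status"])  # 1 escalated
```

Because the exit code is non-zero and the status is pre-set, the controller's failure branch sees an already-escalated task rather than a clean exit.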
+## Fix 3 (SHOULD-FIX): Premature result write before quality gates
+
+**File:** `tools/orchestrator-matrix/dispatcher/macp_dispatcher.py`
+
+**Problem:** `dispatch_task()` calls `collect_result(task, exit_code, [], orch_dir)` immediately after the worker exits, before quality gates run. If the controller crashes between this write and the final overwrite, a false "completed" result with empty gate_results sits on disk.
+
+**Fix:** Remove the `collect_result()` call from `dispatch_task()`. Instead, have `dispatch_task()` return `(exit_code, output)` only. The controller in `mosaic_orchestrator.py` should call `collect_result()` AFTER quality gates have run, when the final status is known. Make sure the controller passes gate_results to collect_result.
+
+---
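In sketch form the split looks like this (simplified stand-ins for `dispatch_task()` and `collect_result()`; the real functions carry much more state):

```python
import json
import pathlib
import tempfile

def dispatch_task(task: dict) -> tuple[int, str]:
    # Execute only; no result JSON is written here any more.
    return 0, "worker output"

def collect_result(task: dict, exit_code: int, gate_results: list,
                   results_dir: pathlib.Path) -> pathlib.Path:
    # Single write, done by the controller once the final status is known.
    path = results_dir / f"{task['id']}.json"
    path.write_text(json.dumps({
        "task_id": task["id"],
        "status": task["status"],
        "exit_code": exit_code,
        "gate_results": gate_results,
    }, indent=2), encoding="utf-8")
    return path

results_dir = pathlib.Path(tempfile.mkdtemp())
task = {"id": "T1", "status": "running"}
rc, _ = dispatch_task(task)                    # no JSON on disk yet
gates = [{"gate": "lint", "passed": True}]     # gates run after dispatch
task["status"] = "completed" if rc == 0 and all(g["passed"] for g in gates) else "failed"
result_path = collect_result(task, rc, gates, results_dir)
```

A crash before `collect_result()` now leaves no result file at all, which is safer than a stale "completed" record with empty gate results.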
+## Fix 4 (SECURITY MEDIUM): Brief contents exposed in process arguments
+
+**File:** `tools/orchestrator-matrix/dispatcher/macp_dispatcher.py`
+
+**Problem:** For `yolo` dispatch, `build_dispatch_command()` puts the full task brief text into the shell command as an argument: `mosaic yolo codex "<entire brief>"`. This is visible in `ps` output, shell logs, and crash reports.
+
+**Fix:** Write the brief contents to a temporary file with restrictive permissions (0600), and pass the file path to the worker instead. The command should read: `mosaic yolo <runtime> "$(cat /path/to/brief-TASKID.tmp)"` or better, restructure so the worker script reads from a file path argument. Use the existing `orchestrator-worker.sh` pattern which already reads from a task file. Clean up the temp file after dispatch completes.
+
+---
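A rough sketch of the staging step (`stage_brief` is a hypothetical helper name; the real dispatcher tracks the temp path on the task for later cleanup):

```python
import os
import pathlib
import shlex
import tempfile

def stage_brief(brief_text: str, tmp_dir: pathlib.Path, task_id: str) -> pathlib.Path:
    """Write the brief to a 0600 temp file so it never rides in process args."""
    # mkstemp already creates the file 0600 and returns an owned fd.
    fd, raw = tempfile.mkstemp(prefix=f"brief-{task_id}-", suffix=".tmp", dir=str(tmp_dir))
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        fh.write(brief_text)
    os.chmod(raw, 0o600)  # explicit, matching the fix text
    return pathlib.Path(raw)

tmp_dir = pathlib.Path(tempfile.mkdtemp())
brief_file = stage_brief("secret brief body", tmp_dir, "T1")
# Only the file path rides in the command; the brief body never does.
cmd = f'mosaic yolo codex "$(cat {shlex.quote(str(brief_file))})"'
```

Note that the `"$(cat ...)"` form still expands the brief inside the worker's shell; the fix text's "better" option, passing the file path for the worker to read itself, avoids even that.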
+## Fix 5 (SECURITY MEDIUM): Unrestricted worktree cleanup path
+
+**File:** `tools/orchestrator-matrix/dispatcher/macp_dispatcher.py`
+
+**Problem:** `cleanup_worktree()` trusts `task['worktree']` without validating it belongs to the expected worktree base directory. A tampered task could cause deletion of unrelated worktrees.
+
+**Fix:** Before running `git worktree remove`, validate that the resolved worktree path starts with the configured `worktree_base` (from `config.macp.worktree_base`, with `{repo}` expanded). If the path doesn't match the expected base, log a warning and refuse to clean up. Add a helper `_is_safe_worktree_path(worktree_path, config)` for this validation.
+
+---
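The containment check reduces to a resolve-then-`relative_to` test. A simplified sketch (taking the base directly rather than deriving it from config, as the real helper does):

```python
import pathlib

def is_safe_worktree_path(worktree: pathlib.Path, base: pathlib.Path) -> bool:
    """Refuse cleanup unless the fully resolved path sits under the expected base."""
    try:
        # resolve() collapses `..` and symlinks before the containment test.
        worktree.resolve().relative_to(base.resolve())
        return True
    except ValueError:
        return False

base = pathlib.Path("/home/user/src/bootstrap-worktrees")
print(is_safe_worktree_path(base / "macp-phase1", base))   # True
print(is_safe_worktree_path(pathlib.Path("/etc"), base))   # False
print(is_safe_worktree_path(base / ".." / "other", base))  # False: traversal resolved away
```

Resolving before comparing is the important part; a plain string-prefix check would accept `/home/user/src/bootstrap-worktrees/../other`.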
+## Verification
+
+After fixes:
+
+1. `python3 -c "from tools.orchestrator_matrix.dispatcher import macp_dispatcher"` should not error (fix import path if needed, or just verify `python3 tools/orchestrator-matrix/dispatcher/macp_dispatcher.py` has no syntax errors)
+2. Legacy tasks.json with no `dispatch` field must still work with the controller's `run_shell()` path
+3. ACP dispatch should NOT mark tasks completed — should escalate
+4. No brief text should appear in generated shell commands for yolo dispatch
+5. `cleanup_worktree()` should refuse paths outside the configured base
+
+Commit with: `fix: address review findings — backward compat, ACP safety, result timing, security`
@@ -99,6 +99,8 @@ Controller behavior remains backward compatible:
 
 - Tasks without `dispatch` continue through the legacy shell execution path.
 - Tasks with `dispatch` use the MACP dispatcher and can emit `task.gated` and `task.escalated`.
+- `acp` dispatch is fail-safe in Phase 1: it escalates with `ACP dispatch requires OpenClaw integration (Phase 2)` instead of reporting success.
+- `yolo` dispatch stages the brief in a temporary file so the brief body does not appear in process arguments.
 
 Manual queue operations are exposed through:
@@ -225,6 +225,24 @@ def run_single_task(repo_root: pathlib.Path, orch_dir: pathlib.Path, config: dic
     rc, output, timed_out = run_shell(cmd, repo_root, log_path, timeout_sec)
 
     if rc != 0:
+        if is_macp_task(task) and str(task.get("status") or "") == "escalated":
+            task["failed_at"] = str(task.get("failed_at") or now_iso())
+            emit_event(
+                events_path,
+                "task.escalated",
+                task_id,
+                "escalated",
+                "controller",
+                str(task.get("escalation_reason") or task.get("error") or "Task requires human intervention."),
+            )
+            save_json(tasks_path, {"tasks": task_items})
+            state["running_task_id"] = None
+            state["updated_at"] = now_iso()
+            save_json(state_path, state)
+            macp_dispatcher.collect_result(task, rc, [], orch_dir)
+            if bool(config.get("macp", {}).get("cleanup_worktrees", True)):
+                macp_dispatcher.cleanup_worktree(task, config)
+            return True
         if not task.get("error"):
             task["error"] = f"Worker command timed out after {timeout_sec}s" if timed_out else f"Worker command failed with exit code {rc}"
         if attempt < max_attempts:
@@ -247,9 +265,10 @@ def run_single_task(repo_root: pathlib.Path, orch_dir: pathlib.Path, config: dic
             state["updated_at"] = now_iso()
             save_json(state_path, state)
             if is_macp_task(task):
-                macp_dispatcher.collect_result(task, rc, [], orch_dir)
-                if task["status"] == "failed" and bool(config.get("macp", {}).get("cleanup_worktrees", True)):
-                    macp_dispatcher.cleanup_worktree(task)
+                if task["status"] == "failed":
+                    macp_dispatcher.collect_result(task, rc, [], orch_dir)
+                    if bool(config.get("macp", {}).get("cleanup_worktrees", True)):
+                        macp_dispatcher.cleanup_worktree(task, config)
             else:
                 save_json(
                     results_dir / f"{task_id}.json",
@@ -269,7 +288,7 @@ def run_single_task(repo_root: pathlib.Path, orch_dir: pathlib.Path, config: dic
             save_json(state_path, state)
             macp_dispatcher.collect_result(task, rc, [], orch_dir)
             if bool(config.get("macp", {}).get("cleanup_worktrees", True)):
-                macp_dispatcher.cleanup_worktree(task)
+                macp_dispatcher.cleanup_worktree(task, config)
             return True
 
     task["status"] = "gated"
@@ -332,9 +351,10 @@ def run_single_task(repo_root: pathlib.Path, orch_dir: pathlib.Path, config: dic
         state["updated_at"] = now_iso()
         save_json(state_path, state)
         if is_macp_task(task):
-            macp_dispatcher.collect_result(task, rc, gate_results, orch_dir)
-            if task["status"] in {"completed", "escalated"} and bool(config.get("macp", {}).get("cleanup_worktrees", True)):
-                macp_dispatcher.cleanup_worktree(task)
+            if task["status"] in {"completed", "failed", "escalated"}:
+                macp_dispatcher.collect_result(task, rc, gate_results, orch_dir)
+                if bool(config.get("macp", {}).get("cleanup_worktrees", True)):
+                    macp_dispatcher.cleanup_worktree(task, config)
         else:
             save_json(
                 results_dir / f"{task_id}.json",
@@ -35,9 +35,9 @@ def split_pipe_row(line: str) -> list[str]:
     return [c.strip() for c in row.split("|")]
 
 
-def parse_tasks_markdown(path: pathlib.Path) -> list[dict[str, str]]:
+def parse_tasks_markdown(path: pathlib.Path) -> tuple[set[str], list[dict[str, str]]]:
     if not path.exists():
-        return []
+        return set(), []
     lines = path.read_text(encoding="utf-8").splitlines()
 
     header_idx = -1
@@ -51,7 +51,7 @@ def parse_tasks_markdown(path: pathlib.Path) -> list[dict[str, str]]:
             headers = cells
             break
     if header_idx < 0:
-        return []
+        return set(), []
 
     rows: list[dict[str, str]] = []
     for line in lines[header_idx + 2 :]:
@@ -67,7 +67,7 @@ def parse_tasks_markdown(path: pathlib.Path) -> list[dict[str, str]]:
         if not task_id or task_id.lower() == "id":
             continue
         rows.append(row)
-    return rows
+    return set(headers), rows
 
 
 def map_status(raw: str) -> str:
@@ -93,6 +93,7 @@ def parse_depends(raw: str) -> list[str]:
 
 def build_task(
     row: dict[str, str],
+    headers: set[str],
     existing: dict[str, Any],
     macp_defaults: dict[str, str],
     runtime_default: str,
@@ -114,13 +115,25 @@ def build_task(
     task["description"] = description
     task["status"] = map_status(row.get("status", "pending"))
     task["depends_on"] = depends_on
-    task["type"] = task_type or str(task.get("type") or macp_defaults.get("type") or "coding")
-    task["dispatch"] = dispatch or str(task.get("dispatch") or macp_defaults.get("dispatch") or "")
-    task["runtime"] = runtime or str(task.get("runtime") or macp_defaults.get("runtime") or runtime_default or "codex")
-    task["branch"] = branch or str(task.get("branch") or macp_defaults.get("branch") or "")
     task["issue"] = issue or str(task.get("issue") or "")
     task["command"] = str(task.get("command") or "")
     task["quality_gates"] = task.get("quality_gates") or []
+    if "type" in headers:
+        task["type"] = task_type or str(task.get("type") or macp_defaults.get("type") or "coding")
+    else:
+        task.pop("type", None)
+    if "dispatch" in headers:
+        task["dispatch"] = dispatch or str(task.get("dispatch") or macp_defaults.get("dispatch") or "")
+    else:
+        task.pop("dispatch", None)
+    if "runtime" in headers:
+        task["runtime"] = runtime or str(task.get("runtime") or macp_defaults.get("runtime") or runtime_default or "codex")
+    else:
+        task.pop("runtime", None)
+    if "branch" in headers:
+        task["branch"] = branch or str(task.get("branch") or macp_defaults.get("branch") or "")
+    else:
+        task.pop("branch", None)
     metadata = dict(task.get("metadata") or {})
     metadata.update(
         {
@@ -166,7 +179,7 @@ def main() -> int:
         "branch": "",
     }
 
-    rows = parse_tasks_markdown(docs_path)
+    headers, rows = parse_tasks_markdown(docs_path)
     try:
         source_path = str(docs_path.relative_to(repo))
     except ValueError:
@@ -187,6 +200,7 @@ def main() -> int:
         out_tasks.append(
             build_task(
                 row,
+                headers,
                 existing_by_id.get(task_id, {}),
                 macp_defaults,
                 runtime_default,
@@ -10,6 +10,7 @@ import pathlib
 import re
 import shlex
 import subprocess
+import tempfile
 from typing import Any
 
 
@@ -128,6 +129,20 @@ def _read_brief(task: dict[str, Any], repo_root: pathlib.Path) -> str:
     return brief_path.read_text(encoding="utf-8").strip()
 
 
+def _stage_yolo_brief_file(task: dict[str, Any], repo_root: pathlib.Path, orch_dir: pathlib.Path) -> pathlib.Path:
+    brief_dir = (orch_dir / "tmp").resolve()
+    brief_dir.mkdir(parents=True, exist_ok=True)
+    task_id = _slugify(str(task.get("id") or "task"))
+    fd, raw_path = tempfile.mkstemp(prefix=f"brief-{task_id}-", suffix=".tmp", dir=str(brief_dir), text=True)
+    with os.fdopen(fd, "w", encoding="utf-8") as handle:
+        handle.write(_read_brief(task, repo_root))
+        handle.write("\n")
+    os.chmod(raw_path, 0o600)
+    path = pathlib.Path(raw_path).resolve()
+    task["_brief_temp_path"] = str(path)
+    return path
+
+
 def _resolve_result_path(task: dict[str, Any], orch_dir: pathlib.Path) -> pathlib.Path:
     result_path_raw = str(task.get("result_path") or "").strip()
     if result_path_raw:
@@ -138,6 +153,12 @@ def _resolve_result_path(task: dict[str, Any], orch_dir: pathlib.Path) -> pathli
     return result_path
 
 
+def _resolve_worktree_base(config: dict[str, Any], repo_name: str) -> pathlib.Path:
+    macp_config = dict(config.get("macp") or {})
+    base_template = str(macp_config.get("worktree_base") or "~/src/{repo}-worktrees")
+    return pathlib.Path(os.path.expanduser(base_template.format(repo=repo_name))).resolve()
+
+
 def _changed_files(task: dict[str, Any]) -> list[str]:
     worktree_raw = str(task.get("worktree") or "").strip()
     if not worktree_raw:
|
     return changed
 
 
+def _resolve_repo_root_from_worktree(worktree: pathlib.Path) -> pathlib.Path | None:
+    try:
+        common_dir_raw = _git_capture(["git", "-C", str(worktree), "rev-parse", "--git-common-dir"], worktree)
+    except Exception:
+        return None
+    common_dir = pathlib.Path(common_dir_raw)
+    if not common_dir.is_absolute():
+        common_dir = (worktree / common_dir).resolve()
+    return common_dir.parent if common_dir.name == ".git" else common_dir
+
+
+def _is_safe_worktree_path(worktree_path: pathlib.Path, config: dict[str, Any]) -> bool:
+    repo_root = _resolve_repo_root_from_worktree(worktree_path)
+    if repo_root is None:
+        return False
+    expected_base = _resolve_worktree_base(config, repo_root.name)
+    try:
+        worktree_path.resolve().relative_to(expected_base)
+        return True
+    except ValueError:
+        return False
+
+
 def setup_worktree(task: dict[str, Any], repo_root: pathlib.Path) -> pathlib.Path:
     """Create git worktree for task. Returns worktree path."""
 
@@ -202,26 +246,16 @@ def build_dispatch_command(task: dict[str, Any], repo_root: pathlib.Path) -> str
         return command
 
     if dispatch == "acp":
-        payload = {
-            "task_id": str(task.get("id") or ""),
-            "title": str(task.get("title") or ""),
-            "runtime": runtime,
-            "dispatch": dispatch,
-            "brief_path": str(task.get("brief_path") or ""),
-            "worktree": str(task.get("worktree") or ""),
-            "branch": str(task.get("branch") or ""),
-            "attempt": int(task.get("attempts") or 0),
-            "max_attempts": int(task.get("max_attempts") or 1),
-        }
-        python_code = "import json,sys; print(json.dumps(json.loads(sys.argv[1]), indent=2))"
-        return f"python3 -c {shlex.quote(python_code)} {shlex.quote(json.dumps(payload))}"
+        raise RuntimeError("ACP dispatch requires OpenClaw integration (Phase 2)")
 
     if dispatch == "yolo":
-        brief = _read_brief(task, repo_root)
+        brief_file = pathlib.Path(str(task.get("_brief_temp_path") or "")).resolve()
+        if not str(task.get("_brief_temp_path") or "").strip():
+            raise ValueError("MACP yolo dispatch requires a staged brief file")
         inner = (
             'export PATH="$HOME/.config/mosaic/bin:$PATH"; '
             f"cd {shlex.quote(str(worktree))}; "
-            f"mosaic yolo {shlex.quote(runtime)} {shlex.quote(brief)}"
+            f'mosaic yolo {shlex.quote(runtime)} "$(cat {shlex.quote(str(brief_file))})"'
         )
         return f"script -qec {shlex.quote(inner)} /dev/null"
 
@@ -280,7 +314,7 @@ def collect_result(task: dict[str, Any], exit_code: int, gate_results: list[dict
     return result
 
 
-def cleanup_worktree(task: dict[str, Any]) -> None:
+def cleanup_worktree(task: dict[str, Any], config: dict[str, Any]) -> None:
     """Remove git worktree after task is done."""
 
     worktree_raw = str(task.get("worktree") or "").strip()
@@ -291,12 +325,12 @@ def cleanup_worktree(task: dict[str, Any]) -> None:
     if not worktree.exists():
         return
 
-    common_dir_raw = _git_capture(["git", "-C", str(worktree), "rev-parse", "--git-common-dir"], worktree)
-    common_dir = pathlib.Path(common_dir_raw)
-    if not common_dir.is_absolute():
-        common_dir = (worktree / common_dir).resolve()
-    repo_root = common_dir.parent if common_dir.name == ".git" else common_dir
-    if repo_root == worktree:
+    repo_root = _resolve_repo_root_from_worktree(worktree)
+    if repo_root is None or repo_root == worktree:
+        return
+
+    if not _is_safe_worktree_path(worktree, config):
+        print(f"[macp_dispatcher] refusing to clean unsafe worktree path: {worktree}", flush=True)
         return
 
     subprocess.run(
@@ -316,7 +350,7 @@ def cleanup_worktree(task: dict[str, Any]) -> None:
 
 
 def dispatch_task(task: dict[str, Any], repo_root: pathlib.Path, orch_dir: pathlib.Path, config: dict[str, Any]) -> tuple[int, str]:
-    """Full dispatch lifecycle: setup -> execute -> collect -> cleanup. Returns (exit_code, output)."""
+    """Full dispatch lifecycle: setup -> execute. Returns (exit_code, output)."""
 
     macp_config = dict(config.get("macp") or {})
     worker_config = dict(config.get("worker") or {})
@@ -336,22 +370,32 @@ def dispatch_task(task: dict[str, Any], repo_root: pathlib.Path, orch_dir: pathl
         result_dir = result_dir[len(".mosaic/orchestrator/") :]
     task["result_path"] = f"{result_dir.rstrip('/')}/{task.get('id', 'task')}.json"
 
+    if task["dispatch"] == "acp":
+        task["status"] = "escalated"
+        task["failed_at"] = now_iso()
+        task["escalation_reason"] = "ACP dispatch requires OpenClaw integration (Phase 2)"
+        task["error"] = task["escalation_reason"]
+        task["_timed_out"] = False
+        return 1, task["escalation_reason"]
+
     worktree = setup_worktree(task, repo_root)
     log_path = orch_dir / "logs" / f"{task.get('id', 'task')}.log"
     timeout_sec = int(task.get("timeout_seconds") or worker_config.get("timeout_seconds") or 7200)
-    command = build_dispatch_command(task, repo_root)
-    exit_code, output, timed_out = _run_command(command, worktree, log_path, timeout_sec)
-    task["_timed_out"] = timed_out
-    if timed_out:
-        task["error"] = f"Worker command timed out after {timeout_sec}s"
-    elif exit_code != 0 and not task.get("error"):
-        task["error"] = f"Worker command failed with exit code {exit_code}"
-    collect_result(task, exit_code, [], orch_dir)
-    attempts = int(task.get("attempts") or 0)
-    max_attempts = int(task.get("max_attempts") or 1)
-    if exit_code != 0 and attempts >= max_attempts and bool(macp_config.get("cleanup_worktrees", True)):
-        cleanup_worktree(task)
-    return exit_code, output
+    if task["dispatch"] == "yolo":
+        _stage_yolo_brief_file(task, repo_root, orch_dir)
+    try:
+        command = build_dispatch_command(task, repo_root)
+        exit_code, output, timed_out = _run_command(command, worktree, log_path, timeout_sec)
+        task["_timed_out"] = timed_out
+        if timed_out:
+            task["error"] = f"Worker command timed out after {timeout_sec}s"
+        elif exit_code != 0 and not task.get("error"):
+            task["error"] = f"Worker command failed with exit code {exit_code}"
+        return exit_code, output
+    finally:
+        brief_temp_path = str(task.pop("_brief_temp_path", "") or "").strip()
+        if brief_temp_path:
+            try:
+                pathlib.Path(brief_temp_path).unlink(missing_ok=True)
+            except OSError:
+                pass