fix(docker): generic naming (mosaic-*), env-var-only config, no hardcoded values
ci/woodpecker/push/infra Pipeline was successful

- Renamed all jarvis-* to mosaic-* (generic for any deployment)
- Config files are .json.template with ${VAR} placeholders
- entrypoint.sh renders templates via envsubst at startup
- Ollama is optional: set OLLAMA_BASE_URL to auto-inject provider
- Model is configurable via OPENCLAW_MODEL env var
- No hardcoded IPs, keys, model names, or user preferences
- Updated README with full env var reference
2026-03-01 08:02:31 -06:00
parent 50f0dc6018
commit 89767e26ef
20 changed files with 327 additions and 279 deletions
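The env-var-only rendering these bullets describe boils down to `${VAR}` substitution into a JSON template. A minimal sketch in Python, where `os.path.expandvars` stands in for `envsubst` (the one-line template below is a made-up fragment, not the real file):

```python
import json
import os

# Hypothetical one-line template mirroring the ${VAR} placeholders used by
# the .json.template files in this commit (the real files have more keys).
template = '{"agents": {"defaults": {"model": {"primary": "${OPENCLAW_MODEL}"}}}}'

os.environ["OPENCLAW_MODEL"] = "zai/glm-5"
rendered = os.path.expandvars(template)  # same ${VAR} expansion envsubst performs
config = json.loads(rendered)
print(config["agents"]["defaults"]["model"]["primary"])  # zai/glm-5
```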

View File

@@ -6,10 +6,10 @@ OpenClaw multi-agent deployment for Mosaic Stack using Docker Swarm and Portaine
| Agent | Service | Primary Model | Role |
| ----------------- | ------------------- | --------------- | ---------------------------------- |
-| jarvis-main | `jarvis-main` | `zai/glm-5` | Orchestrator / user-facing gateway |
-| jarvis-projects | `jarvis-projects` | `zai/glm-5` | Development and coding tasks |
-| jarvis-research | `jarvis-research` | `zai/glm-5` | Research and web search |
-| jarvis-operations | `jarvis-operations` | `ollama/cogito` | Monitoring, health checks, alerts |
+| mosaic-main | `mosaic-main` | `zai/glm-5` | Orchestrator / user-facing gateway |
+| mosaic-projects | `mosaic-projects` | `zai/glm-5` | Development and coding tasks |
+| mosaic-research | `mosaic-research` | `zai/glm-5` | Research and web search |
+| mosaic-operations | `mosaic-operations` | `ollama/cogito` | Monitoring, health checks, alerts |
## Prerequisites
@@ -24,10 +24,10 @@ OpenClaw multi-agent deployment for Mosaic Stack using Docker Swarm and Portaine
Set values in:
-- `docker/openclaw-instances/jarvis-main.env`
-- `docker/openclaw-instances/jarvis-projects.env`
-- `docker/openclaw-instances/jarvis-research.env`
-- `docker/openclaw-instances/jarvis-operations.env`
+- `docker/openclaw-instances/mosaic-main.env`
+- `docker/openclaw-instances/mosaic-projects.env`
+- `docker/openclaw-instances/mosaic-research.env`
+- `docker/openclaw-instances/mosaic-operations.env`
Required variables:
@@ -48,17 +48,17 @@ openssl rand -hex 32
From repo root:
```bash
-docker stack deploy -c docker/openclaw-compose.yml jarvis
+docker stack deploy -c docker/openclaw-compose.yml mosaic
```
### 4. Verify service status
```bash
-docker stack services jarvis
-docker service logs jarvis-jarvis-main --tail 100
-docker service logs jarvis-jarvis-projects --tail 100
-docker service logs jarvis-jarvis-research --tail 100
-docker service logs jarvis-jarvis-operations --tail 100
+docker stack services mosaic
+docker service logs mosaic_mosaic-main --tail 100
+docker service logs mosaic_mosaic-projects --tail 100
+docker service logs mosaic_mosaic-research --tail 100
+docker service logs mosaic_mosaic-operations --tail 100
```
### 5. First-time auth (if required)
@@ -66,7 +66,7 @@ docker service logs jarvis-jarvis-operations --tail 100
Exec into a container and run OpenClaw auth device flow:
```bash
-docker exec -it $(docker ps -q -f name=jarvis-jarvis-main) sh
+docker exec -it $(docker ps -q -f name=mosaic_mosaic-main) sh
openclaw auth
```
@@ -76,12 +76,12 @@ You can also complete this in the Mosaic WebUI terminal (xterm.js).
| Command | Description |
| ----------------------------------------------------------- | ---------------------- |
-| `docker stack deploy -c docker/openclaw-compose.yml jarvis` | Deploy/update fleet |
-| `docker stack services jarvis` | List services in stack |
-| `docker service logs jarvis-<service>` | View service logs |
-| `docker service update --force jarvis-<service>` | Restart rolling update |
-| `docker service scale jarvis-<service>=N` | Scale a service |
-| `docker stack rm jarvis` | Remove fleet |
+| `docker stack deploy -c docker/openclaw-compose.yml mosaic` | Deploy/update fleet |
+| `docker stack services mosaic` | List services in stack |
+| `docker service logs mosaic_<service>` | View service logs |
+| `docker service update --force mosaic_<service>` | Restart rolling update |
+| `docker service scale mosaic_<service>=N` | Scale a service |
+| `docker stack rm mosaic` | Remove fleet |
## Notes

View File

@@ -1,14 +1,18 @@
# Mosaic Agent Fleet — OpenClaw Docker Swarm Stack
# Deploy: docker stack deploy -c docker/openclaw-compose.yml mosaic-agents
# All config via env vars — see openclaw-instances/*.env
services:
-jarvis-main:
+mosaic-main:
image: alpine/openclaw:latest
-command: ["gateway", "run", "--bind", "lan", "--auth", "token"]
+command: ["/config/entrypoint.sh"]
env_file:
-- ./openclaw-instances/jarvis-main.env
+- ./openclaw-instances/mosaic-main.env
environment:
-OPENCLAW_CONFIG_PATH: /config/openclaw.json
+OPENCLAW_CONFIG_PATH: /tmp/openclaw.json
volumes:
-- jarvis-main-config:/config/openclaw.json:ro
-- jarvis-main-state:/home/node/.openclaw
+- mosaic-main-config:/config:ro
+- mosaic-main-state:/home/node/.openclaw
networks:
- mosaic-stack_internal
healthcheck:
@@ -29,19 +33,19 @@ services:
reservations:
memory: 512M
labels:
-- com.mosaic.agent=jarvis-main
+- com.mosaic.agent=mosaic-main
- com.mosaic.role=orchestrator
-jarvis-projects:
+mosaic-projects:
image: alpine/openclaw:latest
-command: ["gateway", "run", "--bind", "lan", "--auth", "token"]
+command: ["/config/entrypoint.sh"]
env_file:
-- ./openclaw-instances/jarvis-projects.env
+- ./openclaw-instances/mosaic-projects.env
environment:
-OPENCLAW_CONFIG_PATH: /config/openclaw.json
+OPENCLAW_CONFIG_PATH: /tmp/openclaw.json
volumes:
-- jarvis-projects-config:/config/openclaw.json:ro
-- jarvis-projects-state:/home/node/.openclaw
+- mosaic-projects-config:/config:ro
+- mosaic-projects-state:/home/node/.openclaw
networks:
- mosaic-stack_internal
healthcheck:
@@ -54,7 +58,7 @@ services:
replicas: 1
restart_policy:
condition: on-failure
-delay: 5s
+delay: 10s
max_attempts: 3
resources:
limits:
@@ -62,19 +66,19 @@ services:
reservations:
memory: 1G
labels:
-- com.mosaic.agent=jarvis-projects
-- com.mosaic.role=development
+- com.mosaic.agent=mosaic-projects
+- com.mosaic.role=developer
-jarvis-research:
+mosaic-research:
image: alpine/openclaw:latest
-command: ["gateway", "run", "--bind", "lan", "--auth", "token"]
+command: ["/config/entrypoint.sh"]
env_file:
-- ./openclaw-instances/jarvis-research.env
+- ./openclaw-instances/mosaic-research.env
environment:
-OPENCLAW_CONFIG_PATH: /config/openclaw.json
+OPENCLAW_CONFIG_PATH: /tmp/openclaw.json
volumes:
-- jarvis-research-config:/config/openclaw.json:ro
-- jarvis-research-state:/home/node/.openclaw
+- mosaic-research-config:/config:ro
+- mosaic-research-state:/home/node/.openclaw
networks:
- mosaic-stack_internal
healthcheck:
@@ -87,7 +91,7 @@ services:
replicas: 1
restart_policy:
condition: on-failure
-delay: 5s
+delay: 10s
max_attempts: 3
resources:
limits:
@@ -95,19 +99,19 @@ services:
reservations:
memory: 256M
labels:
-- com.mosaic.agent=jarvis-research
+- com.mosaic.agent=mosaic-research
- com.mosaic.role=research
-jarvis-operations:
+mosaic-operations:
image: alpine/openclaw:latest
-command: ["gateway", "run", "--bind", "lan", "--auth", "token"]
+command: ["/config/entrypoint.sh"]
env_file:
-- ./openclaw-instances/jarvis-operations.env
+- ./openclaw-instances/mosaic-operations.env
environment:
-OPENCLAW_CONFIG_PATH: /config/openclaw.json
+OPENCLAW_CONFIG_PATH: /tmp/openclaw.json
volumes:
-- jarvis-operations-config:/config/openclaw.json:ro
-- jarvis-operations-state:/home/node/.openclaw
+- mosaic-operations-config:/config:ro
+- mosaic-operations-state:/home/node/.openclaw
networks:
- mosaic-stack_internal
healthcheck:
@@ -120,7 +124,7 @@ services:
replicas: 1
restart_policy:
condition: on-failure
-delay: 5s
+delay: 10s
max_attempts: 3
resources:
limits:
@@ -128,7 +132,7 @@ services:
reservations:
memory: 256M
labels:
-- com.mosaic.agent=jarvis-operations
+- com.mosaic.agent=mosaic-operations
- com.mosaic.role=operations
networks:
@@ -136,31 +140,11 @@ networks:
external: true
volumes:
-jarvis-main-config:
-driver: local
-driver_opts:
-type: none
-o: bind
-device: ${PWD}/docker/openclaw-instances/jarvis-main.json
-jarvis-projects-config:
-driver: local
-driver_opts:
-type: none
-o: bind
-device: ${PWD}/docker/openclaw-instances/jarvis-projects.json
-jarvis-research-config:
-driver: local
-driver_opts:
-type: none
-o: bind
-device: ${PWD}/docker/openclaw-instances/jarvis-research.json
-jarvis-operations-config:
-driver: local
-driver_opts:
-type: none
-o: bind
-device: ${PWD}/docker/openclaw-instances/jarvis-operations.json
-jarvis-main-state:
-jarvis-projects-state:
-jarvis-research-state:
-jarvis-operations-state:
+mosaic-main-config:
+mosaic-main-state:
+mosaic-projects-config:
+mosaic-projects-state:
+mosaic-research-config:
+mosaic-research-state:
+mosaic-operations-config:
+mosaic-operations-state:
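A note on the runtime names implied by this compose file: `docker stack deploy` prefixes every service and volume with the stack name plus an underscore, so service `mosaic-main` deployed as stack `mosaic-agents` runs as `mosaic-agents_mosaic-main`. A trivial helper to predict those names:

```python
def stack_resource_name(stack: str, resource: str) -> str:
    # Swarm names every stack service and volume "<stack>_<resource>".
    return f"{stack}_{resource}"

print(stack_resource_name("mosaic-agents", "mosaic-main"))         # mosaic-agents_mosaic-main
print(stack_resource_name("mosaic-agents", "mosaic-main-config"))  # mosaic-agents_mosaic-main-config
```

This is why log and scale commands must target `mosaic_mosaic-main` (stack `mosaic`) rather than a hyphenated name.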

View File

@@ -1,47 +1,97 @@
-# OpenClaw Agent Instance Setup
+# Mosaic Agent Fleet — Setup Guide
-Each service in the OpenClaw fleet reads:
+## Prerequisites
-- A per-agent environment file: `docker/openclaw-instances/<agent>.env`
-- A per-agent JSON5 config: `docker/openclaw-instances/<agent>.json`
+- Docker Swarm initialized on target host
+- Mosaic Stack running (Postgres, Valkey on `mosaic-stack_internal` network)
-## 1. Fill in API keys in `.env` files
+## 1. Configure Environment Variables
-Set `ZAI_API_KEY` in each instance env file:
-- `jarvis-main.env`
-- `jarvis-projects.env`
-- `jarvis-research.env`
-- `jarvis-operations.env`
-## 2. Generate unique gateway tokens per agent
-Generate one token per instance:
+Copy and fill in each agent's `.env` file:
```bash
-openssl rand -hex 32
+cd docker/openclaw-instances
+# Required for each agent:
+# ZAI_API_KEY — Your Z.ai API key (or other LLM provider key)
+# OPENCLAW_GATEWAY_TOKEN — Unique bearer token per agent
+# Generate unique tokens:
+for agent in main projects research operations; do
+echo "OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)"
+done
```
-Set a different `OPENCLAW_GATEWAY_TOKEN` in each `.env` file.
+### Optional: Local Ollama
-## 3. Deploy the Docker Swarm stack
-From repository root:
+If you have an Ollama instance, add to any agent's `.env`:
```bash
-docker stack deploy -c docker/openclaw-compose.yml jarvis
+OLLAMA_BASE_URL=http://your-ollama-host:11434
+OLLAMA_MODEL=cogito # or any model you have pulled
```
-## 4. First-time auth (if needed)
+The entrypoint script will automatically inject the Ollama provider at startup.
-If an instance requires first-time login, exec into the running container and run:
+### Optional: Override Default Model
```bash
-openclaw auth
+OPENCLAW_MODEL=anthropic/claude-sonnet-4-6
```
-This uses OpenClaw's headless OAuth device-code flow.
+## 2. Populate Config Volumes
-## 5. Use Mosaic WebUI terminal for auth
+Each agent needs its `.json.template` file in its config volume:
-You can complete the device-code auth flow from the Mosaic WebUI terminal (xterm.js) attached to the service container.
+```bash
+# Create config directories and copy templates
+for agent in main projects research operations; do
+mkdir -p /var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/
+cp openclaw-instances/mosaic-${agent}.json.template \
+/var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/openclaw.json.template
+cp openclaw-instances/entrypoint.sh \
+/var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/entrypoint.sh
+done
+```
+## 3. Deploy
+```bash
+docker stack deploy -c docker/openclaw-compose.yml mosaic-agents
+docker stack services mosaic-agents
+```
+## 4. First-Time Auth (if needed)
+For providers requiring OAuth (e.g., Anthropic):
+```bash
+docker exec -it $(docker ps -q -f name=mosaic-main) openclaw auth
+```
+Follow the device-code flow in your browser. Tokens persist in the state volume.
+You can also use the Mosaic WebUI terminal (xterm.js) for this.
+## 5. Verify
+```bash
+# Check health
+curl http://localhost:18789/health
+# Test chat completions endpoint
+curl http://localhost:18789/v1/chat/completions \
+-H "Authorization: Bearer YOUR_GATEWAY_TOKEN" \
+-H "Content-Type: application/json" \
+-d '{"model":"openclaw:main","messages":[{"role":"user","content":"hello"}]}'
+```
+## Environment Variable Reference
+| Variable | Required | Description |
+| ------------------------ | -------- | ------------------------------------------------- |
+| `ZAI_API_KEY` | Yes\* | Z.ai API key (\*or other provider key) |
+| `OPENCLAW_GATEWAY_TOKEN` | Yes | Bearer token for this agent (unique per instance) |
+| `OPENCLAW_MODEL` | No | Override default model (default: `zai/glm-5`) |
+| `OLLAMA_BASE_URL` | No | Ollama endpoint (e.g., `http://10.1.1.42:11434`) |
+| `OLLAMA_MODEL` | No | Ollama model name (default: `cogito`) |
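The curl verification above can also be scripted. This sketch only builds the request and inspects it, without sending anything (the token and model id are the placeholders from the docs):

```python
import json
from urllib.request import Request

token = "YOUR_GATEWAY_TOKEN"  # placeholder; use the agent's real token
payload = {
    "model": "openclaw:main",
    "messages": [{"role": "user", "content": "hello"}],
}
req = Request(
    "http://localhost:18789/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
print(req.get_header("Authorization"))  # Bearer YOUR_GATEWAY_TOKEN
```

Pass `req` to `urllib.request.urlopen` (or translate back to curl) once the stack is up.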

View File

@@ -0,0 +1,53 @@
#!/bin/sh
# Mosaic Agent Fleet — OpenClaw container entrypoint
# Renders config template from env vars, optionally adds Ollama provider, starts gateway
set -e
TEMPLATE="/config/openclaw.json.template"
CONFIG="/tmp/openclaw.json"
if [ ! -f "$TEMPLATE" ]; then
echo "ERROR: Config template not found at $TEMPLATE"
echo "Mount your config volume at /config with a .json.template file"
exit 1
fi
# Validate required env vars
: "${OPENCLAW_GATEWAY_TOKEN:?OPENCLAW_GATEWAY_TOKEN is required (generate: openssl rand -hex 32)}"
# envsubst does not expand shell-style ${VAR:-default} forms, so apply
# defaults for optional vars before rendering the template
export OPENCLAW_MODEL="${OPENCLAW_MODEL:-zai/glm-5}"
# Render template with env var substitution
envsubst < "$TEMPLATE" > "$CONFIG"
# If OLLAMA_BASE_URL is set, inject Ollama provider into config
if [ -n "$OLLAMA_BASE_URL" ]; then
# Use python3 if available, fall back to node
if command -v python3 >/dev/null 2>&1; then
python3 -c "
import json, sys
with open('$CONFIG') as f: cfg = json.load(f)
cfg.setdefault('models', {})['mode'] = 'merge'
cfg['models'].setdefault('providers', {})['ollama'] = {
'baseUrl': '$OLLAMA_BASE_URL/v1',
'api': 'openai-completions',
'models': [{'id': '${OLLAMA_MODEL:-cogito}', 'name': '${OLLAMA_MODEL:-cogito} (Local)', 'reasoning': False, 'input': ['text'], 'cost': {'input':0,'output':0,'cacheRead':0,'cacheWrite':0}, 'contextWindow': 128000, 'maxTokens': 8192}]
}
with open('$CONFIG','w') as f: json.dump(cfg, f, indent=2)
"
echo "Ollama provider added: $OLLAMA_BASE_URL (model: ${OLLAMA_MODEL:-cogito})"
elif command -v node >/dev/null 2>&1; then
node -e "
const fs = require('fs');
const cfg = JSON.parse(fs.readFileSync('$CONFIG','utf8'));
cfg.models = cfg.models || {}; cfg.models.mode = 'merge';
cfg.models.providers = cfg.models.providers || {};
cfg.models.providers.ollama = {baseUrl:'$OLLAMA_BASE_URL/v1',api:'openai-completions',models:[{id:'${OLLAMA_MODEL:-cogito}',name:'${OLLAMA_MODEL:-cogito} (Local)',reasoning:false,input:['text'],cost:{input:0,output:0,cacheRead:0,cacheWrite:0},contextWindow:128000,maxTokens:8192}]};
fs.writeFileSync('$CONFIG', JSON.stringify(cfg, null, 2));
"
echo "Ollama provider added: $OLLAMA_BASE_URL (model: ${OLLAMA_MODEL:-cogito})"
else
echo "WARNING: OLLAMA_BASE_URL set but no python3/node available to inject provider"
fi
fi
export OPENCLAW_CONFIG_PATH="$CONFIG"
exec openclaw gateway run --bind lan --auth token "$@"
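The Ollama-injection branch above is a small JSON merge. The same logic as a standalone, testable function (field names copied from the script; the base URL below is a made-up example):

```python
import json

def inject_ollama(cfg: dict, base_url: str, model: str = "cogito") -> dict:
    # Mirrors the entrypoint's python3 branch: merge an Ollama provider
    # into the already-rendered config, leaving other keys untouched.
    cfg.setdefault("models", {})["mode"] = "merge"
    cfg["models"].setdefault("providers", {})["ollama"] = {
        "baseUrl": base_url + "/v1",
        "api": "openai-completions",
        "models": [{
            "id": model,
            "name": model + " (Local)",
            "reasoning": False,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 128000,
            "maxTokens": 8192,
        }],
    }
    return cfg

cfg = inject_ollama({"gateway": {"port": 18789}}, "http://10.0.0.5:11434")
print(json.dumps(cfg["models"]["providers"]["ollama"]["baseUrl"]))
```

Using `setdefault` keeps any providers already present in the rendered template intact.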

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,41 +0,0 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "zai/glm-5" }
}
},
// Z.ai is built in and uses ZAI_API_KEY.
// Ollama is configured for optional local reasoning fallback.
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://10.1.1.42:11434/v1",
"api": "openai-completions",
"models": [
{
"id": "cogito",
"name": "Cogito (Local Reasoning)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
}
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,40 +0,0 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "ollama/cogito" }
}
},
// Operations uses local Ollama Cogito as the primary model.
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://10.1.1.42:11434/v1",
"api": "openai-completions",
"models": [
{
"id": "cogito",
"name": "Cogito (Local Reasoning)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
}
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,39 +0,0 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "zai/glm-5" }
}
},
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://10.1.1.42:11434/v1",
"api": "openai-completions",
"models": [
{
"id": "cogito",
"name": "Cogito (Local Reasoning)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
}
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,39 +0,0 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "zai/glm-5" }
}
},
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://10.1.1.42:11434/v1",
"api": "openai-completions",
"models": [
{
"id": "cogito",
"name": "Cogito (Local Reasoning)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
}
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: main
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
}
}
}
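One caveat about the `${OPENCLAW_MODEL:-zai/glm-5}` placeholder in this template: plain `envsubst` does not expand shell-style `${VAR:-default}` forms (Python's `os.path.expandvars`, used here as a stand-in, behaves the same way), so the entrypoint needs to export the default before rendering:

```python
import os

os.environ.pop("OPENCLAW_MODEL", None)
placeholder = "${OPENCLAW_MODEL:-zai/glm-5}"
# With the variable unset, the ${VAR:-default} form passes through literally:
print(os.path.expandvars(placeholder))  # ${OPENCLAW_MODEL:-zai/glm-5}

# Exporting the default first (as the entrypoint can do) makes plain ${VAR} work:
os.environ["OPENCLAW_MODEL"] = os.environ.get("OPENCLAW_MODEL", "zai/glm-5")
print(os.path.expandvars("${OPENCLAW_MODEL}"))  # zai/glm-5
```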

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: operations
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
}
}
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: projects
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
}
}
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: research
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
"gateway": {
"mode": "local",
"port": 18789,
"bind": "lan",
"auth": { "mode": "token" },
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"agents": {
"defaults": {
"workspace": "/home/node/workspace",
"model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
}
}
}