fix(docker): generic naming (mosaic-*), env-var-only config, no hardcoded values
All checks were successful
ci/woodpecker/push/infra Pipeline was successful

- Renamed all jarvis-* to mosaic-* (generic for any deployment)
- Config files are .json.template with ${VAR} placeholders
- entrypoint.sh renders templates via envsubst at startup
- Ollama is optional: set OLLAMA_BASE_URL to auto-inject provider
- Model is configurable via OPENCLAW_MODEL env var
- No hardcoded IPs, keys, model names, or user preferences
- Updated README with full env var reference
2026-03-01 08:02:31 -06:00
parent 50f0dc6018
commit 89767e26ef
20 changed files with 327 additions and 279 deletions

View File

@@ -1,47 +1,97 @@
# Mosaic Agent Fleet — Setup Guide

## Prerequisites

- Docker Swarm initialized on the target host
- Mosaic Stack running (Postgres, Valkey on the `mosaic-stack_internal` network)

## 1. Configure Environment Variables

Copy and fill in each agent's `.env` file:

```bash
cd docker/openclaw-instances

# Required for each agent:
#   ZAI_API_KEY            — your Z.ai API key (or other LLM provider key)
#   OPENCLAW_GATEWAY_TOKEN — unique bearer token per agent

# Generate unique tokens:
for agent in main projects research operations; do
  echo "${agent}: OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)"
done
```

### Optional: Local Ollama

If you have an Ollama instance, add to any agent's `.env`:

```bash
OLLAMA_BASE_URL=http://your-ollama-host:11434
OLLAMA_MODEL=cogito  # or any model you have pulled
```

The entrypoint script automatically injects the Ollama provider at startup.

### Optional: Override Default Model

```bash
OPENCLAW_MODEL=anthropic/claude-sonnet-4-6
```

## 2. Populate Config Volumes

Each agent needs its `.json.template` file in its config volume:
```bash
# Create config directories and copy templates
for agent in main projects research operations; do
mkdir -p /var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/
cp openclaw-instances/mosaic-${agent}.json.template \
/var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/openclaw.json.template
cp openclaw-instances/entrypoint.sh \
/var/lib/docker/volumes/mosaic-agents_mosaic-${agent}-config/_data/entrypoint.sh
done
```
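At startup, `entrypoint.sh` renders each template with `envsubst`, which substitutes plain `$VAR` / `${VAR}` references from the environment. A minimal sketch of that step using Python's `string.Template` (the one-line template here is a stand-in for the real file, and uses a bare `${VAR}` placeholder):

```python
import json
from string import Template

# Stand-in for openclaw.json.template (the real template is larger)
template = Template('{"agents": {"defaults": {"model": {"primary": "${OPENCLAW_MODEL}"}}}}')

# entrypoint.sh does the equivalent substitution with envsubst at container start
rendered = template.substitute(OPENCLAW_MODEL="zai/glm-5")
config = json.loads(rendered)  # the rendered output is plain JSON
print(config["agents"]["defaults"]["model"]["primary"])
```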
## 3. Deploy
```bash
docker stack deploy -c docker/openclaw-compose.yml mosaic-agents
docker stack services mosaic-agents
```
## 4. First-Time Auth (if needed)
For providers requiring OAuth (e.g., Anthropic):
```bash
docker exec -it $(docker ps -q -f name=mosaic-main) openclaw auth
```
Follow the device-code flow in your browser. Tokens persist in the state volume.
You can also use the Mosaic WebUI terminal (xterm.js) for this.
## 5. Verify
```bash
# Check health
curl http://localhost:18789/health
# Test chat completions endpoint
curl http://localhost:18789/v1/chat/completions \
-H "Authorization: Bearer YOUR_GATEWAY_TOKEN" \
-H "Content-Type: application/json" \
-d '{"model":"openclaw:main","messages":[{"role":"user","content":"hello"}]}'
```
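The same chat-completions call can be made from any OpenAI-compatible client. A sketch that builds the identical request with the standard library (`YOUR_GATEWAY_TOKEN` and the localhost URL are placeholders, as in the curl example):

```python
import json
import urllib.request

# Build the same request the curl example sends
token = "YOUR_GATEWAY_TOKEN"  # the agent's OPENCLAW_GATEWAY_TOKEN
payload = {
    "model": "openclaw:main",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    "http://localhost:18789/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# With the stack running: resp = urllib.request.urlopen(req)
print(req.get_header("Authorization"))
```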
## Environment Variable Reference
| Variable | Required | Description |
| ------------------------ | -------- | ------------------------------------------------- |
| `ZAI_API_KEY` | Yes\* | Z.ai API key (\*or other provider key) |
| `OPENCLAW_GATEWAY_TOKEN` | Yes | Bearer token for this agent (unique per instance) |
| `OPENCLAW_MODEL` | No | Override default model (default: `zai/glm-5`) |
| `OLLAMA_BASE_URL` | No | Ollama endpoint (e.g., `http://10.1.1.42:11434`) |
| `OLLAMA_MODEL` | No | Ollama model name (default: `cogito`) |
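If `openssl` is not available, an equivalent of `openssl rand -hex 32` for generating the per-agent gateway tokens (a sketch using the standard library):

```python
import secrets

# One unique 64-hex-character token per agent, same shape as `openssl rand -hex 32`
for agent in ["main", "projects", "research", "operations"]:
    print(f"{agent}: OPENCLAW_GATEWAY_TOKEN={secrets.token_hex(32)}")
```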

View File

@@ -0,0 +1,53 @@
#!/bin/sh
# Mosaic Agent Fleet — OpenClaw container entrypoint
# Renders config template from env vars, optionally adds Ollama provider, starts gateway
set -e

TEMPLATE="/config/openclaw.json.template"
CONFIG="/tmp/openclaw.json"

if [ ! -f "$TEMPLATE" ]; then
    echo "ERROR: Config template not found at $TEMPLATE"
    echo "Mount your config volume at /config with a .json.template file"
    exit 1
fi

# Validate required env vars
: "${OPENCLAW_GATEWAY_TOKEN:?OPENCLAW_GATEWAY_TOKEN is required (generate: openssl rand -hex 32)}"

# Render template with env var substitution
envsubst < "$TEMPLATE" > "$CONFIG"

# If OLLAMA_BASE_URL is set, inject the Ollama provider into the config
if [ -n "$OLLAMA_BASE_URL" ]; then
    # Use python3 if available, fall back to node
    if command -v python3 >/dev/null 2>&1; then
        python3 -c "
import json
with open('$CONFIG') as f:
    cfg = json.load(f)
cfg.setdefault('models', {})['mode'] = 'merge'
cfg['models'].setdefault('providers', {})['ollama'] = {
    'baseUrl': '$OLLAMA_BASE_URL/v1',
    'api': 'openai-completions',
    'models': [{'id': '${OLLAMA_MODEL:-cogito}', 'name': '${OLLAMA_MODEL:-cogito} (Local)',
                'reasoning': False, 'input': ['text'],
                'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0},
                'contextWindow': 128000, 'maxTokens': 8192}]
}
with open('$CONFIG', 'w') as f:
    json.dump(cfg, f, indent=2)
"
        echo "Ollama provider added: $OLLAMA_BASE_URL (model: ${OLLAMA_MODEL:-cogito})"
    elif command -v node >/dev/null 2>&1; then
        node -e "
const fs = require('fs');
const cfg = JSON.parse(fs.readFileSync('$CONFIG', 'utf8'));
cfg.models = cfg.models || {};
cfg.models.mode = 'merge';
cfg.models.providers = cfg.models.providers || {};
cfg.models.providers.ollama = {
  baseUrl: '$OLLAMA_BASE_URL/v1',
  api: 'openai-completions',
  models: [{id: '${OLLAMA_MODEL:-cogito}', name: '${OLLAMA_MODEL:-cogito} (Local)',
            reasoning: false, input: ['text'],
            cost: {input: 0, output: 0, cacheRead: 0, cacheWrite: 0},
            contextWindow: 128000, maxTokens: 8192}]
};
fs.writeFileSync('$CONFIG', JSON.stringify(cfg, null, 2));
"
        echo "Ollama provider added: $OLLAMA_BASE_URL (model: ${OLLAMA_MODEL:-cogito})"
    else
        echo "WARNING: OLLAMA_BASE_URL set but no python3/node available to inject provider"
    fi
fi

export OPENCLAW_CONFIG_PATH="$CONFIG"
exec openclaw gateway run --bind lan --auth token "$@"
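The injection branch above can be exercised standalone. A sketch with example stand-in values for `$OLLAMA_BASE_URL` and `${OLLAMA_MODEL:-cogito}` (`ollama.example` is hypothetical):

```python
import json

# Rendered config before injection (minimal stand-in)
cfg = {"agents": {"defaults": {"model": {"primary": "zai/glm-5"}}}}
base_url = "http://ollama.example:11434"  # stands in for $OLLAMA_BASE_URL
model = "cogito"                          # stands in for ${OLLAMA_MODEL:-cogito}

# Same merge the entrypoint performs
cfg.setdefault("models", {})["mode"] = "merge"
cfg["models"].setdefault("providers", {})["ollama"] = {
    "baseUrl": f"{base_url}/v1",
    "api": "openai-completions",
    "models": [{
        "id": model,
        "name": f"{model} (Local)",
        "reasoning": False,
        "input": ["text"],
        "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
        "contextWindow": 128000,
        "maxTokens": 8192,
    }],
}
print(json.dumps(cfg, indent=2))
```

Note the provider is appended under `models.providers` without touching the agent defaults, so `zai/glm-5` stays primary unless `OPENCLAW_MODEL` says otherwise.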

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,41 +0,0 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "zai/glm-5" }
    }
  },
  // Z.ai is built in and uses ZAI_API_KEY.
  // Ollama is configured as an optional local reasoning fallback.
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://10.1.1.42:11434/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "cogito",
            "name": "Cogito (Local Reasoning)",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,40 +0,0 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "ollama/cogito" }
    }
  },
  // Operations uses local Ollama Cogito as the primary model.
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://10.1.1.42:11434/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "cogito",
            "name": "Cogito (Local Reasoning)",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,39 +0,0 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "zai/glm-5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://10.1.1.42:11434/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "cogito",
            "name": "Cogito (Local Reasoning)",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

View File

@@ -1,3 +0,0 @@
OPENCLAW_CONFIG_PATH=/config/openclaw.json
ZAI_API_KEY=REPLACE_WITH_ZAI_API_KEY
OPENCLAW_GATEWAY_TOKEN=REPLACE_WITH_UNIQUE_GATEWAY_TOKEN

View File

@@ -1,39 +0,0 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "zai/glm-5" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://10.1.1.42:11434/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "cogito",
            "name": "Cogito (Local Reasoning)",
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: main
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
    }
  }
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: operations
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
    }
  }
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: projects
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
    }
  }
}

View File

@@ -0,0 +1,14 @@
# Mosaic Agent: research
# Fill in all values before deploying.
# Required: LLM provider API key (Z.ai, OpenAI, etc.)
ZAI_API_KEY=
# Required: unique bearer token for this agent instance (generate: openssl rand -hex 32)
OPENCLAW_GATEWAY_TOKEN=
# Optional: override default model (default: zai/glm-5)
# OPENCLAW_MODEL=zai/glm-5
# Optional: Ollama endpoint for local inference (uncomment to enable)
# OLLAMA_BASE_URL=

View File

@@ -0,0 +1,19 @@
{
  "gateway": {
    "mode": "local",
    "port": 18789,
    "bind": "lan",
    "auth": { "mode": "token" },
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/node/workspace",
      "model": { "primary": "${OPENCLAW_MODEL:-zai/glm-5}" }
    }
  }
}