Coolify: Fix managed service start (CoolifyTask failing) #442
Root Cause
Coolify's restart operation (stop then start) combined with its periodic CleanupDocker action causes image pruning between the stop and start phases. When containers are stopped, images become 'unused' and get pruned. The subsequent start phase fails with 'No such image' errors.
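As an illustration of the race (the exact command CleanupDocker runs is an assumption here, and `myapp` stands in for the real service), a prune that fires between the stop and start phases deletes the now-unreferenced image:

```shell
# Hypothetical repro of the stop → prune → start race.
docker compose -p myapp down      # stop phase: containers are removed, image is now unreferenced
docker image prune -af            # CleanupDocker-style prune deletes the unreferenced image
docker compose -p myapp up -d     # start phase fails with "No such image" unless it can re-pull in time
```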
Additionally, large images (400MB+) exceed CoolifyTask's ~40s timeout during pulls, so the start phase fails whenever an image actually has to be downloaded.
Resolution
Established a reliable start procedure:
Verified Lifecycle
Operational Note
Before any Coolify restart/start, always pre-pull images first. This is documented in docs/COOLIFY-DEPLOYMENT.md.
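A pre-pull step of this kind might look like the following sketch (the project name is taken from the comment below; adjust to the service's actual compose project):

```shell
# Pre-pull every image referenced by the compose project so a subsequent
# Coolify start only has to create containers, never download layers
# (keeping it well under CoolifyTask's ~40s budget).
docker compose -p ug0ssok4g44wocok8kws8gg8 pull
```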
jason.woltje referenced this issue 2026-02-22 07:21:50 +00:00
Root Cause Found
The CoolifyTask failure was a red herring. The real issue was:
The service had been started manually with `docker compose up -d` from the service directory, which created a `work_internal` network (from the docker compose project name "work"). The containers ended up attached to two networks: `ug0ssok4g44wocok8kws8gg8` (Coolify's) and `work_internal` (stale). The `traefik.docker.network` label pointed at `work_internal`, which Traefik wasn't connected to → timeout.
Resolution
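A network mismatch like this can be confirmed from the CLI (container names here are placeholders):

```shell
# List the networks the service container is attached to.
docker inspect --format '{{range $net, $_ := .NetworkSettings.Networks}}{{$net}} {{end}}' <service-container>

# List which containers have actually joined the stale network;
# if Traefik is absent here, requests routed via that network will time out.
docker network inspect work_internal --format '{{range .Containers}}{{.Name}} {{end}}'
```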
Removed the stale `work_internal` network and restarted the stack under Coolify's project name with `docker compose -p ug0ssok4g44wocok8kws8gg8 up -d`.
Remaining
Coolify's managed start (CoolifyTask) still needs to be verified — the containers were started via docker CLI, not through Coolify's UI/API. Coolify should be able to manage restarts/redeploys going forward.