Deployment
Deploy types, blue-green strategy, health checks, rollbacks, and GitHub integration.
Every app in Vardo deploys as a Docker Compose stack. The pipeline is the same regardless of deploy type — clone or build, start a new container slot, health check, route traffic, tear down the old slot. Zero-downtime by default.
Deploy types
Vardo supports six deploy types:
| Type | What it does |
|---|---|
| `compose` | Uses a `docker-compose.yml` from your repo or pasted inline |
| `dockerfile` | Builds from a Dockerfile in the repo |
| `image` | Pulls a pre-built image from any registry |
| `nixpacks` | Auto-detects the runtime and builds with Nixpacks |
| `static` | Serves static files |
| `railpack` | Builds with Railpack — faster builds, first-class Rails support |
If `deployType` is `compose` but no compose file is found, Vardo auto-detects: it checks for a Dockerfile first, then falls back to Nixpacks.
Railpack
Railpack is a buildpack-based deploy type alongside Nixpacks. It auto-detects your framework from the repo and produces a Docker image without requiring a Dockerfile. Select railpack as the deploy type when creating or editing an app.
Source types
Three ways to get code into Vardo:
Git repo — clone any git URL. For GitHub repos, Vardo authenticates via GitHub App token when available, or falls back to an SSH deploy key.
```
https://github.com/owner/repo.git   # public or GitHub App auth
git@github.com:owner/repo.git       # SSH key auth
```
Docker image — pull a pre-built image directly (e.g. `postgres:16`, `ghcr.io/owner/app:latest`). No build step.
Inline compose — paste a compose file in the UI. Vardo stores it in the database and uses it at deploy time.
Compose decomposition
Multi-service compose files are broken into managed child apps. Each service becomes its own app with independent deployments, logs and scaling — while still sharing the project's network and environment.
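For example, a compose file like this (hypothetical services) would yield three child apps, one per service:

```yaml
services:
  web:
    build: .
  worker:
    build: .
    command: node worker.js   # placeholder command
  redis:
    image: redis:7
```

Each of `web`, `worker`, and `redis` then deploys, logs, and scales independently under the parent project.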
Blue-green deployment
Zero-downtime by default
Every deploy uses blue-green slots. The old slot keeps serving traffic until the new slot passes health checks. If the new slot fails, the old one stays up — no interruption.
The directory layout:

```
.host/projects/{appName}/{envName}/
  blue/
    docker-compose.yml
    .env
  green/
    docker-compose.yml
    .env
  .active-slot   # "blue" or "green"
```

Step by step:

- Identify the active slot (read `.active-slot`). The deploy targets the other slot.
- Write the compose file and `.env` to the new slot directory.
- Start the new slot with `docker compose up -d`.
- Wait for the new slot to be healthy (up to 60 seconds).
- If unhealthy: tear down the new slot, mark the deploy as `failed`. The active slot keeps serving.
- If healthy: Traefik discovers the new containers via labels on the shared Docker network and starts routing traffic.
- Tear down the old slot.
- Write the new slot name to `.active-slot`.
Traefik routes traffic by matching container labels. Both slots share the same `vardo-network`, so Traefik discovers them automatically without a reload.
Deployment lifecycle
| Status | Meaning |
|---|---|
| `queued` | Deployment record created, waiting to start |
| `running` | Deploy in progress |
| `success` | Deployed and healthy |
| `failed` | Build, start, or health check failed |
| `cancelled` | Aborted before completion |
| `rolled_back` | Auto-rollback triggered after a post-deploy crash |
Deploy stages
The pipeline emits stage events in real time over Redis pub/sub — you can follow them in the live log UI:
| Stage | What happens |
|---|---|
| `clone` | Clone or pull the git repo (skipped for image deploys) |
| `build` | Build the image (Dockerfile, Nixpacks, Railpack) or parse the compose file |
| `deploy` | Start the new slot |
| `healthcheck` | Wait for containers to become healthy |
| `routing` | Confirm Traefik has picked up the new containers |
| `cleanup` | Tear down the old slot |
| `done` | Deploy complete |
Health checks
After starting the new slot, Vardo polls containers for up to 60 seconds at a 2-second interval. The new slot is considered healthy when:

- All containers in the compose project are `running` (not `exited`, `dead`, or `restarting`)
- Docker's built-in health check passes (if defined in the compose file)
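If you want Vardo to wait on more than container state, define a standard compose-level health check. A minimal sketch (the image and endpoint are placeholders, and the image must ship `curl`):

```yaml
services:
  web:
    image: myapp:latest   # placeholder image
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]   # hypothetical endpoint
      interval: 5s
      timeout: 3s
      retries: 5
```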
If the check times out:

- Fetch the last 30 lines of container logs
- Tear down the new slot
- Mark the deployment as `failed`
- Leave the previous slot running — no traffic interruption
Rollbacks
Auto-rollback
Enable per app with `autoRollback: true`. After a successful deploy, a background monitor watches the new containers for crashes during a configurable grace period (`rollbackGracePeriod`, default 60 seconds).
If a crash is detected within the grace period:

- Tear down the crashed slot
- Bring back the previous slot
- Update `.active-slot`
- Mark the deployment as `rolled_back`
- Send a notification
The monitor polls every 5 seconds. It won't trigger on transient Docker socket errors — only on confirmed container exits or crash-restart loops.
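The two settings above are the documented knobs; as a sketch, assuming they sit at the top level of the app's configuration (placement is an assumption, key names are from the docs):

```yaml
autoRollback: true         # documented setting
rollbackGracePeriod: 120   # seconds; default is 60
```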
Manual rollback
Trigger a new deploy from any previous successful deployment. The deploy API accepts a `rollbackFromId` parameter — rollbacks use the saved `configSnapshot` and `envSnapshot` from the target deployment.
Deployment triggers
| Trigger | Description |
|---|---|
| `manual` | User clicked deploy in the UI or called the API |
| `webhook` | GitHub push webhook |
| `api` | REST API call |
| `rollback` | Rollback operation |
GitHub integration
Auto-deploy on push
Install the Vardo GitHub App on your account or organization, connect repos in the app settings, and set `autoDeploy: true`. When a push hits the watched branch, Vardo queues a deployment automatically.
Preview environments
When a PR is opened against a branch Vardo is watching:
- Vardo creates a preview environment named `pr-{prNumber}`.
- The entire project is cloned into the new environment (all apps in the project).
- Each app gets a preview subdomain: `{appName}-pr-{prNumber}.{baseDomain}`.
- The PR branch is deployed.
When the PR is closed or merged, the preview environment and all its containers are destroyed. Preview environments also expire after 7 days (configurable) and get cleaned up by a cron job.
Preview environments only work for apps that belong to a project. Standalone apps don't get previews.
Webhook verification
All incoming GitHub webhooks are verified with HMAC-SHA256 signature checking.
Private repos
For GitHub repos, Vardo uses a GitHub App installation token automatically. For non-GitHub repos or as a fallback, attach an SSH deploy key to the app. Private keys are encrypted at rest (AES-256-GCM).
Deploy keys (SSH)
Deploy keys are RSA key pairs stored per-organization. Vardo generates the pair, encrypts the private key (AES-256-GCM), and gives you the public key.
- Create a deploy key in Settings > Deploy Keys.
- Add the public key to your GitHub/GitLab repo's deploy keys.
- Attach the key to your app's source settings.
When cloning, Vardo writes the private key to a temp file, sets `GIT_SSH_COMMAND`, runs the git operation, then immediately deletes the temp file.
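Conceptually, the clone step is equivalent to something like this (the temp path and ssh flags are assumptions, not Vardo's exact invocation):

```
GIT_SSH_COMMAND="ssh -i /tmp/deploy-key-XXXX -o IdentitiesOnly=yes" \
  git clone git@github.com:owner/repo.git
```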
Environment variables
Env vars are encrypted (AES-256-GCM) in the database and decrypted at deploy time. They're written to a `.env` file in the slot directory and loaded via `env_file` in the compose file.
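In compose terms, that wiring looks like this (the service name and image are placeholders):

```yaml
services:
  web:
    image: myapp:latest   # placeholder
    env_file:
      - .env   # the decrypted vars Vardo writes into the slot directory
```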
Variable resolution
Values can contain `${...}` expressions resolved at deploy time:
| Expression | Resolves to |
|---|---|
| `${OTHER_VAR}` | Another env var in the same app |
| `${project.name}` | The app's internal name |
| `${project.domain}` | The app's primary domain |
| `${project.url}` | `https://{domain}` |
| `${project.port}` | The container port |
| `${project.internalHost}` | Docker network hostname |
| `${project.gitBranch}` | The configured git branch |
| `${org.name}` | The organization name |
| `${org.baseDomain}` | The org's base domain |
| `${org.MY_VAR}` | An org-level shared env var |
| `${postgres.DATABASE_URL}` | `DATABASE_URL` from the `postgres` app in the same org |

Cross-app references (`${appName.VAR_KEY}`) are resolved by decrypting the referenced app's env vars at deploy time.
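For example, stored values on the left resolve to concrete values in the slot's `.env` (app name, credentials, and domain are hypothetical):

```
# as stored in the database
DATABASE_URL=${postgres.DATABASE_URL}
PUBLIC_URL=${project.url}

# as written to the slot's .env at deploy time (illustrative values)
DATABASE_URL=postgres://app:secret@postgres:5432/app
PUBLIC_URL=https://myapp.example.com
```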
Org-level env vars
Shared across all apps in the organization. Reference them with ${org.KEY}. Mark as secrets to store them encrypted.
Resource limits
Set CPU and memory limits per app:
- `cpuLimit` — CPU cores (e.g. `0.5`, `1`, `2`). Maps to Docker Compose `deploy.resources.limits.cpus`.
- `memoryLimit` — Memory in MB (e.g. `256`, `512`, `1024`). Maps to `deploy.resources.limits.memory`.
Vardo injects these into the compose file before deploy. If either limit is set, it applies to all services in the compose file.
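A sketch of what the injected section looks like, assuming `cpuLimit: 0.5` and `memoryLimit: 512` (the exact memory-unit rendering is an assumption):

```yaml
services:
  web:   # the same limits are applied to every service in the file
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
```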
If a volume exceeds its configured `maxSizeBytes` limit, the deploy is blocked with an error. A warning fires when usage exceeds `warnAtPercent` (default 80%).
Persistent storage
Named Docker volumes survive deploys. Vardo tracks volumes in the database and mounts them across blue-green slots.
Volume naming convention: `{appName}-{slot}_{volumeName}` (e.g. `myapp-blue_data`). When backing up, Vardo checks the blue volume first, then green.
Volumes are auto-detected from three sources:
- Compose files — named volumes in the `volumes:` section
- Running containers — after a successful deploy, Vardo inspects containers and registers any new named volumes
- `vardo.yml` — volumes declared in the config file are registered before deploy
Host bind mounts (paths starting with /, ./, or ../) aren't allowed by default. The compose validator rejects them unless unsafe compose is explicitly enabled.
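A sketch of what the validator accepts and rejects (service and volume names are placeholders):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - data:/var/lib/postgresql/data        # named volume: allowed and tracked across slots
      # - ./pgdata:/var/lib/postgresql/data  # host bind mount: rejected unless unsafe compose is enabled

volumes:
  data: {}
```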
Port exposure
For HTTP apps, Vardo routes traffic through Traefik — no host port bindings needed. For non-HTTP services (databases, etc.), declare exposed ports:
[{ "internal": 5432, "external": 5432 }]Container port detection priority:
containerPortset on the appEXPOSEinstruction in the image (inspected after build)PORTenv var- Default: 3000
Traefik labels
Vardo injects Traefik labels automatically based on the app's domains. For each domain:
- HTTPS router on the `websecure` entrypoint with TLS
- HTTP-to-HTTPS redirect router on the `web` entrypoint
- Load balancer pointing at the container port
- Certificate resolver (Let's Encrypt by default, with Google Trust Services and ZeroSSL also available)
For `.localhost` domains, TLS uses a self-signed cert (no resolver needed).
All services attach to the `vardo-network` external Docker network so Traefik can discover them.
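As a sketch, the injected labels for a single domain might look like this (router name, domain, port, and exact label keys are assumptions based on standard Traefik conventions):

```yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)
  - traefik.http.routers.myapp.entrypoints=websecure
  - traefik.http.routers.myapp.tls.certresolver=letsencrypt
  - traefik.http.services.myapp.loadbalancer.server.port=3000
networks:
  - vardo-network
```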
Project group deploys
Apps in a project can declare dependencies on each other. When you deploy an entire project, Vardo:
- Builds a dependency graph from explicit `dependsOn` declarations and inferred cross-app `${appName.VAR}` references.
- Topologically sorts apps into tiers.
- Deploys the apps within each tier in parallel, one tier at a time.
- If any app in a tier fails, remaining tiers are aborted.
This ensures databases deploy before the web apps that depend on them.
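For example, assuming hypothetical apps `postgres`, `api`, and `web`, where `api` references `${postgres.DATABASE_URL}` and `web` declares `dependsOn: api`, the tiers come out as:

```
tier 1: postgres   # no dependencies
tier 2: api        # cross-app reference to postgres
tier 3: web        # explicit dependsOn: api
```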
vardo.yml config file
Drop a vardo.yml in your repo root to configure deployment as code. File settings take priority over database settings.
```yaml
project:
  rootDirectory: backend
runtime:
  port: 8080
volumes:
  - name: data
    mountPath: /app/data
env:
  - key: NODE_ENV
    value: production
```

Settings take effect at deploy time. Env vars from the file only apply if the key isn't already set in the app's env vars — they don't override.
See Configuration for the full vardo.yml reference.