Vardo

Deployment

Deploy types, blue-green strategy, health checks, rollbacks, and GitHub integration.

Every app in Vardo deploys as a Docker Compose stack. The pipeline is the same regardless of deploy type — clone or build, start a new container slot, health check, route traffic, tear down the old slot. Zero-downtime by default.

Deploy types

Vardo supports six deploy types:

Type        What it does
compose     Uses a docker-compose.yml from your repo or pasted inline
dockerfile  Builds from a Dockerfile in the repo
image       Pulls a pre-built image from any registry
nixpacks    Auto-detects the runtime and builds with Nixpacks
static      Serves static files
railpack    Builds with Railpack — faster builds, first-class Rails support

If deployType is compose but no compose file is found, Vardo auto-detects: checks for a Dockerfile first, then falls back to Nixpacks.

Railpack

Railpack is a buildpack-based deploy type alongside Nixpacks. It auto-detects your framework from the repo and produces a Docker image without requiring a Dockerfile. Select railpack as the deploy type when creating or editing an app.

Source types

Three ways to get code into Vardo:

Git repo — clone any git URL. For GitHub repos, Vardo authenticates via GitHub App token when available, or falls back to an SSH deploy key.

https://github.com/owner/repo.git   # public or GitHub App auth
git@github.com:owner/repo.git       # SSH key auth

Docker image — pull a pre-built image directly (e.g. postgres:16, ghcr.io/owner/app:latest). No build step.

Inline compose — paste a compose file in the UI. Vardo stores it in the database and uses it at deploy time.

Compose decomposition

Multi-service compose files are broken into managed child apps. Each service becomes its own app with independent deployments, logs, and scaling — while still sharing the project's network and environment.
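For example, a two-service compose file like the following (service names hypothetical) would be decomposed into two child apps, one per service:

```yaml
# Hypothetical multi-service compose file. Vardo creates one managed
# child app per service ("web" and "worker"), each with its own
# deployments and logs, sharing the project network and environment.
services:
  web:
    build: .
    ports:
      - "3000"
  worker:
    build: .
    command: node worker.js
```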

Blue-green deployment

Zero-downtime by default

Every deploy uses blue-green slots. The old slot keeps serving traffic until the new slot passes health checks. If the new slot fails, the old one stays up — no interruption.

The directory layout:

.host/projects/{appName}/{envName}/
  blue/
    docker-compose.yml
    .env
  green/
    docker-compose.yml
    .env
  .active-slot      # "blue" or "green"

Step by step:

  1. Identify the active slot (read .active-slot). The deploy targets the other slot.
  2. Write the compose file and .env to the new slot directory.
  3. Start the new slot with docker compose up -d.
  4. Wait for the new slot to be healthy (up to 60 seconds).
  5. If unhealthy: tear down the new slot, mark the deploy as failed. The active slot keeps serving.
  6. If healthy: Traefik discovers the new containers via labels on the shared Docker network and starts routing traffic.
  7. Tear down the old slot.
  8. Write the new slot name to .active-slot.

Traefik routes traffic by matching container labels. Both slots share the same vardo-network, so Traefik discovers them automatically without a reload.
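The slot-flip logic above can be sketched in shell. This is a simplified illustration, not Vardo's implementation; the file layout follows the tree shown earlier, and the docker steps are shown as comments:

```shell
#!/bin/sh
# Pick the deploy target: whichever slot is not currently active.
next_slot() {
  if [ "$1" = "blue" ]; then echo green; else echo blue; fi
}

# Illustrative flow (see the numbered steps above):
#   active=$(cat .active-slot)
#   target=$(next_slot "$active")
#   docker compose --project-directory "$target" up -d
#   ...health check the new slot; on failure, tear it down and stop here...
#   docker compose --project-directory "$active" down
#   echo "$target" > .active-slot

next_slot blue    # prints "green"
```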

Deployment lifecycle

Status       Meaning
queued       Deployment record created, waiting to start
running      Deploy in progress
success      Deployed and healthy
failed       Build, start, or health check failed
cancelled    Aborted before completion
rolled_back  Auto-rollback triggered after a post-deploy crash

Deploy stages

The pipeline emits stage events in real time over Redis pub/sub — you can follow them in the live log UI:

Stage        What happens
clone        Clone or pull git repo (skipped for image deploys)
build        Build image (Dockerfile, Nixpacks, Railpack) or parse compose file
deploy       Start the new slot
healthcheck  Wait for containers to become healthy
routing      Confirm Traefik has picked up the new containers
cleanup      Tear down the old slot
done         Deploy complete

Health checks

After starting the new slot, Vardo polls containers for up to 60 seconds at a 2-second interval. The slot is considered healthy when:

  • All containers in the compose project are running (not exited, dead, or restarting)
  • Docker's built-in health check passes (if defined in the compose file)

If the check times out:

  1. Fetch the last 30 lines of container logs
  2. Tear down the new slot
  3. Mark the deployment as failed
  4. Leave the previous slot running — no traffic interruption
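The timeout behavior can be sketched as a polling loop. Here `is_healthy` is a stand-in for Vardo's actual container inspection (running-state check plus Docker health status); the 60-second budget and 2-second interval come from the docs:

```shell
# Poll until healthy, for up to 60 s at a 2 s interval.
# Returns 0 on healthy, 1 on timeout (the caller tears down the slot).
wait_healthy() {
  deadline=$(( $(date +%s) + 60 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    if is_healthy; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```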

Rollbacks

Auto-rollback

Enable per app with autoRollback: true. After a successful deploy, a background monitor watches the new containers for crashes during a configurable grace period (rollbackGracePeriod, default 60 seconds).

If a crash is detected within the grace period:

  1. Tear down the crashed slot
  2. Bring back the previous slot
  3. Update .active-slot
  4. Mark the deployment as rolled_back
  5. Send a notification

The monitor polls every 5 seconds. It won't trigger on transient Docker socket errors — only on confirmed container exits or crash-restart loops.

Manual rollback

Trigger a new deploy from any previous successful deployment. The deploy API accepts a rollbackFromId parameter — rollbacks use the saved configSnapshot and envSnapshot from the target deployment.

Deployment triggers

Trigger   Description
manual    User clicked deploy in the UI or called the API
webhook   GitHub push webhook
api       REST API call
rollback  Rollback operation

GitHub integration

Auto-deploy on push

Install the Vardo GitHub App on your account or organization, connect repos in the app settings, and set autoDeploy: true. When a push hits the watched branch, Vardo queues a deployment automatically.

Preview environments

When a PR is opened against a branch Vardo is watching:

  1. Vardo creates a preview environment named pr-{prNumber}.
  2. The entire project is cloned into the new environment (all apps in the project).
  3. Each app gets a preview subdomain: {appName}-pr-{prNumber}.{baseDomain}.
  4. The PR branch is deployed.

When the PR is closed or merged, the preview environment and all its containers are destroyed. Preview environments also expire after 7 days (configurable) and get cleaned up by a cron job.

Preview environments only work for apps that belong to a project. Standalone apps don't get previews.

Webhook verification

All incoming GitHub webhooks are verified with HMAC-SHA256 signature checking.
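GitHub sends the signature in the X-Hub-Signature-256 header as `sha256=<hex digest>` of the raw request body. A minimal check with openssl looks like this (a sketch, not Vardo's code; production implementations should also use a constant-time comparison):

```shell
# Verify a GitHub webhook payload against the shared webhook secret.
# $1 = secret, $2 = raw request body, $3 = X-Hub-Signature-256 header value
verify_signature() {
  expected="sha256=$(printf '%s' "$2" | openssl dgst -sha256 -hmac "$1" | awk '{print $NF}')"
  [ "$expected" = "$3" ]    # NOTE: not constant-time; illustration only
}
```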

Private repos

For GitHub repos, Vardo uses a GitHub App installation token automatically. For non-GitHub repos or as a fallback, attach an SSH deploy key to the app. Private keys are encrypted at rest (AES-256-GCM).

Deploy keys (SSH)

Deploy keys are RSA key pairs stored per-organization. Vardo generates the pair, encrypts the private key (AES-256-GCM), and gives you the public key.

  1. Create a deploy key in Settings > Deploy Keys.
  2. Add the public key to your GitHub/GitLab repo's deploy keys.
  3. Attach the key to your app's source settings.

When cloning, Vardo writes the private key to a temp file, sets GIT_SSH_COMMAND, runs the git operation, then immediately deletes the temp file.
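That flow looks roughly like this in shell (a sketch, not Vardo's code; the SSH option flags are illustrative):

```shell
# Clone a repo with a one-shot SSH deploy key.
# $1 = repo URL, $2 = destination dir, $3 = private key material
clone_with_key() {
  keyfile=$(mktemp) || return 1
  printf '%s\n' "$3" > "$keyfile"
  chmod 600 "$keyfile"
  GIT_SSH_COMMAND="ssh -i $keyfile -o IdentitiesOnly=yes" \
    git clone "$1" "$2"
  status=$?
  rm -f "$keyfile"    # delete the key as soon as git finishes
  return $status
}
```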

Environment variables

Env vars are encrypted (AES-256-GCM) in the database and decrypted at deploy time. They're written to a .env file in the slot directory and loaded via env_file in the compose file.

Variable resolution

Values can contain ${...} expressions resolved at deploy time:

Expression                  Resolves to
${OTHER_VAR}                Another env var in the same app
${project.name}             The app's internal name
${project.domain}           The app's primary domain
${project.url}              https://{domain}
${project.port}             The container port
${project.internalHost}     Docker network hostname
${project.gitBranch}        The configured git branch
${org.name}                 The organization name
${org.baseDomain}           The org's base domain
${org.MY_VAR}               An org-level shared env var
${postgres.DATABASE_URL}    DATABASE_URL from the postgres app in the same org

Cross-app references (${appName.VAR_KEY}) are resolved by decrypting the referenced app's env vars at deploy time.

Org-level env vars

Shared across all apps in the organization. Reference them with ${org.KEY}. Mark as secrets to store them encrypted.

Resource limits

Set CPU and memory limits per app:

  • cpuLimit — CPU cores (e.g. 0.5, 1, 2). Maps to Docker Compose deploy.resources.limits.cpus.
  • memoryLimit — Memory in MB (e.g. 256, 512, 1024). Maps to deploy.resources.limits.memory.

Vardo injects these into the compose file before deploy. If either limit is set, it applies to all services in the compose file.
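For example, with cpuLimit: 0.5 and memoryLimit: 512, the injected compose fragment would look roughly like this (service name and image hypothetical):

```yaml
services:
  web:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
```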

If a volume exceeds its configured maxSizeBytes limit, the deploy is blocked with an error. A warning fires when usage exceeds warnAtPercent (default 80%).

Persistent storage

Named Docker volumes survive deploys. Vardo tracks volumes in the database and mounts them across blue-green slots.

Volume naming convention: {appName}-{slot}_{volumeName} (e.g. myapp-blue_data). When backing up, Vardo checks the blue volume first, then green.

Volumes are auto-detected from three sources:

  • Compose files — named volumes in the volumes: section
  • Running containers — after a successful deploy, Vardo inspects containers and registers any new named volumes
  • vardo.yml — volumes declared in the config file are registered before deploy

Host bind mounts (paths starting with /, ./, or ../) aren't allowed by default. The compose validator rejects them unless unsafe compose is explicitly enabled.
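A compose fragment illustrating what the validator accepts and rejects (image and paths are examples):

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - data:/var/lib/postgresql/data    # named volume — allowed, survives deploys
      # - ./pgdata:/var/lib/postgresql/data   # host bind mount — rejected unless unsafe compose is enabled

volumes:
  data:
```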

Port exposure

For HTTP apps, Vardo routes traffic through Traefik — no host port bindings needed. For non-HTTP services (databases, etc.), declare exposed ports:

[{ "internal": 5432, "external": 5432 }]

Container port detection priority:

  1. containerPort set on the app
  2. EXPOSE instruction in the image (inspected after build)
  3. PORT env var
  4. Default: 3000
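The priority order amounts to a first-non-empty pick (a simplification; the EXPOSE and PORT values here are stand-ins for what Vardo reads from the image and the env):

```shell
# $1 = containerPort setting, $2 = EXPOSE from the image, $3 = PORT env var.
# Returns the first value that is set, else the 3000 default.
detect_port() {
  for p in "$1" "$2" "$3"; do
    if [ -n "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  echo 3000
}

detect_port "" "" ""        # prints "3000"
detect_port "" 8080 5000    # prints "8080"
```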

Traefik labels

Vardo injects Traefik labels automatically based on the app's domains. For each domain:

  • HTTPS router on the websecure entrypoint with TLS
  • HTTP-to-HTTPS redirect router on the web entrypoint
  • Load balancer pointing at the container port
  • Certificate resolver (Let's Encrypt by default, with Google Trust Services and ZeroSSL also available)

For .localhost domains, TLS uses a self-signed cert (no resolver needed).

All services attach to the vardo-network external Docker network so Traefik can discover them.
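Put together, the injected configuration looks roughly like this for a hypothetical app on app.example.com with container port 3000 (router/service names and the resolver name are assumptions, and the HTTP-to-HTTPS redirect router is omitted for brevity):

```yaml
services:
  web:
    labels:
      - "traefik.enable=true"
      # HTTPS router with TLS on the websecure entrypoint
      - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls.certresolver=letsencrypt"
      # Load balancer pointing at the container port
      - "traefik.http.services.myapp.loadbalancer.server.port=3000"
    networks:
      - vardo-network

networks:
  vardo-network:
    external: true
```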

Project group deploys

Apps in a project can declare dependencies on each other. When you deploy an entire project, Vardo:

  1. Builds a dependency graph from explicit dependsOn declarations and inferred cross-app ${appName.VAR} references.
  2. Topologically sorts apps into tiers.
  3. Deploys each tier in parallel.
  4. If any app in a tier fails, remaining tiers are aborted.

This ensures databases deploy before the web apps that depend on them.
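As an illustration (app names and the exact config shape are hypothetical), a project with a database, an API, and a frontend would sort into three tiers, each deployed only after the previous tier succeeds:

```yaml
# postgres → tier 1, api → tier 2, web → tier 3
apps:
  - name: postgres
  - name: api
    dependsOn: [postgres]
  - name: web
    dependsOn: [api]
```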

vardo.yml config file

Drop a vardo.yml in your repo root to configure deployment as code. File settings take priority over database settings.

project:
  rootDirectory: backend

runtime:
  port: 8080

volumes:
  - name: data
    mountPath: /app/data

env:
  - key: NODE_ENV
    value: production

Settings take effect at deploy time. Env vars from the file only apply if the key isn't already set in the app's env vars — they don't override.

See Configuration for the full vardo.yml reference.
