Backup & restore
Configure backup targets, scheduled jobs, retention policies, and restore from backups.
Vardo backs up Docker volumes to remote or local storage. Each volume gets the right backup strategy automatically — tar for file volumes, pg_dump for Postgres. Backups run on scheduled jobs with distributed locking to prevent double-fire across instances.
How it works
Storage targets
A backup target defines where archives are stored. Configure them per-organization or at the system level as a shared default.
Supported types
| Type | Description |
|---|---|
| `s3` | AWS S3 or any S3-compatible endpoint |
| `r2` | Cloudflare R2 (S3-compatible) |
| `b2` | Backblaze B2 (S3-compatible) |
| `ssh` | Remote server via SSH/SCP |
| `local` | Local filesystem on the Vardo server |
S3, R2, and B2 all use the same S3-compatible API adapter — the only difference is the endpoint URL.
Local targets are fast for restores but don't protect against disk failure. Pair them with a cloud target for real durability. Local targets can be restricted with the ALLOW_LOCAL_BACKUPS environment variable.
S3 / R2 / B2 configuration
| Field | Description |
|---|---|
| `bucket` | Bucket name |
| `region` | Region (e.g. `us-east-1`; `auto` for R2) |
| `accessKeyId` | Access key ID |
| `secretAccessKey` | Secret access key |
| `endpoint` | Custom endpoint URL (required for R2, B2, MinIO) |
| `prefix` | Optional path prefix inside the bucket |
Cloudflare R2:

```json
{
  "bucket": "my-backups",
  "region": "auto",
  "accessKeyId": "abc123",
  "secretAccessKey": "secret",
  "endpoint": "https://<account-id>.r2.cloudflarestorage.com"
}
```

Backblaze B2:

```json
{
  "bucket": "my-backups",
  "region": "us-west-004",
  "accessKeyId": "keyId",
  "secretAccessKey": "applicationKey",
  "endpoint": "https://s3.us-west-004.backblazeb2.com"
}
```

SSH configuration
| Field | Description |
|---|---|
| `host` | Remote hostname |
| `username` | SSH username |
| `path` | Remote directory path |
| `port` | SSH port (default: 22) |
| `privateKey` | PEM-encoded private key (optional — falls back to SSH agent) |
SSH targets support backup and restore but don't support pre-signed download URLs. Downloads stream through the Vardo server instead.
SSH targets don't have redundancy. If the remote server is lost, so are the backups. Use object storage for anything you care about.
System-level vs org-level targets
A target with `organizationId = null` is a system-level target — a global fallback for all organizations.

Resolution order:

1. The org-level default target (`isDefault: true`)
2. Any org-level target
3. A system-level target

If no target exists, backups are skipped silently.
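The resolution order above can be sketched as a small function. This is illustrative only — the field names mirror the docs (`organizationId`, `isDefault`), but `resolve_target` itself is a hypothetical name, not Vardo's actual code:

```python
def resolve_target(targets, org_id):
    """Pick a backup target per the documented resolution order."""
    org_targets = [t for t in targets if t.get("organizationId") == org_id]
    # 1. Org-level default target
    for t in org_targets:
        if t.get("isDefault"):
            return t
    # 2. Any org-level target
    if org_targets:
        return org_targets[0]
    # 3. System-level target (organizationId = null)
    system = [t for t in targets if t.get("organizationId") is None]
    # No target at all -> the backup is skipped
    return system[0] if system else None
```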
System target from config
Set backup credentials in vardo.yml and Vardo auto-creates a system-level target named "System default" on startup:

```yaml
backup:
  type: r2
  bucket: my-backups
  region: auto
  access_key: abc123
  secret_key: secret
  endpoint: https://<account-id>.r2.cloudflarestorage.com
```

See Configuration for the full config reference.
Backup jobs
A job defines the schedule, retention policy, and which apps to back up.
Job fields
| Field | Type | Description |
|---|---|---|
| name | string | Display name |
| schedule | string | Cron expression (e.g. `0 2 * * *`) |
| enabled | boolean | Whether the job runs |
| targetId | string | Storage target to write to |
| keepAll | boolean | Retain all backups forever |
| keepLast | integer | Keep the N most recent backups |
| keepHourly | integer | Keep N hourly backups |
| keepDaily | integer | Keep N daily backups |
| keepWeekly | integer | Keep N weekly backups |
| keepMonthly | integer | Keep N monthly backups |
| keepYearly | integer | Keep N yearly backups |
| notifyOnSuccess | boolean | Send notification on success |
| notifyOnFailure | boolean | Send notification on failure (default: true) |
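To make the retention knobs concrete, here is a simplified pruning sketch covering just two of them, `keepLast` and `keepDaily`; the hourly, weekly, monthly, and yearly buckets follow the same pattern. The function name and input shape are hypothetical:

```python
def prune_candidates(timestamps, keep_last=0, keep_daily=0, keep_all=False):
    """Return the backup timestamps that fall outside the retention policy.

    `timestamps` are ISO-8601 strings, which sort chronologically.
    """
    if keep_all:
        return []
    newest_first = sorted(timestamps, reverse=True)
    keep = set(newest_first[:keep_last])        # keepLast: N most recent
    days_kept = set()
    for ts in newest_first:                     # keepDaily: newest backup per day
        day = ts[:10]                           # "YYYY-MM-DD"
        if day not in days_kept and len(days_kept) < keep_daily:
            days_kept.add(day)
            keep.add(ts)
    return [ts for ts in newest_first if ts not in keep]
```

With `keep_last=1, keep_daily=7`, the newest backup is always retained and at most one backup per day survives for the last seven distinct days — everything else becomes a prune candidate.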
Auto-created jobs
When an app with persistent volumes is deployed and a backup target exists, Vardo automatically creates a daily backup job:
- Schedule: `0 2 * * *` (2 AM)
- Retention: keep last 1, 7 daily, 1 weekly, 1 monthly
- Notifications: failure only
- Name: `Auto: {appName}`
The job is only created once. If the app already has a backup job, nothing happens.
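Using the job fields above, the auto-created job corresponds to a payload roughly like this (illustrative only, not an exact API response; `myapp` is a placeholder app name):

```json
{
  "name": "Auto: myapp",
  "schedule": "0 2 * * *",
  "enabled": true,
  "keepLast": 1,
  "keepDaily": 7,
  "keepWeekly": 1,
  "keepMonthly": 1,
  "notifyOnSuccess": false,
  "notifyOnFailure": true
}
```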
Per-volume backup strategies
Volumes are tagged by type, and the backup strategy adapts:
- Generic volumes (file storage, uploads) — compressed with `tar czf`, producing a `.tar.gz` archive of the volume contents.
- PostgreSQL volumes — backed up with `pg_dump` for a consistent, importable dump. This avoids snapshotting live database files, which can produce corrupt backups.
The restore path handles both formats automatically — tar archives extract directly, database dumps pipe through psql.
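The dispatch can be pictured as a small lookup. The command strings are a sketch of the documented strategies, not Vardo's exact invocations, and `mount_path`/`db_name` are assumed parameters:

```python
def backup_command(volume_type, mount_path, db_name="postgres"):
    """Pick a backup command by volume type, per the strategies above."""
    if volume_type == "postgres":
        # Consistent SQL dump; restorable later by piping through psql
        return f"pg_dump {db_name}"
    # Generic volume: stream a gzipped tar of the volume contents
    return f"tar czf - -C {mount_path} ."
```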
What gets backed up
Vardo backs up Docker named volumes marked `persistent: true`. For each volume:

1. Find the Docker volume — check `{appName}-blue_{volumeName}` first, then `{appName}-green_{volumeName}`.
2. Run the appropriate backup (tar or pg_dump).
3. Upload the archive to the storage target.
Storage path: `{orgSlug}/{appName}/{volumeName}/{timestamp}.tar.gz`

Example: `acme/postgres/data/2024-01-15T02-00-00-000Z.tar.gz`
The timestamp uses dashes instead of colons because some storage systems don't allow colons in object keys.
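A sketch of the key construction, reproducing the documented example. The function name and signature are hypothetical, and appending `Z` assumes the timestamp is already UTC:

```python
from datetime import datetime

def archive_key(org_slug, app_name, volume_name, now):
    # Replace colons and the fractional-second dot with dashes:
    # some object stores reject colons in keys.
    ts = now.isoformat(timespec="milliseconds").replace(":", "-").replace(".", "-") + "Z"
    return f"{org_slug}/{app_name}/{volume_name}/{ts}.tar.gz"
```

For instance, `archive_key("acme", "postgres", "data", datetime(2024, 1, 15, 2, 0, 0))` produces the example path shown above.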
Volumes without a corresponding Docker volume (e.g. never deployed) are skipped and recorded as failed.
What doesn't get backed up
- Non-persistent volumes (ephemeral)
- Vardo's own database (see below)
- Application code (comes from git)
- Docker images
Scheduler and distributed locking
The scheduler ticks every 60 seconds. On each tick:
1. Load all enabled backup jobs.
2. Check whether `shouldRunNow(job.schedule, now)` returns true.
3. Acquire a Redis lock: `lock:backup:{jobId}:{minuteTimestamp}` with a 61-second TTL. If the lock is already held, skip — another instance got it.
4. Check whether the job already has a `running` backup. If so, skip.
5. Mark `lastRunAt` to prevent concurrent double-fire.
6. Execute the backup.
Backups fire at most once per minute per job, even across multiple server instances.
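A sketch of the lock key derivation. The key format is taken from the text; the floor-to-the-minute interpretation of `minuteTimestamp` is an assumption:

```python
LOCK_TTL_SECONDS = 61  # one scheduler tick plus a second of slack

def backup_lock_key(job_id, now_epoch):
    minute_ts = int(now_epoch // 60) * 60   # floor the Unix time to the minute
    return f"lock:backup:{job_id}:{minute_ts}"

# With a Redis client, acquisition would look roughly like:
#   acquired = redis.set(backup_lock_key(job_id, time.time()), "1",
#                        nx=True, ex=LOCK_TTL_SECONDS)
#   if not acquired: skip -- another instance holds the lock
```

Because every instance derives the same key for the same minute, Redis `SET NX` lets exactly one of them win, and the TTL guarantees the lock cannot outlive the window it protects.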
Restore
To restore a backup:
- Download the archive from the storage target.
- Find the Docker volume for the app (blue slot first, then green; create blue if neither exists).
- For tar archives: clear the volume and extract.
- For database dumps: pipe through `psql` for restoration.
Restoring replaces the entire volume contents. The app doesn't restart automatically — restart it manually after restore to pick up the new data.
Download URLs
For S3-compatible targets, Vardo generates a pre-signed URL (1-hour expiry) for direct download. For SSH targets, downloads stream through the Vardo server.
Vardo's own database
Auto-backup of Vardo's own PostgreSQL database is planned but not yet implemented. For now, back up the Postgres volume (/var/lib/postgresql/data) using a standard backup job, or use your hosting provider's database backup feature.
Backup history
Every backup attempt creates a record in the backup table:
| Status | Meaning |
|---|---|
| pending | Not yet started |
| running | In progress |
| success | Completed, archive uploaded |
| failed | Error during archive or upload |
| pruned | Aged out of the retention window and deleted |
Each record stores: job ID, app ID, target ID, volume name, archive size, storage path, timestamped log, and start/finish times.
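The lifecycle implied by the table can be expressed as a small transition map. The exact transitions are inferred from the status descriptions, not documented:

```python
# Inferred status lifecycle: pending -> running -> success | failed,
# and successful backups can later be pruned by retention.
TRANSITIONS = {
    "pending": {"running"},
    "running": {"success", "failed"},
    "success": {"pruned"},
    "failed": set(),
    "pruned": set(),
}

def can_transition(frm, to):
    return to in TRANSITIONS.get(frm, set())
```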
Notifications
Backup jobs send notifications based on their settings:
- On failure (default): includes the failure count, total count, and error messages per volume.
- On success (optional): includes total count and combined archive size.
Notifications go through the org's configured channels — email, webhook, or Slack.
Monitoring backup health
Check backup health in Settings > Backups or the top-level Backups page:
- View backup history for each job.
- Confirm `lastRunAt` timestamps are current.
- Look for jobs with recent `failed` runs.
- Review per-run logs for error details.
A job that hasn't run when expected may mean the scheduler is stopped, a Redis lock is stuck, or the server restarted mid-run.
API
Targets
GET /api/v1/organizations/{orgId}/backups/targets
POST /api/v1/organizations/{orgId}/backups/targets
GET /api/v1/organizations/{orgId}/backups/targets/{targetId}
PUT /api/v1/organizations/{orgId}/backups/targets/{targetId}
DELETE /api/v1/organizations/{orgId}/backups/targets/{targetId}

Jobs
GET /api/v1/organizations/{orgId}/backups/jobs
POST /api/v1/organizations/{orgId}/backups/jobs
GET /api/v1/organizations/{orgId}/backups/jobs/{jobId}
PUT /api/v1/organizations/{orgId}/backups/jobs/{jobId}
DELETE /api/v1/organizations/{orgId}/backups/jobs/{jobId}

History
GET /api/v1/organizations/{orgId}/backups/history
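For scripting against these endpoints, a small URL helper is handy. The paths are taken verbatim from above; the base URL, auth scheme, and helper name are deployment-specific assumptions:

```python
def backups_url(base, org_id, kind, item_id=None):
    """Build a Vardo backups API URL. kind is 'targets', 'jobs', or 'history'."""
    url = f"{base}/api/v1/organizations/{org_id}/backups/{kind}"
    return f"{url}/{item_id}" if item_id else url

# Pair with any HTTP client, e.g.:
#   GET backups_url("https://vardo.example.com", org_id, "jobs")
# with whatever Authorization header your deployment requires.
```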