# Use cases
**You have a Docker volume that you want to back up, but you can't (or won't) add the docker-backup image to that compose file.**
Create a new `docker-compose.yml` file and reference the volume using `external: true`.
```yaml
version: "3.7"

services:
  backup:
    image: niclaslindstedt/docker-backup:latest
    volumes:
      - nginx-config:/volumes/nginx-config
      - /mnt/data/backups:/backup
      - /mnt/nas/backups:/lts

volumes:
  nginx-config:
    external: true
```
The backup container will then create backups of the `nginx-config` volume according to the cron schedule and place them in the `/mnt/data/backups` folder. The backups will then be copied (and pruned) for long-term storage to `/mnt/nas/backups`.
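Because the volume is marked `external: true`, it must already exist before the backup container starts. If it was created by another compose project, keep in mind that Compose normally prefixes volume names with the project name, so the real name may differ from `nginx-config` (e.g. `myproject_nginx-config`). A quick sketch of checking and, if needed, creating it by hand:

```shell
# List existing volumes to find the exact name Compose gave it.
docker volume ls

# Create the named volume manually if it doesn't exist yet.
docker volume create nginx-config
```

If the prefixed name differs, reference that exact name under `volumes:` in the backup compose file.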
**You have files inside a Docker container that you would like to back up.**
You will first need to create a Docker volume and mount it into that container. In the (stripped-down) example below, we mount a volume called `nginx-config` at `/etc/nginx`.
```yaml
version: "3.7"

services:
  nginx:
    image: nginx:latest
    volumes:
      - nginx-config:/etc/nginx

  backup:
    image: niclaslindstedt/docker-backup:latest
    volumes:
      - nginx-config:/volumes/nginx-config
      - /mnt/data/backups:/backup
      - /mnt/nas/backups:/lts

volumes:
  nginx-config:
```
The backup container will then create backups of the `nginx-config` volume according to the cron schedule and place them in the `/mnt/data/backups` folder. The backups will then be copied (and pruned) for long-term storage to `/mnt/nas/backups`.
**You have files on your host computer that you wish to back up.**
Create a new `docker-compose.yml` file and mount the host folder into the backup container.
```yaml
version: "3.7"

services:
  backup:
    image: niclaslindstedt/docker-backup:latest
    volumes:
      - /home/niclas/Documents:/volumes/my-documents
      - /mnt/data/backups:/backup
      - /mnt/nas/backups:/lts
```
The backup container will then create backups of the `/home/niclas/Documents` folder according to the cron schedule and place them in the `/mnt/data/backups` folder. The backups will then be copied (and pruned) for long-term storage to `/mnt/nas/backups`.
**You want your long-term storage backups to be backed up to an S3 bucket instead of being copied to a local folder.**
Use the `elementar/s3-volume` image and mount a shared volume:
```yaml
version: "3.7"

services:
  s3:
    image: elementar/s3-volume
    command: /data s3://${S3_BUCKET_NAME}
    environment:
      - BACKUP_INTERVAL=${S3_BACKUP_INTERVAL} # in minutes
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    volumes:
      - s3-volume:/data

  backup:
    image: niclaslindstedt/docker-backup:latest
    volumes:
      - sample-app-data:/volumes/sample-app-1
      - /mnt/data/backups:/backup
      - s3-volume:/lts

volumes:
  s3-volume:
  sample-app-data:
```
This will read settings from the `.env` file (make a copy of `.env.example` and change the values).
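A minimal `.env` sketch for the compose file above; the variable names come from the compose file, and every value is a placeholder you must replace with your own:

```
S3_BUCKET_NAME=my-backup-bucket
S3_BACKUP_INTERVAL=60
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
```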
**You want your long-term storage backups to be backed up to an FTP server instead of being copied to a local folder.**
Create a volume using the `curlftpfs` plugin:

```shell
docker plugin install valuya/curlftpfs:next
docker volume create -d valuya/curlftpfs:next -o address=<ip:port> -o credentials=<user:password> ftp-volume
```
Now simply mount it at `/lts` in your backup container, and you're done.
```yaml
version: "3.7"

services:
  backup:
    image: niclaslindstedt/docker-backup:latest
    volumes:
      - /home/niclas/Documents:/volumes/my-documents
      - /mnt/data/backups:/backup
      - ftp-volume:/lts

volumes:
  ftp-volume:
    external: true
```
This will only work if the FTP server's root folders match the folders in the `/volumes` directory.
**You want to be sent notifications when something important happens.**
First, create a Discord or Slack webhook integration. Then add the following to your `docker-compose.yml`:
```yaml
version: "3.7"

services:
  backup:
    image: niclaslindstedt/docker-backup:latest
    environment:
      - SEND_NOTIFICATIONS=true
      - SLACK_WEBHOOK_URL=https://... # your Slack webhook url
      - DISCORD_WEBHOOK_URL=https://... # your Discord webhook url
```
You can send notifications to both Slack and Discord, or just one of them.
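Before wiring a webhook URL into the container, you can verify it works by posting a test message by hand. This sketch assumes Slack's standard incoming-webhook JSON payload and that the URL is exported as `SLACK_WEBHOOK_URL` in your shell:

```shell
# Post a minimal test payload to the webhook; Slack replies "ok" on success.
curl -X POST -H "Content-Type: application/json" \
  -d '{"text": "docker-backup test notification"}' \
  "$SLACK_WEBHOOK_URL"
```

Discord webhooks accept a similar payload with a `content` field instead of `text`.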
**You want to pause other Docker containers during backups to prevent files from being written to the volume while backing up.**
First off, you need to use the `-docker` image tag of docker-backup. Set the `PROJECT_NAME` and `PAUSE_CONTAINERS` variables to match your compose project name (i.e. the folder that contains the `docker-compose.yml` file you are running) and the names of the services you want to pause. You will also need to mount `/var/run/docker.sock` at `/var/run/docker.sock`, as seen in the `volumes` section below.
**CAUTION:** You should only mount the Docker socket if you trust the code you're running. You should inspect this repository before continuing with this.
```yaml
version: "3.7"

services:
  sample-app:
    image: bash
    command: bash -c "while true; do echo \"$$(date)\" > /data/$$(date +\"%s\"); sleep 13; done"
    volumes:
      - ./data:/data # host-mounted folder

  backup:
    image: niclaslindstedt/docker-backup:latest-docker
    environment:
      - PROJECT_NAME=docker-backup # should be set to the parent folder of the compose file (this file)
      - PAUSE_CONTAINERS=sample-app # these containers will be paused
    volumes:
      - ./data:/volumes/sample-app
      - /var/run/docker.sock:/var/run/docker.sock # to be able to pause other containers
```
**You have your own GPG key you would like to use to sign your backups' checksum files.**
- Set the environment variable `SIGNING_KEY` to match your GPG key filename (or use the default of `signing_key.asc`).
- Set the key's passphrase using the environment variable `SIGNING_PASSPHRASE`.
- Mount the signing key at `/gpg/signing_key.asc` (or whatever you used for `SIGNING_KEY`).
```yaml
version: "3.7"

services:
  backup:
    image: niclaslindstedt/docker-backup:latest
    environment:
      - SIGNING_KEY=mykey.asc
      - SIGNING_PASSPHRASE=mys3cret
      - CREATE_CHECKSUMS=true
      - VERIFY_CHECKSUMS=true
      - CREATE_SIGNATURES=true
      - VERIFY_SIGNATURES=true
    volumes:
      - /path/to/key.asc:/gpg/mykey.asc
```
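If your key only exists in your GPG keyring and not yet as an `.asc` file, one way to export it is sketched below. `you@example.com` is a placeholder for your key's ID or email, and `/path/to/key.asc` matches the mount in the compose example above:

```shell
# Export the secret key in ASCII-armored form so the container can import it.
gpg --armor --export-secret-keys you@example.com > /path/to/key.asc

# Restrict permissions: the file contains private key material.
chmod 600 /path/to/key.asc
```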