Self-Hosted Backups and Restores

Backups of Hudu

Manual Backup of Hudu Postgres:

  1. Log in to the server you want to back up.
  2. Run cd ~/hudu2
  3. Make sure the server is up and running.
  4. Run: sudo docker-compose exec -T db pg_dump -U postgres hudu_production > NAME-OF-DUMP.sql
  5. This file can now be moved to a new server and restored there. Make sure to keep the same .env variables. An example sequence is shown below.
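For example, a minimal backup-and-copy sequence might look like the following sketch; the date-stamped filename, username, and hostname are placeholders for your own environment:

  cd ~/hudu2

  # Create the dump with a date-stamped name
  sudo docker-compose exec -T db pg_dump -U postgres hudu_production > "hudu_$(date +%F).sql"

  # Copy the dump to the new server over SCP (host and path are placeholders)
  scp "hudu_$(date +%F).sql" hudu@new-server.example.com:~/hudu2/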

Auto-Backup Script to Local File Server:

By adding this service to your docker-compose.yml, you will get (by default) a backup every 6 hours, with the first run kicking off 10 minutes after Hudu starts. Dumps are compressed with gzip and retained for 7 days; anything older than the 7-day mark is deleted.
 
Add to the end of docker-compose.yml:
 
  backups:
    image: tiredofit/db-backup
    container_name: backups
    volumes:
      - /home/hudu/hudu2/backups:/backup
    restart: unless-stopped
    links:
      - db
    environment:
      TZ: 'America/Detroit'
      DB_TYPE: 'pgsql'
      DB_HOST: 'db'
      DB_NAME: 'hudu_production'
      DB_USER: 'postgres'
      DB_PASS: 'postgres'
      EXTRA_OPTS: '--schema=public --blobs'
      COMPRESSION: 'GZ'
      ENABLE_CHECKSUM: 'FALSE'
      DB_DUMP_FREQ: '360'
      DB_DUMP_BEGIN: '+10'
      DB_CLEANUP_TIME: '10080'
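
After adding the service, one way to confirm it is working (a quick sketch; exact dump filenames depend on the image's defaults) is to start the container and check that compressed dumps land in the backup directory:

  # Start the backup container alongside the existing stack
  sudo docker-compose up -d backups

  # Follow its logs to confirm the first dump is scheduled
  sudo docker-compose logs -f backups

  # After the first run, compressed dumps should appear here
  ls -lh /home/hudu/hudu2/backups/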

Auto-Backup Script to S3

There are two aspects of Hudu you will need to back up. The first is your database. There are multiple ways to automatically back up Postgres databases. One simple way is to add the following to the end of your docker-compose.yml:

  pgbackups3:
    image: hududocker/postgresql-backup-s3
    restart: unless-stopped
    links:
      - db
    environment:
      SCHEDULE: '@every 6h'
      S3_PREFIX: 'backup'
      S3_ENDPOINT: 'https://s3.us-west-1.wasabisys.com'
      S3_REGION: 'us-west-1'
      S3_ACCESS_KEY_ID: 'XXXX'
      S3_SECRET_ACCESS_KEY: 'XXXXXXXX'
      S3_BUCKET: 'bucketname'
      POSTGRES_DATABASE: 'hudu_production'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'XXXXXXXX'
      POSTGRES_HOST: 'db'
      POSTGRES_EXTRA_OPTS: '--schema=public --blobs'

You will need to manually verify the integrity of these backups.
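
One way to spot-check a backup (a sketch, assuming the dumps are gzip-compressed plain-SQL files and that awscli is configured locally; the object key below is a placeholder) is to download a recent dump, confirm the archive is intact, and load it into a throwaway database:

  # Download a recent dump from the bucket (object key is a placeholder)
  aws s3 cp s3://bucketname/backup/hudu_production_latest.sql.gz .

  # Verify the gzip archive is not corrupt
  gunzip -t hudu_production_latest.sql.gz

  # Load the dump into a scratch database, then drop it
  sudo docker-compose exec db createdb restore_test -U postgres
  gunzip -c hudu_production_latest.sql.gz | sudo docker-compose exec -T db psql -d restore_test -U postgres
  sudo docker-compose exec db dropdb restore_test -U postgres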

The other aspect you need to back up is the object storage for your uploaded files. You can use s3cmd, awscli, and other tools to do this. Here is an example that will back up your files to your local system using awscli:

  aws s3 sync s3://your-bucket-name /home/ubuntu/s3/your-bucket-name/
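
To run this sync on a schedule, one option is a cron entry (a sketch, assuming awscli is installed and credentials are configured for the user running cron):

  # Sync the bucket to local disk every 6 hours and log the output
  0 */6 * * * aws s3 sync s3://your-bucket-name /home/ubuntu/s3/your-bucket-name/ >> /home/ubuntu/s3-sync.log 2>&1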

Restoring a database backup

Once you have a database backup, you can restore your Hudu instance from that backup by following these steps (a consolidated script sketch follows the list):

  1. Make sure you have an up-to-date backup of your documentation before you begin.
  2. Move the .sql database dump file into the ~/hudu2 directory. Typically, the easiest way to move files is via SCP or SFTP.
  3. Run sudo docker-compose down to bring your instance down.
  4. Run sudo docker-compose up -d db
  5. Run the command: sudo docker-compose exec db dropdb hudu_production -U postgres
  6. Run the command: sudo docker-compose exec db createdb hudu_production -U postgres
  7. Run the command: cat NAME-OF-DUMP.sql | sudo docker-compose exec -T db psql -d hudu_production -U postgres
  8. Run sudo docker-compose down
  9. Run sudo docker-compose up -d to get your instance back up and running!
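
The same steps, collected into a single script for reference (a sketch; it assumes the dump file name is passed as the first argument and already sits in ~/hudu2):

  #!/bin/bash
  # Usage: ./restore.sh NAME-OF-DUMP.sql
  set -euo pipefail
  DUMP_FILE="$1"
  cd ~/hudu2

  sudo docker-compose down              # bring the instance down
  sudo docker-compose up -d db          # start only the database container
  sleep 10                              # give Postgres a moment to accept connections
  sudo docker-compose exec db dropdb hudu_production -U postgres
  sudo docker-compose exec db createdb hudu_production -U postgres
  cat "$DUMP_FILE" | sudo docker-compose exec -T db psql -d hudu_production -U postgres
  sudo docker-compose down              # stop the database container
  sudo docker-compose up -d             # bring the full instance back up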
