Full SQL and File backup of all sites on v4 - How I'm doing it using cron via script


#1

Hi all,

I just wanted to post a script I wrote to back up the individual site files and SQL databases from v4, in case it helps anybody.

Background/Notes/Considerations:

  • I wanted all the site files in one big archive, along with each individual site’s database exported to a .sql file. I find this to be the best way to keep a restorable copy of the full site, for emergencies.

  • On my system, I put the backups in /var/Backups, then use azcopy to upload them to an Azure Blob storage.

  • I keep the clock on my server set to GMT, but I want the filenames to show the local time at which the backup was made. You can adjust this by changing the timezone defined near the top of the script.

  • I put this script into a file in /usr/local/sbin/ called Backup.sh, did a chmod +x on it, and run it from cron as root (see the example crontab entry after this list).

  • If you want to send it to Azure, update the last line with your own info. If you want to do something else with the backups, adjust/remove accordingly. Also, the removal of these backups from Azure is not handled here, so you’ll need to address that yourself (we have another script that purges backups older than x days).

  • There is no error handling in here; this script presumes everything runs correctly :grin:.
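For reference, here is the kind of crontab entry I mean (the schedule and log path are just examples; adjust to taste):

# In root's crontab (crontab -e as root): run the backup every night at 01:30
30 1 * * * /usr/local/sbin/Backup.sh >> /var/log/site-backup.log 2>&1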

I hope it helps someone.

David.

Script:

#!/bin/bash

USER="root"
PASSWORD=`cat /opt/easyengine/services/docker-compose.yml | grep MYSQL_ROOT_PASSWORD | awk -F'=' '{print $2}'`
OUTPUT="/var/Backups"
DATETIME=`TZ=":America/Edmonton" date +%Y%m%d-%H%M`
DOCKERDatabaseID=`docker ps | grep -e 'services_global-db' | cut -c1-12;`

# List all databases in the global DB container
databases=$(docker exec $DOCKERDatabaseID bash -c "mysql -h localhost --user=$USER --password=$PASSWORD -e 'show databases;'" | tr -d "| " | grep -v Database)

for db in $databases; do
    # Skip MySQL's own schemas and any database whose name starts with an underscore
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
        # Uncomment the next line if you want to know which DB the script is on.
        # echo "Dumping database: $db"
        sudo docker exec $DOCKERDatabaseID bash -c "/usr/bin/mysqldump -u $USER -p$PASSWORD --databases $db" > $OUTPUT/$db.sql
    fi
done

# Bundle the individual .sql dumps into one archive, then remove the loose dumps
tar -jcf $OUTPUT/DBs-$DATETIME.tar.bz2 $OUTPUT/*.sql

rm -f $OUTPUT/*.sql

# Archive all site files, following symlinks
tar -jcf $OUTPUT/siteFiles-$DATETIME.tar.bz2 /opt/easyengine/sites/* --dereference

# Upload everything in /var/Backups to Azure Blob storage (fill in your own container URL and key)
azcopy --source /var/Backups --destination [YOUR BLOB URL HERE] --dest-key [YOUR BLOB KEY HERE] --quiet
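This script doesn’t prune old archives on its own. If you also want to clear out old local copies, a one-liner like this works (the 14-day retention is just an example; the Azure-side purge still needs its own handling, as noted above):

# Remove local backup archives older than 14 days
find /var/Backups -name '*.tar.bz2' -mtime +14 -delete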



#2

Very nice and clean. Thanks for sharing. I have open-sourced my ee v3 & v4 backup scripts here: https://github.com/microram/ee4-tools


#3

You do not need mysql credentials when running:

ee shell example.com --command='wp db export'
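For example, a rough sketch that loops over every site this way (I’m assuming ee site list supports --format=text and prints one domain per line on your version; check that first) could be:

#!/bin/bash
# Export each site's database via EasyEngine's own shell; no MySQL credentials needed.
# The dump typically lands in the site's htdocs directory.
for site in $(ee site list --format=text); do
    ee shell "$site" --command='wp db export'
done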

I’ve written a script myself but for v3 to v4, server to server migration.


#4

I’ve written a script myself but for v3 to v4, server to server migration.

Please share your scripts. Maybe you have ideas that would help others.


#5

Thanks for your work and for sharing! I think everyone needs backups, so it would be nice to have an option to back up and upload using restic and Wasabi object storage. Yes, for WordPress there are so many options.
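For what it’s worth, a bare-bones restic-to-Wasabi sketch (the bucket name and backup path are placeholders, and you would wire it in after the archives are created) might look like:

# Credentials for the S3-compatible Wasabi endpoint and the restic repository
export AWS_ACCESS_KEY_ID=<wasabi-access-key>
export AWS_SECRET_ACCESS_KEY=<wasabi-secret-key>
export RESTIC_PASSWORD=<repository-password>

# One-time: initialise the repository
restic -r s3:https://s3.wasabisys.com/my-backup-bucket init

# Back up the local archives
restic -r s3:https://s3.wasabisys.com/my-backup-bucket backup /var/Backups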

I saw some inspiration here:


#6

I’ve updated my code above.

It now runs on the current v4.0.9, including the change to the Docker container naming scheme; the script now finds the correct container ID automatically.

It also finds the MySQL root password itself, so if that ever changes, the script doesn’t break.

David.


#7

Nice script :+1:

I must admit I’ve gotten lazy with my backups. Nowadays I simply use DigitalOcean’s droplet backups to take an entire snapshot, set-and-forget.

This script would work nicely with s3fs for mounting an S3 bucket and transferring the files/DB there. I guess I’d just use rsync with the archive/compression flags in that case instead of azcopy. Thanks for sharing!
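A rough sketch of that approach (bucket name, mount point, and credentials file are placeholders) might be:

# Mount the bucket with s3fs, using credentials stored in a passwd file (chmod 600)
s3fs my-backup-bucket /mnt/s3-backups -o passwd_file=/root/.passwd-s3fs

# Sync the backup archives across, preserving timestamps and permissions
rsync -a /var/Backups/ /mnt/s3-backups/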


#8

@paramdeo, Thanks.

I too like the backups that DO/Linode/etc. provide, but because of their all-or-nothing style, I find it’s a lot of work to restore a single site from a few days ago.

The process for a granular restore is to restore the whole server (presumably to another server), back up the required data from that site, and then restore that backup onto the [broken] live site.

Given all this, I find a granular backup is just easier.

But to each their own!
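For reference, restoring a single site from the archives my script produces is roughly this (the timestamp, site, and database names are illustrative; tar stores the paths without the leading slash):

# Pull one site's files out of the big archive, into a scratch directory first
mkdir -p /tmp/restore && cd /tmp/restore
tar -xjf /var/Backups/siteFiles-20190101-0130.tar.bz2 opt/easyengine/sites/example.com

# Extract the SQL dumps and import the one you need into the global DB container
# ($DOCKERDatabaseID and $PASSWORD are found the same way as in the backup script)
tar -xjf /var/Backups/DBs-20190101-0130.tar.bz2
docker exec -i $DOCKERDatabaseID mysql -u root -p$PASSWORD < var/Backups/example_com.sql

From there you copy the extracted files back into /opt/easyengine/sites/example.com as needed.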

Can you give me/post here a sanitized version of the line to send to S3 via an s3fs command? I’ll add it in for the reference of people who may be interested.

David.


#9

We can refer to this