Full SQL and File backup of all sites on v4 - How I'm doing it using cron via script

Hi all,

I just wanted to post a script I wrote to back up the individual site files and SQL databases from v4, in case it helps anybody.

Background/Notes/Considerations:

  • I wanted all the site files in one big archive, along with each individual site’s database exported to a .sql file. I find this to be the best way to keep a restorable copy of the full site for emergencies.

  • On my system, I put the backups in /var/Backups, then use azcopy to upload them to Azure Blob Storage.

  • I keep the clock on my server set to GMT, but I want the filename to show the local time the backup was made. You can adjust this by changing the timezone defined in the script.

  • I put this script into a file in /usr/local/sbin/ called Backup.sh, did a chmod +x on it, and then run it from cron as root.

  • If you want to send it to Azure, update the last line with your own info. If you want to do something else with the backups, adjust/remove accordingly. Also: the removal of these backups from Azure is not handled here, so you’ll need to address that yourself (we have another script that purges backups older than x days; see the retention sketch after this list).

  • I have no error handling in here; this script presumes everything runs correctly :grin:.
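Purging isn’t part of the script below, but a minimal local-retention sketch, assuming a 14-day window and the /var/Backups path from my setup, would be something like:

# Hedged sketch: delete local backup archives older than 14 days.
# The path and the 14-day window are assumptions; purging the copies
# already uploaded to Azure is a separate job and not shown here.
find /var/Backups -name '*.tar.bz2' -mtime +14 -delete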

I hope it helps someone.

David.

Script:

#!/bin/bash
# bash (not plain sh) is required for the [[ ]] tests below.

USER="root"
# Pull the MySQL root password out of EasyEngine's docker-compose file.
PASSWORD=$(grep MYSQL_ROOT_PASSWORD /opt/easyengine/services/docker-compose.yml | awk -F'=' '{print $2}')
OUTPUT="/var/Backups"
# Timestamp in local time, even though the server clock is GMT.
DATETIME=$(TZ=":America/Edmonton" date +%Y%m%d-%H%M)
# Find the ID of the global database container.
DOCKERDatabaseID=$(docker ps | grep -e 'services_global-db' | cut -c1-12)

# List every database in the container, stripping the table border and the header row.
databases=$(docker exec $DOCKERDatabaseID bash -c "mysql -h localhost --user=$USER --password=$PASSWORD -e 'show databases;'" | tr -d "| " | grep -v Database)

for db in $databases; do
    # Skip MySQL's system schemas and EasyEngine's internal databases (which start with _).
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
        # Uncomment the next line if you want to know which DB the script is on.
        #echo "Dumping database: $db"
        docker exec $DOCKERDatabaseID bash -c "/usr/bin/mysqldump -u $USER -p$PASSWORD --databases $db" > $OUTPUT/$db.sql
    fi
done

# Bundle the dumps into one archive, then remove the loose .sql files.
tar -jcf $OUTPUT/DBs-$DATETIME.tar.bz2 $OUTPUT/*.sql

rm -f $OUTPUT/*.sql

# Archive all site files, following symlinks.
tar -jcf $OUTPUT/siteFiles-$DATETIME.tar.bz2 /opt/easyengine/sites/* --dereference

# Upload everything to Azure Blob Storage.
azcopy --source /var/Backups --destination [YOUR BLOB URL HERE] --dest-key [YOUR BLOB KEY HERE] --quiet
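For completeness, restoring a single site’s database from one of these archives would go roughly like this (the timestamp and database name are placeholders, and note that tar strips the leading /, so the dumps extract under a relative var/Backups/ path):

cd /tmp
tar -jxf /var/Backups/DBs-20190101-0400.tar.bz2
DOCKERDatabaseID=$(docker ps | grep -e 'services_global-db' | cut -c1-12)
PASSWORD=$(grep MYSQL_ROOT_PASSWORD /opt/easyengine/services/docker-compose.yml | awk -F'=' '{print $2}')
# The dump was made with --databases, so it recreates and selects the DB itself.
docker exec -i $DOCKERDatabaseID mysql -u root -p$PASSWORD < var/Backups/example_db.sql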



Very nice and clean. Thanks for sharing. I have open-sourced my ee v3 & v4 backup scripts here: https://github.com/microram/ee4-tools


You do not need mysql credentials when running:

ee shell example.com --command='wp db export'
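As a rough sketch, that could replace the whole mysqldump loop — assuming `ee site list --format=text` prints one site per line on your version, and that the shell’s working directory (/var/www/htdocs) maps to /opt/easyengine/sites/&lt;site&gt;/app/htdocs on the host:

# Hedged sketch: per-site DB export without touching MySQL credentials.
for site in $(ee site list --format=text); do
    ee shell "$site" --command='wp db export db-backup.sql'
    mv "/opt/easyengine/sites/$site/app/htdocs/db-backup.sql" "/var/Backups/$site.sql"
done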

I’ve written a script myself, but for v3-to-v4, server-to-server migration.


Please share your scripts. Maybe you have ideas that would help others.

Thanks for your work and sharing! I think everyone needs backups, so it would be nice to have an option to back up and upload using restic and Wasabi object storage. Yes, for WordPress there are many options.
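A minimal restic-to-Wasabi sketch, assuming the archives already land in /var/Backups, and with bucket, keys, and repository password as placeholders, could look like:

# Hedged sketch: push /var/Backups to a Wasabi bucket with restic.
export AWS_ACCESS_KEY_ID="YOUR-WASABI-ACCESS-KEY"
export AWS_SECRET_ACCESS_KEY="YOUR-WASABI-SECRET-KEY"
export RESTIC_REPOSITORY="s3:https://s3.wasabisys.com/your-backup-bucket"
export RESTIC_PASSWORD="your-repo-password"

restic init                             # first run only: creates the repository
restic backup /var/Backups              # incremental snapshot of the backup dir
restic forget --keep-daily 14 --prune   # retention: keep 14 daily snapshots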

I saw some inspiration here:

I’ve updated my code above.

It now runs on the current v4.0.9 release and handles the schema change to the Docker container names; the script now finds the correct container ID automatically.

It also finds the MySQL root password itself, so if that ever changes, the script doesn’t break.
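For anyone curious, if you’d rather not parse docker ps output with grep and cut, docker’s own filtering should do the same job (a sketch, assuming the container name still contains global-db):

# Hedged alternative to the grep/cut lookup in the script above.
DOCKERDatabaseID=$(docker ps --filter name=global-db --format '{{.ID}}')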

David.

Nice script :+1:

I must admit I’ve gotten lazy with my backups. Nowadays I simply use DigitalOcean’s droplet backups to take an entire snapshot, set-and-forget.

This script would work nicely with s3fs for mounting an S3 bucket and transferring the files/DB there. I guess I’d just use rsync with the archive/compress flags in that case instead of azcopy. Thanks for sharing!

@paramdeo, Thanks.

I too like the backups that DO/Linode/etc provide, but I find that because of its all-or-nothing style, it’s a lot of work to restore a single site from a few days ago.

The process for a granular restore is to restore the whole server, presumably to another server, then back up the required data from that site, then restore that backup onto the [broken] live site.

Given all this, I find a granular backup is just easier.

But to each their own!

Can you give me/post here a sanitized version of the line to send to S3 via an s3fs command? I’ll add it in for the reference of people who may be interested.

David.

We can refer to this
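For reference, a minimal s3fs mount-and-sync would be roughly this (bucket name, mount point, and keys are placeholders):

# Hedged sketch: mount an S3 bucket with s3fs, then rsync the backups into it.
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
mkdir -p /mnt/backups
s3fs your-backup-bucket /mnt/backups -o passwd_file=~/.passwd-s3fs
rsync -az /var/Backups/ /mnt/backups/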

Hello,

I’m using @davidsandbrand’s script to back up my EE site with Rclone (I put backup.sh in /root/backup.sh). The backups are then uploaded to Google Drive. However, I could not find a way to run the cron job properly. I tried these commands:

ee cron create rlylit.com --command='/root/backup.sh' --schedule='@daily'
ee cron create host --command='/root/backup.sh' --schedule='@daily'
EDITOR=nano crontab -e

The first two don’t work; the cron job never runs. The third works, but only for site files: it creates a blank tar.bz2 file for the database.

If I just run the script by typing /root/backup.sh, everything is fine. Site files and database are backed up properly.

Can someone point out what’s wrong? Thanks in advance. Here is my backup.sh script:

SERVER_NAME=SITE_BACKUP

TIMESTAMP=$(date +"%F")
BACKUP_DIR="/root/backup/$TIMESTAMP"
MYSQL_USER="root"
MYSQL_PASSWORD=$(grep MYSQL_ROOT_PASSWORD /opt/easyengine/services/docker-compose.yml | awk -F'=' '{print $2}')
MYSQL=/usr/bin/mysql
MYSQLDUMP=/usr/bin/mysqldump
SECONDS=0
DOCKERDatabaseID=$(docker ps | grep -e 'services_global-db' | cut -c1-12)

mkdir -p "$BACKUP_DIR/mysql"
databases=$(docker exec $DOCKERDatabaseID bash -c "mysql -h localhost --user=$MYSQL_USER --password=$MYSQL_PASSWORD -e 'show databases;'" | tr -d "| " | grep -v Database)

echo "Starting Backup Database";

for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]]; then
        # Uncomment the next line if you want to know which DB the script is on.
        #echo "Dumping database: $db"
        sudo docker exec $DOCKERDatabaseID bash -c "/usr/bin/mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --databases $db" > $BACKUP_DIR/$db.sql
    fi
done

tar -jcf $BACKUP_DIR/DBs-$TIMESTAMP.tar.bz2 $BACKUP_DIR/*.sql

rm -f $BACKUP_DIR/*.sql

echo "Finished";
echo '';

echo "Starting Backup Website";

tar -jcf $BACKUP_DIR/siteFiles-$TIMESTAMP.tar.bz2 /opt/easyengine/sites/* --dereference

echo "Finished";
echo '';

size=$(du -sh $BACKUP_DIR | awk '{ print $1}')

echo "Starting Uploading Backup";
/usr/sbin/rclone --transfers=1 move $BACKUP_DIR "a2hostingvps:$SERVER_NAME/$TIMESTAMP" >> /var/log/rclone.log 2>&1
# Cleanup
rm -rf $BACKUP_DIR
/usr/sbin/rclone -q --min-age 1w delete "MyVPS:$SERVER_NAME"   # Remove all backups older than 1 week
/usr/sbin/rclone -q --min-age 1w rmdirs "MyVPS:$SERVER_NAME"   # Remove all empty folders older than 1 week
/usr/sbin/rclone cleanup "MyVPS:"                              # Cleanup trash
echo "Finished";
echo '';

duration=$SECONDS
echo "Total $size, $(($duration / 60)) minutes and $(($duration % 60)) seconds elapsed."

Run it from the system’s cron, not inside the docker container’s cron.

In case that doesn’t make sense, here’s what I mean:

access your system’s cron schedule file:

sudo crontab -e

then create the schedule by adding a line at the bottom:

0 4 * * * /root/backup.sh >> /dev/null 2>&1

  • the 0 4 * * * means every day at 04:00 (system time)
  • /root/backup.sh is the script
  • >> /dev/null 2>&1 redirects any output (including errors) to /dev/null; since it’s running as a scheduled task, you just want to get rid of this output.

Thank you. I also did that. The script does run, but it only backs up the site files. If I manually type /root/backup.sh, it backs up everything. I don’t know why :frowning:

Does the script have an interpreter defined on the first line?

like

#!/bin/sh

Jobs that run under cron sometimes have a different interpreter, so they can be processed differently than at the console if it’s not defined.
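For example, on Debian/Ubuntu the default /bin/sh is dash, which doesn’t understand the bash-only [[ ]] tests in the script, so you’d see errors like [[: not found and the database loop would do nothing. Besides adding #!/bin/bash, a POSIX-safe version of the filter would look roughly like this (a sketch reusing the variable names from the script above):

# POSIX-compatible replacement for the bash-only [[ ]] tests.
case "$db" in
    information_schema|performance_schema|mysql|_*)
        # Skip MySQL's system schemas and EasyEngine-internal databases.
        ;;
    *)
        docker exec $DOCKERDatabaseID bash -c "/usr/bin/mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD --databases $db" > $BACKUP_DIR/$db.sql
        ;;
esac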


Ohhh no it doesn’t. Thank you for pointing this out @davidsandbrand. You saved my life!

Worked like a charm for me. It’s cleaner than the complicated script I wrote. Thanks!

I ran your script, but /var/Backups/ only contains the siteFiles archive and no DBs. Can you please help me?

./Backup.sh: 12: [[: not found
./Backup.sh: 12: [[: not found
./Backup.sh: 12: [[: not found
./Backup.sh: 12: [[: not found
tar: Removing leading `/' from member names
tar: /var/Backups/*.sql: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets

Sounds like a permission problem. Are you running as root, and are all the paths the same as the script is using?

Yes, running as root, and the paths are the same.

./backup.sh: 12: [[: not found

If possible, can you please create a video tutorial? It would help me as well as many others.

Can anyone please send me the backup script to [email protected]