
Backup WordPress Database And Filesystem Data On Linux With Scripts

If you’re like me, you run a WordPress blog and are terrified of the thought of something going wrong.  With core updates, theme updates, plugin updates, and server component updates, there is a lot of room for error.  This is where a WordPress backup can help ease your mind.
WordPress recommends taking a backup of your blog before any of these updates, and there are even some popular plugins that will do this for you.  For example, you could use the popular UpdraftPlus or something similar, but I believe there is room for error there as well.  While I could be wrong, I think WordPress itself must be in working order for a backup plugin to run successfully.
The alternative would be to create your own backup scripts that run on a cron schedule.  We’re going to see how to do this for WordPress instances running on a Linux machine.

Creating the Backup Script

There are two core components that need to be backed up in case of a catastrophe.  You need to back up the WordPress files, which include plugins, themes, and uploads, as well as the data that resides in your database.
Create the following backup.sh script somewhere on your server:

#!/bin/bash

# Date stamp used in the backup file name, e.g. 20161230
TODAY=$(date '+%Y%m%d')
TEMP_DIR=/home/nraboy/backups/temp

BACKUP_NAME="blog"
DB_NAME="DATABASE_NAME_HERE"
DB_USER="DATABASE_USERNAME_HERE"
DB_PASS="DATABASE_PASSWORD_HERE"
SITE_PATH=/var/www

echo "Starting Backup..."

# Create the temporary directory if it doesn't already exist
mkdir -p "$TEMP_DIR"

# Dump the database structure and data to a SQL file
mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" > "$TEMP_DIR/database.sql"

# Archive the WordPress files, excluding directories we don't need
tar --exclude="updraft" -zcf "$TEMP_DIR/files.tar.gz" "$SITE_PATH"

# Bundle the database dump and file archive into a single archive
tar -zcf "$BACKUP_NAME-$TODAY.tar.gz" -C "$TEMP_DIR" .

# Clean up the temporary directory
rm -Rf "$TEMP_DIR"

echo "Backup Complete [$(du -sh "$BACKUP_NAME-$TODAY.tar.gz" | awk '{print $1}')]"


So what is happening in the above script?
First we obtain the date, which will be used when naming our backups.  In my scenario I never planned to take more than one backup per day.  We also define a temporary directory which will hold each of the backup components.
In the next section we define the backup name, which you can use to identify the backup.  The end result will be a file named something like blog-20161230.tar.gz, based on what I have in the script.
The SITE_PATH should be the location where your WordPress blog or website resides on the server.  If you’re unsure, a common location is the /var/www directory.  Within that path there should be a file called wp-config.php which contains the database information.
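If you need to double check those credentials before filling in the script, you could pull them straight out of wp-config.php with a quick grep; the /var/www path here is just the example from above:

grep -E "DB_NAME|DB_USER|DB_PASSWORD" /var/www/wp-config.php
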
Everything so far was initialization.
When the script runs, the temporary directory is created and the MySQL database is dumped into a SQL file.  This dump contains both table structure and data.  Once the database is dumped, all the WordPress files are archived into a tar file, excluding any directories that we define.  Excluded directories could be other backup directories, cache directories, etc.
With two files in our temporary directory, we can create our single and final tar archive from them.
Now that you have a script that will create and bundle a file and database backup, you need to configure it to run on a schedule using crontab.
Execute crontab -e on your server and add the following line, adjusting the path to wherever you saved backup.sh:
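
0 2 * * * /home/nraboy/backups/backup.sh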

The above line will execute our backup script every day at 2:00am.  Because we never specified an output directory in the script, the archive will end up in the cron job’s working directory, which is typically your home directory.
We could easily do something like this in our script:

tar -zcf /home/nraboy/$BACKUP_NAME-$TODAY.tar.gz -C $TEMP_DIR .

In the event that something bad happens and you need to restore your WordPress blog from this backup, you could extract the files and replace what you currently have, then import the SQL file into MySQL, as sketched below.  Just remember this is a full snapshot, not an incremental backup.
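
As a rough sketch, and assuming the archive sits in your home directory under the blog-20161230.tar.gz name from earlier, a restore might look something like this:

# Unpack the backup bundle into a scratch directory
mkdir -p ~/restore
tar -zxf ~/blog-20161230.tar.gz -C ~/restore

# Put the WordPress files back; tar stored them relative to /
tar -zxf ~/restore/files.tar.gz -C /

# Import the database dump, entering the password when prompted
mysql -u DATABASE_USERNAME_HERE -p DATABASE_NAME_HERE < ~/restore/database.sql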

Conclusion

While there are many free and paid WordPress backup plugins available, sometimes it takes a good old-fashioned Linux script to make you feel at ease about your website or blog.  I personally only take backups on a weekly or monthly basis, but your needs may be different than mine.  Just note that because these backups are full snapshots rather than incremental ones, they can take up a fair amount of space on your hard drive.
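
If disk space becomes a concern, you could prune old archives with a line like the following, assuming the backups land in /home/nraboy and you want to keep thirty days worth:

find /home/nraboy -name "blog-*.tar.gz" -mtime +30 -delete

This could be tacked onto the end of the backup script or added as its own cron entry.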
