Making Database Backups

It's important to back up your databases in case tables are lost or damaged. If a serious system crash occurs, you want to be able to restore your tables to the state they were in at the time of the crash with as little data loss as possible. Likewise, a user who issues an unwise DROP DATABASE or DROP TABLE may show up at your door requesting that you perform data recovery.

Database backups also are useful for copying databases to another server. Most commonly, a database is transferred to a server running on another host, but you can also transfer data to a different server running on the same host. You might do this if you're testing a server for a new release of MySQL and want to use it with some real data from your production server.

Another use for a backup is to set up a replication server, because one of the first steps in setting up a slave server is to take a snapshot of the master server at a specific point in time. The backup serves as this snapshot, and loading it into the slave server brings it up to date with respect to the master server. Thereafter, updates made on the master server are replicated to the slave server through the standard replication protocol. The procedure for setting up replication is discussed in Chapter 11.

There are two general categories of database backups: text backups, such as the SQL-format dump files produced by mysqldump, and binary backups, made by directly copying the files in which table contents are stored.
Each method has its own advantages and disadvantages. Some of the factors to consider are whether you can leave the server running, the time needed to make the backup, and portability of the backup.
Whichever backup method you choose, there are certain principles to which you should adhere to ensure the best results if you ever need to restore database contents:
Making Backups with mysqldump

The mysqldump program creates text dump files. By default, it writes a dump file in SQL format consisting of CREATE TABLE statements that create the tables being dumped and INSERT statements containing the data for the rows in the tables. To re-create the dumped tables later, reload the dump file into MySQL by using it as input to mysql. (Don't use mysqlimport to read SQL-format mysqldump output; mysqlimport expects to read raw data, not SQL statements.) To back up all tables from all databases into a file, you can use a command like this:
% mysqldump --opt --all-databases > /archive/mysql/dump-all.2005-01-02
However, that produces a rather large dump file if you have a lot of data. You can dump a single database into a file as follows:
% mysqldump --opt sampdb > /archive/mysql/sampdb.2005-01-02
The beginning of the output file will look something like this:

-- MySQL dump 10.7
--
-- Host: localhost    Database: sampdb
-- ------------------------------------------------------
-- Server version	4.1.6-gamma-log

--
-- Table structure for table `absence`
--

DROP TABLE IF EXISTS `absence`;
CREATE TABLE `absence` (
  `student_id` int(10) unsigned NOT NULL default '0',
  `date` date NOT NULL default '0000-00-00',
  PRIMARY KEY (`student_id`,`date`),
  CONSTRAINT `absence_ibfk_1` FOREIGN KEY (`student_id`) REFERENCES `student` (`student_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `absence`
--

LOCK TABLES `absence` WRITE;
INSERT INTO `absence` VALUES (3,'2004-09-03'),(5,'2004-09-03'),
(10,'2004-09-06'),(10,'2004-09-09'),(17,'2004-09-07'),(20,'2004-09-07');
UNLOCK TABLES;
...

The rest of the file consists of more CREATE TABLE and INSERT statements.

Dump files often are large, so you'll likely want to do what you can to make them smaller. One way to reduce their size is to use the --opt option, which optimizes the dump process to generate less output. One effect of this option is to cause mysqldump to write multiple-row INSERT statements. These not only take less space than the equivalent set of single-row INSERT statements, they can also be processed more quickly when you reload the dump file later.

Another way to reduce the size of a dump file is to compress it. On Windows, you can use WinZip or a similar program to compress the dump and produce a file in Zip format. On Unix, you might use gzip instead. You can even compress the backup as you generate it by using a command pipeline:
% mysqldump --opt sampdb | gzip > /archive/mysql/sampdb.2005-01-02.gz
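To restore from such dumps later (the reload step mentioned at the beginning of this section), feed the dump file back to mysql. Here is a minimal sketch, assuming that the sampdb database already exists on the destination server; the second command shows how to reload the compressed dump without uncompressing it to disk first:

% mysql sampdb < /archive/mysql/sampdb.2005-01-02
% gunzip -c /archive/mysql/sampdb.2005-01-02.gz | mysql sampdb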
If you find large dump files difficult to manage, it's possible to dump the contents of individual tables by naming them following the database name on the mysqldump command line. mysqldump will dump just the named tables rather than all the tables in the database, resulting in smaller, more manageable files. The following example shows how to dump subsets of the sampdb tables into separate files:

% mysqldump --opt sampdb member president > hist-league.sql
% mysqldump --opt sampdb student score grade_event absence > gradebook.sql

mysqldump has many options. Some of those that you may find useful include the following:
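Two such options, to give a quick sketch of what's available (the output filenames and the WHERE condition below are merely illustrative), are --no-data, which writes table creation statements but no rows, and --where, which dumps only rows matching a given condition:

% mysqldump --no-data sampdb > sampdb-structure.sql
% mysqldump --where="date >= '2004-09-01'" sampdb absence > absence-recent.sql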
mysqldump has several other options; consult Appendix F for more information.

Making Binary Database Backups

A method for backing up databases or tables that doesn't involve mysqldump is to copy table files directly. Typically this is done using a special program developed for the task (such as mysqlhotcopy or InnoDB Hot Backup) or regular filesystem utilities (such as cp, tar, or cpio). There are two key points to observe when you use a direct-copy backup method:
If you make a binary backup, beware of symbolic links, such as symlinks in the data directory to database directories or symlinks to MyISAM data or index files. These present a problem because your file-copying technique might copy only the symlinks and not the data that they point to.

Making a Complete Binary Backup

A complete binary backup includes all files in which table contents are stored and any log files that are used by specific storage engines. For good measure, you should also copy the binary logs. If the server is a replication slave, copy the relay log files and the master.info and relay-log.info files. Also, the slave may have created files with names of the form SQL_LOAD-xxx in its temporary file directory. You should back these up, too; they're needed for LOAD DATA statements. These files will be in the directory named by the --slave-load-tmpdir option (which, if not given, defaults to the value of the tmpdir system variable). To make it easier to identify these files for backup, create a directory to be devoted to use by the slave server, and start the slave with the --slave-load-tmpdir option set to that value.

To properly copy all the files just discussed, you must stop the server so that storage engines close their log files and the server closes any other logs that it is writing.

All of that sounds like a lot of stuff to back up, but it is not necessarily complicated to do. For example, all of your database directories are under the data directory, and logs and information files are created there by default as well. In this case, you can make a backup by stopping the server and copying the entire data directory. To create a backup as a compressed tar file under the /archive/mysql directory, change location into the data directory and use a command like this:
% tar czf /archive/mysql/dump-all-2005-04-11.tar.gz .
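Putting the steps together, a minimal sketch of the stop-copy-restart sequence might look like this, assuming a Unix installation with its data directory at /usr/local/mysql/data and a server that you start with mysqld_safe (adjust the paths and add whatever connection options you normally use with mysqladmin):

% mysqladmin -p shutdown
% cd /usr/local/mysql/data
% tar czf /archive/mysql/dump-all-2005-04-11.tar.gz .
% mysqld_safe &

To restore from such an archive later, stop the server again, unpack the archive into the data directory with tar xzf, and restart the server.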
Making a Partial Binary Backup

Making a partial binary backup by copying files is similar to making a complete backup, except that you copy only a subset of the full set of files. In some cases, a partial backup can be made without stopping the server if you use the read-locking protocol to lock the tables that you want to copy. This is true if a database contains only MyISAM tables, for example. Suppose that you want to back up the mydb database located under the data directory /usr/local/mysql/data and store the backup under the archive directory /archive/mysql. Read-lock the tables, and then execute these commands:

% mkdir /archive/mysql/mydb
% cd /usr/local/mysql/data/mydb
% cp -r . /archive/mysql/mydb

After executing these commands, the /archive/mysql/mydb directory contains a copy of the mydb database. Individual tables can be backed up like this:

% mkdir /archive/mysql/mydb
% cd /usr/local/mysql/data/mydb
% cp tbl1.* /archive/mysql/mydb
% cp tbl2.* /archive/mysql/mydb
...

When you're done backing up, you can release the table locks. (As an alternative to using the read-locking protocol, stop the server before copying the files and restart it afterward.)

Making Backups with mysqlhotcopy

mysqlhotcopy is a Perl DBI script that helps you make database backups. The "hot" in the name refers to the fact that the backups are made while the server is running. mysqlhotcopy has the following principal benefits:
mysqlhotcopy also has certain limitations:
The following examples assume that databases to be backed up contain only MyISAM or ISAM tables. There are several ways to invoke mysqlhotcopy. Suppose that you want to copy a database named mydb. The following command creates a directory mydb_copy in the server's data directory and copies the files in the mydb database directory into it:
% mysqlhotcopy mydb
However, it's not necessarily a good idea to back up a database into the data directory. The new directory will also appear to the server to be a database that it can access. (You can see this by issuing a SHOW DATABASES statement after executing the preceding command. The output will show both mydb and mydb_copy.) Because of this, the tables in the backup directory could be modified by clients that connect to the server. To copy the mydb database into a directory named mydb under a directory that you specify explicitly, give the directory pathname following the database name. For example, to copy the mydb database to a directory named /archive/mysql/mydb-2005-03-12, use this command:
% mysqlhotcopy mydb /archive/mysql/mydb-2005-03-12
To determine what actions mysqlhotcopy would perform for any given command, include the -n option in your invocation syntax. This runs mysqlhotcopy in "no execution" mode, so that it just prints commands rather than executing them.

Backing Up InnoDB or BDB Tables

Tables for the InnoDB and BDB transactional storage engines can be dumped using mysqldump, just like any other kind of table. One option that is useful for transactional engines is --single-transaction, which causes mysqldump to dump the tables as part of a transaction. For InnoDB, this ensures that the tables are not modified during the dump, so that you get a consistent backup. (A sample invocation appears below.)

You can also make a binary backup. For InnoDB, the best way to do this is to use InnoDB Hot Backup, available from Innobase Oy. InnoDB Hot Backup is a commercial tool that allows you to make InnoDB backups with the server running. Visit http://innodb.com for details. To make a binary backup yourself, take care to observe the following special requirements:
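The --single-transaction invocation mentioned earlier might look like the following sketch; the database name and output file simply carry over the sampdb examples used earlier in this section:

% mysqldump --single-transaction sampdb > /archive/mysql/sampdb-innodb.2005-01-02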
Making Backups Using a Replication Slave

If you have a replication slave server set up, it can help you resolve a conflict of interest that arises from your duties as a MySQL administrator:
The goal of maintaining accessibility conflicts with the need to impose a complete or partial loss of database access on clients while you make backups. A replication slave provides a way out of this dilemma. Rather than making backups of the master server, use the slave server instead: suspend replication on the slave while you make the backup, resume it afterward, and the slave will catch up on any updates made on the master server during the backup period. This way, you need not stop the master or otherwise make it unavailable to clients during the backup. The following list describes some possible strategies for backing up the slave:
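Whichever strategy you choose, the suspend-and-resume portion of the procedure on the slave looks something like this minimal sketch (the middle step stands for whatever backup method you use):

mysql> STOP SLAVE;
    (make the backup of the slave's databases)
mysql> START SLAVE;

After replication restarts, SHOW SLAVE STATUS lets you watch the slave catch up to the master.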
Using a Backup to Rename a Database

MySQL has no command for renaming a database, but you can do so by using a dump file. Dump the database with mysqldump, create a new empty database with the new name, and then reload the dump file into the new database. After that, you can drop the old database. Here's an example that shows how to rename db1 to db2:

% mysqldump db1 > db1.sql
% mysqladmin create db2
% mysql db2 < db1.sql
% mysqladmin drop db1

Do not use the --databases option to mysqldump. That causes it to write a USE db1 statement to the dump file, which would cause the file contents to be reloaded into db1 rather than db2.

The preceding method might not work if db1 contains InnoDB tables that have foreign key relationships with tables in other databases. To rename a database while preserving foreign key relationships, create an empty database with the new name. Then use RENAME TABLE to rename each table in the original database into the new database, and drop the original database:

mysql> CREATE DATABASE db2;
mysql> RENAME TABLE db1.t1 TO db2.t1;
mysql> RENAME TABLE db1.t2 TO db2.t2;
mysql> ...
mysql> DROP DATABASE db1;

Whichever method you use for renaming a database, remember that access rights to it are controlled through the grant tables in the mysql database. If any of the grant tables have records that refer specifically to the database that was renamed, you'll need to adjust those records appropriately to refer to the new name. For a database renamed from db1 to db2, the statements to use look like this:

mysql> UPDATE db SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE tables_priv SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE columns_priv SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE host SET Db = 'db2' WHERE Db = 'db1';
mysql> FLUSH PRIVILEGES;

No UPDATE statement is needed for the user table because it has no Db column.
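As a quick check after either renaming method, you can verify that the new database is visible and that accounts still have the privileges you expect; the account named here is only a placeholder:

mysql> SHOW DATABASES;
mysql> SHOW GRANTS FOR 'someuser'@'localhost';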