Backup and Recovery

Published: 27th October, 2018

It is important to back up your databases so that you can recover your data and be up and running again in case problems occur, such as system crashes, hardware failures, or users deleting data by mistake. Backups are also essential as a safeguard before upgrading a MySQL installation, and they can be used to transfer a MySQL installation to another system or to set up replication slave servers. MySQL offers a variety of backup strategies from which you can choose the methods that best suit your requirements.

11.1. Backup and Recovery Types

Logical Versus Physical (Raw) Backups: Logical backups save information represented as logical database structure (CREATE DATABASE, CREATE TABLE statements) and content (INSERT statements or delimited-text files). Physical backups consist of raw copies of the directories and files that store database contents.

Logical backup methods have these characteristics:

  • The backup is done by querying the MySQL server to obtain database structure and content information.
  • Backup is slower than with physical methods because the server must access database information and convert it to logical format. If the output is written on the client side, the server must also send it to the backup program.
  • Backup and restore granularity is available at the server level (all databases), database level (all tables in a particular database), or table level. This is true regardless of storage engine.
  • The backup does not include log or configuration files, or other database-related files that are not part of databases.
  • Backups stored in logical format are machine independent and highly portable.
  • Logical backups are performed with the MySQL server running. The server is not taken offline.
  • Logical backup tools include the mysqldump program and the SELECT ... INTO OUTFILE statement. These work for any storage engine, even MEMORY.
  • To restore logical backups, SQL-format dump files can be processed using the mysql client. To load delimited-text files, use the LOAD DATA INFILE statement or the mysqlimport client.

  Physical backup methods have these characteristics:

  • The backup consists of exact copies of database directories and files. Typically this is a copy of all or part of the MySQL data directory. Data from MEMORY tables cannot be backed up this way because their contents are not stored on disk.
  • Physical backup methods are faster than logical methods because they involve only file copying without conversion.
  • Backup and restore granularity ranges from the level of the entire data directory down to the level of individual files. This may or may not provide for table-level granularity, depending on storage engine.
  • In addition to databases, the backup can include any related files such as log or configuration files.
  • Backups are portable only to other machines that have identical or similar hardware characteristics.
  • Backups can be performed while the MySQL server is not running. If the server is running, it is necessary to perform appropriate locking so that the server does not change database contents during the backup.

  Local Versus Remote Backups: A local backup is performed on the same host where the MySQL server runs, whereas a remote backup is done from a different host. For some types of backups, the backup can be initiated from a remote host even if the output is written locally on the server host.

  • mysqldump can connect to local or remote servers. For SQL output (CREATE and INSERT statements), local or remote dumps can be done and generate output on the client. For delimited-text output (with the --tab option), data files are created on the server host.
  • mysqlhotcopy performs only local backups: It connects to the server to lock it against data modifications and then copies local table files.
  • SELECT ... INTO OUTFILE can be initiated from a local or remote client host, but the output file is created on the server host.

  Full Versus Incremental Backups: A full backup includes all data managed by a MySQL server at a given point in time. An incremental backup consists of the changes made to the data during a given time span (from one point in time to another). MySQL has different ways to perform full backups. Incremental backups are made possible by enabling the server's binary log, which the server uses to record data changes.

11.2. Database Backup Methods

Making Backups by Copying Table Files: For storage engines that represent each table using its own files, tables can be backed up by copying those files. For example, MyISAM tables are stored as files, so it is easy to do a backup by copying the *.frm, *.MYD, and *.MYI files. To get a consistent backup, stop the server, or place a read lock on the relevant tables with LOCK TABLES and then issue FLUSH TABLES for those tables. You need only a read lock; this enables other clients to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES statement is needed to ensure that all active index pages are written to disk before you start the backup. Note that table file copying methods do not work if your database contains InnoDB tables; for example, mysqlhotcopy does not work for InnoDB tables because InnoDB does not necessarily store table contents in database directories.

Backing Up and Recovering an InnoDB Database: InnoDB Hot Backup enables you to back up a running MySQL database, including InnoDB and MyISAM tables, with minimal disruption to operations while producing a consistent snapshot of the database. While InnoDB Hot Backup is copying InnoDB tables, reads and writes to both InnoDB and MyISAM tables can continue. During the copying of MyISAM tables, reads (but not writes) to those tables are permitted. In addition, InnoDB Hot Backup supports creating compressed backup files and performing backups of subsets of InnoDB tables.
If you are able to shut down your MySQL server, you can make a binary backup that consists of all files used by InnoDB to manage its tables. Use the following procedure:

  1. Shut down the MySQL server and make sure that it stops without errors.
  2. Copy all InnoDB data files (ibdata files and .ibd files) into a safe place.
  3. Copy all the .frm files for InnoDB tables to a safe place.
  4. Copy all InnoDB log files (ib_logfile files) to a safe place.
  5. Copy your my.cnf configuration file or files to a safe place.

In addition to making binary backups as just described, regularly make dumps of your tables with mysqldump. A binary file might be corrupted without you noticing it. Dumped tables are stored in human-readable text files, so spotting table corruption becomes easier. Also, because the format is simpler, the chance of serious data corruption is smaller. mysqldump also has a --single-transaction option for making a consistent snapshot without locking out other clients. Replication works with InnoDB tables, so you can use MySQL replication capabilities to keep a copy of your database at sites requiring high availability.

To be able to recover your InnoDB database to the present from the time at which the binary backup was made, run your MySQL server with binary logging turned on. To achieve point-in-time recovery after restoring a backup, you can apply changes from the binary log that occurred after the backup was made. To recover from a crash of your MySQL server, the only requirement is to restart it. InnoDB automatically checks the logs and performs a roll-forward of the database to the present, and it automatically rolls back uncommitted transactions that were present at the time of the crash.

Making Delimited-Text File Backups: To create a text file containing a table's data, you can use:

mysql> SELECT * INTO OUTFILE 'file_name' FROM tbl_name;

Here is an example that produces a file in the comma-separated values (CSV) format used by many programs:

mysql> SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
    ->   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    ->   LINES TERMINATED BY '\n'
    ->   FROM test_table;

The file is created on the MySQL server host, not the client host. For this statement, the output file cannot already exist, because permitting files to be overwritten would constitute a security risk. This method works for any kind of data file, but it saves only table data, not the table structure.
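To see why the OPTIONALLY ENCLOSED BY '"' clause matters, here is a minimal sketch that writes two rows by hand in the same row format (fields terminated by ',', string fields enclosed by '"', lines terminated by '\n'). The values are made up and no MySQL server is involved:

```shell
OUT=$(mktemp)
# A string containing the field separator must be enclosed in quotes,
# which is exactly what OPTIONALLY ENCLOSED BY '"' does for string columns:
printf '%s,"%s",%s\n' 1 'hello, world' 3  > "$OUT"
printf '%s,"%s",%s\n' 2 'plain' 5        >> "$OUT"
cat "$OUT"
```

Without the enclosing quotes, the embedded comma in "hello, world" would be indistinguishable from a field separator when the file is reloaded.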
To reload a delimited-text data file, use LOAD DATA INFILE or mysqlimport; these are covered in later sections.

mysqlhotcopy — A Database Backup Program

mysqlhotcopy uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database backup. It is a fast way to back up a database or single tables, but it can be run only on the same machine where the database directories are located, and it works only for backing up MyISAM and ARCHIVE tables. It runs on Unix. To use mysqlhotcopy, you must have read access to the files for the tables that you are backing up, the SELECT privilege for those tables, the RELOAD privilege (to be able to execute FLUSH TABLES), and the LOCK TABLES privilege (to be able to lock the tables).

shell> mysqlhotcopy db_name [/path/to/new_directory]
shell> mysqlhotcopy db_name_1 ... db_name_n /path/to/new_directory

11.3. Using mysqldump for Backups

11.3.1. Dumping Data in SQL Format: This section describes how to use mysqldump to produce dump files, and how to reload dump files. A dump file can be used in several ways:

  • As a backup to enable data recovery in case of data loss.
  • As a source of data for setting up replication slaves.

There are three general ways to invoke mysqldump:

shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases

By default, mysqldump writes information as SQL statements to the standard output. You can save the output in a file:

shell> mysqldump [arguments] > file_name

To dump all databases, invoke mysqldump with the --all-databases option:

shell> mysqldump --all-databases > dump.sql

To dump only specific databases, name them on the command line and use the --databases option:

shell> mysqldump --databases db1 db2 db3 > dump.sql

The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names. With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database, so database contents are loaded into the same database from which they came. To dump a single database, name it on the command line:

shell> mysqldump --databases test > dump.sql

In the single-database case, it is permissible to omit the --databases option:

shell> mysqldump test > dump.sql

To dump only specific tables from a database, name them on the command line following the database name:

shell> mysqldump test t1 t3 t7 > dump.sql

mysqldump does not dump the INFORMATION_SCHEMA database by default. As of MySQL 5.1.38, mysqldump dumps INFORMATION_SCHEMA if you name it explicitly on the command line, although you must also use the --skip-lock-tables option. Before 5.1.38, mysqldump silently ignores INFORMATION_SCHEMA even if you name it explicitly on the command line.
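The argument-interpretation rule above (first name is the database and the rest are tables, unless --databases is given) can be sketched as a toy shell function; parse_names is a hypothetical illustration, not part of mysqldump:

```shell
# Toy model of how mysqldump interprets its name arguments.
parse_names() {
  if [ "$1" = "--databases" ]; then
    shift
    # With --databases, every remaining name is a database
    echo "databases: $*"
  else
    # Without it, the first name is the database, the rest are tables
    db=$1; shift
    echo "database: $db; tables: $*"
  fi
}
parse_names test t1 t3 t7
parse_names --databases db1 db2 db3
```

This is why "mysqldump db1 db2" dumps only the tables db2 from database db1, while "mysqldump --databases db1 db2" dumps both databases in full.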
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for are also on by default, because --opt is on by default. To select the effect of --opt except for some features, use the --skip option for each such feature. For example, to disable extended inserts and memory buffering, use --opt --skip-extended-insert --skip-quick. Use of --compact is the same as specifying the --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset options.

Options:

--add-drop-database      Add a DROP DATABASE statement before each CREATE DATABASE statement.
--add-drop-table         Add a DROP TABLE statement before each CREATE TABLE statement.
--add-drop-trigger       Add a DROP TRIGGER statement before each CREATE TRIGGER statement.
--add-locks              Surround each table dump with LOCK TABLES and UNLOCK TABLES statements.
--all-databases          Dump all tables in all databases.
--complete-insert        Use complete INSERT statements that include column names.
--databases              Dump several databases.
--disable-keys           For each table, surround the INSERT statements with statements to disable and enable keys.
--dump-date              Include the dump date as a "Dump completed on" comment if --comments is given.
--extended-insert        Use multiple-row INSERT syntax that includes several VALUES lists.
--tab=path               Produce tab-separated data files.
--fields-enclosed-by=string     Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE.
--fields-terminated-by=string   Used with the --tab option; has the same meaning as the corresponding clause for LOAD DATA INFILE.
--flush-logs             Flush the MySQL server log files before starting the dump.
--no-create-db           Suppress the CREATE DATABASE statements.
--no-create-info         Do not write CREATE TABLE statements that re-create each dumped table.
--no-data                Do not dump table contents.
--opt                    Shorthand for --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset.
--routines               Dump stored routines (procedures and functions) from the dumped databases.
--skip-triggers          Do not dump triggers.
--skip-add-drop-table    Do not add a DROP TABLE statement before each CREATE TABLE statement.
--skip-extended-insert   Turn off extended-insert.
--tables                 Override the --databases or -B option.

--lock-all-tables Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables.
--lock-tables, -l For each dumped database, lock all tables to be dumped before dumping them. The tables are locked with READ LOCAL to permit concurrent inserts in the case of MyISAM tables.

--master-data[=value] Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement. If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1. This option also automatically turns on --lock-all-tables.

--compact Produce more compact output. This option enables the --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset options.

11.3.2. Dumping Data in Delimited-Text Format with mysqldump: If you invoke mysqldump with the --tab=dir_name option, it uses dir_name as the output directory and dumps tables individually in that directory using two files for each table. The table name is the basename for these files. For a table named t1, the files are named t1.sql and t1.txt. The .sql file contains a CREATE TABLE statement for the table. The .txt file contains the table data, one line per table row. The following command dumps the contents of the db1 database to files in the /tmp directory:

shell> mysqldump --tab=/tmp db1

It is best that --tab be used only for dumping a local server. If you use it with a remote server, the --tab directory must exist on both the local and remote hosts; the .txt files are written by the server in the remote directory (on the server host), whereas the .sql files are written by mysqldump in the local directory.

11.3.3. Reloading SQL-Format Backups: To reload a dump file written by mysqldump that consists of SQL statements, use it as input to the mysql client.

  • When you reload the dump file, you must specify a default database name so that the server knows which database to reload.
  • For reloading, you can specify a database name different from the original name, which enables you to reload the data into a different database.
  • If the database to be reloaded does not exist, you must create it first.

If the dump file was created by mysqldump with the --all-databases or --databases option, it contains CREATE DATABASE and USE statements, and it is not necessary to specify a default database into which to load the data:

shell> mysql < dump.sql

Alternatively, from within mysql, use a source command:

mysql> source dump.sql

If the file is a single-database dump not containing CREATE DATABASE and USE statements, create the database first, then specify the database name when you load the dump file:

shell> mysql db1 < dump.sql

Alternatively, from within mysql, create the database, select it as the default database, and load the dump file:

mysql> CREATE DATABASE IF NOT EXISTS db1;
mysql> USE db1;
mysql> source dump.sql

Reloading Delimited-Text Format Backups: For backups produced with mysqldump --tab, each table is represented in the output directory by an .sql file containing the CREATE TABLE statement for the table and a .txt file containing the table data. To reload a table, first change location into the output directory. Then process the .sql file with mysql to create an empty table, and process the .txt file to load the data into the table:

shell> mysql db1 < t1.sql
shell> mysqlimport db1 t1.txt

An alternative to using mysqlimport to load the data file is to use the LOAD DATA INFILE statement from within the mysql client:

mysql> USE db1;
mysql> LOAD DATA [LOCAL] INFILE 't1.txt' INTO TABLE t1;

If you used any data-formatting options with mysqldump when you initially dumped the table, you must use the same options with mysqlimport or LOAD DATA INFILE to ensure proper interpretation of the data file contents:

shell> mysqlimport --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a db1 t1.txt

Or:

mysql> USE db1;
mysql> LOAD DATA INFILE 't1.txt' INTO TABLE t1
    ->   FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    ->   LINES TERMINATED BY '\r\n';
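The required symmetry between the dump-side and reload-side format options can be made explicit by deriving both command lines from a single set of format variables. A sketch, reusing the example names db1 and t1.txt from the text; nothing is executed against a server here, only the command strings are assembled:

```shell
# One set of format settings drives both the mysqlimport command line
# and the equivalent LOAD DATA INFILE clauses.
FT=','      # field terminator
FE='"'      # field enclosure
LT='0x0d0a' # line terminator (\r\n), in mysqlimport's hex notation
IMPORT_CMD="mysqlimport --fields-terminated-by=$FT --fields-enclosed-by=$FE --lines-terminated-by=$LT db1 t1.txt"
LOAD_SQL="LOAD DATA INFILE 't1.txt' INTO TABLE t1 FIELDS TERMINATED BY '$FT' ENCLOSED BY '$FE' LINES TERMINATED BY '\r\n';"
printf '%s\n' "$IMPORT_CMD"
printf '%s\n' "$LOAD_SQL"
```

Keeping the settings in one place makes it harder to dump with one delimiter and accidentally reload with another.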


Point-in-Time (Incremental) Recovery Using the Binary Log

Point-in-time recovery is based on these principles: the source of information for point-in-time recovery is the set of incremental backups represented by the binary log files generated subsequent to the full backup operation. Therefore, the server must be started with the --log-bin option to enable binary logging. To restore data from the binary log, you must know the name and location of the current binary log files.

Suppose that exactly at 10:00 a.m. on April 20, 2005 an SQL statement was executed that deleted a large table. To restore the table and data, you could restore the previous night's backup and then execute the following command:

shell> mysqlbinlog --stop-datetime="2005-04-20 9:59:59" \
          /var/log/mysql/bin.123456 | mysql -u root -p

If you did not detect the erroneous SQL statement until hours later, you will probably also want to recover the activity that occurred afterward. Based on this, you could run mysqlbinlog again with a start date and time, like so:

shell> mysqlbinlog --start-datetime="2005-04-20 10:01:00" \
          /var/log/mysql/bin.123456 | mysql -u root -p

Forcing InnoDB Recovery: If there is database page corruption, you may want to dump your tables from the database with SELECT ... INTO OUTFILE. Usually, most of the data obtained in this way is intact. However, it is possible that the corruption might cause SELECT * FROM tbl_name statements or InnoDB background operations to crash or assert, or even cause InnoDB roll-forward recovery to crash. In such cases, you can use the innodb_force_recovery option to force the InnoDB storage engine to start up while preventing background operations from running, so that you are able to dump your tables.
For example, you can add the following line to the [mysqld] section of your option file before restarting the server:

[mysqld]
innodb_force_recovery = 4

innodb_force_recovery is 0 by default (normal startup without forced recovery). The permissible nonzero values of innodb_force_recovery follow; a larger number includes all the precautions of smaller numbers. If you are able to dump your tables with an option value of at most 4, then you are relatively safe that only some data on corrupt individual pages is lost. A value of 6 is more drastic, because database pages are left in an obsolete state, which in turn may introduce more corruption into B-trees and other database structures.

1 (SRV_FORCE_IGNORE_CORRUPT): Let the server run even if it detects a corrupt page. Try to make SELECT * FROM tbl_name jump over corrupt index records and pages, which helps in dumping tables.

2 (SRV_FORCE_NO_BACKGROUND): Prevent the main thread from running. If a crash would occur during the purge operation, this recovery value prevents it.

3 (SRV_FORCE_NO_TRX_UNDO): Do not run transaction rollbacks after recovery.

4 (SRV_FORCE_NO_IBUF_MERGE): Prevent insert buffer merge operations. If they would cause a crash, do not do them. Do not calculate table statistics.

5 (SRV_FORCE_NO_UNDO_LOG_SCAN): Do not look at undo logs when starting the database: InnoDB treats even incomplete transactions as committed.

6 (SRV_FORCE_NO_LOG_REDO): Do not do the log roll-forward in connection with recovery.

The database must not otherwise be used with any nonzero value of innodb_force_recovery. As a safety measure, InnoDB prevents users from performing INSERT, UPDATE, or DELETE operations when innodb_force_recovery is greater than 0. You can SELECT from tables to dump them, or DROP or CREATE tables even if forced recovery is used. If you know that a given table is causing a crash on rollback, you can drop it. You can also use this to stop a runaway rollback caused by a failing mass import or ALTER TABLE.
You can kill the mysqld process and set innodb_force_recovery to 3 to bring the database up without the rollback, then DROP the table that is causing the runaway rollback.
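The option-file edit involved can be sketched as a shell session. This is a minimal sketch against a scratch copy of my.cnf, not a live server's file; the datadir path is an assumption for illustration:

```shell
set -e
CNF=$(mktemp)   # scratch stand-in for the real my.cnf
printf '[mysqld]\ndatadir=/var/lib/mysql\n' > "$CNF"
# Enable forced recovery before the diagnostic restart ...
printf 'innodb_force_recovery = 3\n' >> "$CNF"
grep 'innodb_force_recovery' "$CNF"
# ... and remove the line again once the problem table has been dropped
# and the data dumped, since the server must not run normally with a
# nonzero value (GNU sed -i assumed).
sed -i '/innodb_force_recovery/d' "$CNF"
```

With a real server, each edit would be followed by a server restart so the new value takes effect.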
