Since MySQL tables are stored as files, it is easy to do a
backup. To get a consistent backup, do a LOCK TABLES on the
relevant tables. You only need a read lock; this allows other threads
to continue to query the tables while you are making a copy of the
files in the database directory. If you want to make a SQL-level
backup, you can use SELECT INTO OUTFILE.
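For example, assuming a customer table in a database named test and a
data directory of `/usr/local/mysql/data' (all of these names and paths
are placeholders for your own), a consistent file-level copy might look
something like this, with the copy done from another shell while the
read lock is held:
mysql> LOCK TABLES customer READ;
mysql> FLUSH TABLES;
shell> cp /usr/local/mysql/data/test/customer.* /some/backup/dir  # example paths
mysql> UNLOCK TABLES;
FLUSH TABLES makes sure any changes still buffered by the server are
written to the table files before you copy them.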
Another way to back up a database is to use the mysqldump program:
-
Do a full backup of your databases:
shell> mysqldump --tab=/path/to/some/dir --opt --full
You can also simply copy all table files (`*.frm', `*.ISD' and
`*.ISM' files), as long as the server isn't updating anything.
-
Stop mysqld if it's running, then start it with the
--log-update option. You will get log files with names of the
form `hostname.n', where n is a number that is
incremented each time you execute mysqladmin refresh or
mysqladmin flush-logs, execute the FLUSH LOGS statement, or
restart the server. These log files provide you with the information
you need to replicate changes to the database that are made subsequent
to the point at which you executed mysqldump.
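Taken together, the two steps above might look something like the
following sketch. The safe_mysqld invocation and the backup directory
are only examples; adapt them to how you normally start mysqld:
shell> mysqladmin shutdown
shell> safe_mysqld --log-update &               # restart with update logging enabled
shell> mysqldump --tab=/path/to/some/dir --opt --full
shell> mysqladmin flush-logs                    # start a fresh `hostname.n' log right after the dump
After the flush, the newest update log holds the changes made after the
full backup.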
If you have to restore something, try to recover your tables using
isamchk -r first. That should work in 99.9% of all cases. If
isamchk fails, try the following procedure:
-
Restore the original mysqldump backup.
-
Execute the following command to re-run the updates in the update logs:
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
ls is used to get all the log files in the right order.
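Put together, a recovery attempt might look something like the
following sketch. The database name test, the table name customer, the
dump file name, and the paths are all placeholders, and mysqld should
not be using the tables while isamchk runs:
shell> isamchk -r /usr/local/mysql/data/test/customer.ISM
# if isamchk cannot repair the table, fall back to the backup and the update logs:
shell> mysql test < /some/backup/dir/full_backup.sql
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql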
You can also do selective backups with SELECT * INTO OUTFILE 'file_name'
FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE
... To avoid duplicate records, you need a PRIMARY KEY or a
UNIQUE key in the table. The REPLACE keyword causes old records
to be replaced with new ones when a new record duplicates an old record on
a unique key value.
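For example, for a hypothetical customer table that has a PRIMARY KEY,
a selective backup and restore could look like this (the file name is
only an example, and INTO OUTFILE will not overwrite an existing file):
# `customer' and `/tmp/customer.txt' are example names
mysql> SELECT * INTO OUTFILE '/tmp/customer.txt' FROM customer;
mysql> LOAD DATA INFILE '/tmp/customer.txt' REPLACE INTO TABLE customer;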