The mysqldump command is widely documented because it really can be very useful, and also because it offers a bewildering array of options. Its most obvious use is for dumping databases for backup or relocation purposes:
mysqldump [options] [database [tables]]
It can either dump a whole database in one go, or you can specify a list of specific tables to be dumped.
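For example, a straightforward dump of a whole database (dbuser and myDatabase being placeholder names here) is as simple as:
mysqldump -u dbuser -p myDatabase > export.sql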
Dumps are simply SQL statements which, when executed, will recreate the database (or specific tables or rows). This makes the command potentially useful for cross-platform data exports, should that ever be important. Being plain text, though, mysqldump's output benefits hugely from compression, which is easily accomplished by piping it into gzip:
mysqldump --extended-insert=FALSE --compact -c -C --single-transaction --add-drop-table --quick -u dbuser -p myDatabase myTable1 myTable2 myTable3 | gzip > export.sql.gz
The lowercase -c option creates complete INSERT statements that specify the column names explicitly; the uppercase -C is for 'compress', which compresses information sent between the mysqldump client and the database server (it reduces network traffic, and has no effect on the output, which remains uncompressed text). The other option to note is --add-drop-table, which does what it says and adds DROP TABLE statements to the SQL dump. Dumps produced by this method will therefore contain complete new database tables, wiping out any existing data on the target server where the dump is executed.
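For illustration, a dump produced this way might contain statements along the following lines (the table, columns and values are made up for the example):
DROP TABLE IF EXISTS `myTable1`;
CREATE TABLE `myTable1` (
  `tableId` int NOT NULL,
  `name` varchar(64) DEFAULT NULL,
  PRIMARY KEY (`tableId`)
);
INSERT INTO `myTable1` (`tableId`, `name`) VALUES (1,'first row');
INSERT INTO `myTable1` (`tableId`, `name`) VALUES (2,'second row');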
In some cases, you may not wish to do this - either because data on the target should be preserved, or just to avoid dumping extra information unnecessarily - in which case you may find the following command more appropriate:
mysqldump --extended-insert=FALSE --compact -c -C --single-transaction --no-create-info --where="tableId>9999" --insert-ignore --quick -u dbuser -p myDatabase myTable1 myTable2 myTable3 | gzip > export.sql.gz
The above would create a dump containing only those rows whose tableId value is greater than 9999. Note that the same WHERE clause is applied to every table being dumped, so each of the exported tables must contain a column of that name. The --insert-ignore option means that the INSERT statements in the dump are written as INSERT IGNORE: in the event that a row already exists with the same primary key, the row being inserted will simply be discarded, rather than causing a duplicate key violation which would otherwise kill the import. We've also added the --no-create-info option so that CREATE TABLE statements are not included in the dump.
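With --no-create-info and the WHERE clause in place, the output reduces to pure row data, something like this (again with made-up values):
INSERT IGNORE INTO `myTable1` (`tableId`, `name`) VALUES (10000,'a newer row');
INSERT IGNORE INTO `myTable1` (`tableId`, `name`) VALUES (10001,'another new row');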
So far so good. To import the file, you just need to stick it into mysql. If the SQL statements are in an uncompressed file, you can simply use the following:
mysql -u dbuser -p -h localhost myDatabase < dumpfile.sql
If gzipped, use this:
gunzip < export.sql.gz | mysql -u dbuser -p -h localhost myDatabase
Copy Selected Data between Mirrored Databases
mysqldump provides a quick way to copy selected items from one database to another e.g. from a live database to a development mirror. If you're just copying a few rows, you can copy and paste from a terminal on the live machine to a MySQL client on dev:
mysqldump --extended-insert=FALSE --compact --no-create-info -c -C --single-transaction --where='column=whatever' --quick -u dbuser -p myDatabase myTable1 myTable2 myTable3
Copying Database Tables from One Server to Another
If you can access both the destination and the source database servers from your current location, you can use the -h argument to specify both hostnames:
mysqldump --compact -ceC --single-transaction --quick -h source.server.com -u dbuser -p`cat ~/.mysql_pass` sourceDatabase sourceTable | mysql -h destination.server.com -u dbuser -p`cat ~/.mysql_pass` destinationDatabase
In this example, the password for both servers is the same, and is stored in the file ~/.mysql_pass on the current machine.
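Putting passwords on the command line at all is worth avoiding where you can; a tidier variant is to keep the credentials in an options file and hand it to both clients with --defaults-extra-file (which must be the first option given). The filename ~/.my_extra.cnf here is just an assumption for the sketch:
# contents of ~/.my_extra.cnf (chmod 600):
# [client]
# user=dbuser
# password=yourpassword
mysqldump --defaults-extra-file=$HOME/.my_extra.cnf --compact -ceC --single-transaction --quick -h source.server.com sourceDatabase sourceTable | mysql --defaults-extra-file=$HOME/.my_extra.cnf -h destination.server.com destinationDatabase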
Copying MySQL Databases over an SSH Connection
I got this cool tip from this chap's list of 25 SSH commands. Whenever you're copying data from one database server to another, where the servers are locked down to allow access only from localhost (by having bind-address = localhost in the [mysqld] section of your servers' my.cnf config files), you can stream it directly over SSH by using the lovely old pipe operator. Run the following on the machine where the source database is located:
mysqldump --compact -ceC --add-drop-table --single-transaction --quick -u dbuser -p myDatabase myTable1 | ssh -C user@destination "mysql -u dbuser -pPASSWORD myDatabase"
Note that we've got the MySQL password for the target server in the command - this is a potential security risk as it could be available in your history file. If this is likely to be an issue, there are ways around it, e.g. by storing the password in a file somewhere secure on the target machine and getting it from there (via cat), or by setting HISTCONTROL to ignorespace and beginning this command with a space.
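Both workarounds together look something like this, reusing the ~/.mysql_pass file from the earlier example (here assumed to live on the destination machine):
export HISTCONTROL=ignorespace
 mysqldump --compact -ceC --add-drop-table --single-transaction --quick -u dbuser -p myDatabase myTable1 | ssh -C user@destination 'mysql -u dbuser -p$(cat ~/.mysql_pass) myDatabase'
The leading space keeps the mysqldump line out of your shell history, and the single quotes ensure that $(cat ~/.mysql_pass) is evaluated on the destination machine rather than locally.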
Creating Reports
You can also use mysqldump to create simple CSV reports based on single database tables, like so:
mysqldump --compact -c -C --single-transaction --quick -u dbuser -p --where='column=whatever' --fields-terminated-by=, --fields-enclosed-by=\" --no-create-info --tab=/tmp myDatabase myTable1
Note the --fields-terminated-by option, which sets the comma as the field delimiter. The data file (named after the table, e.g. myTable1.txt) will be saved in the directory specified by the --tab option - and since it is written by the database server itself via SELECT ... INTO OUTFILE, that directory must exist and be writable on the server's host. If you want to just get a selection of fields, you might be able to use something like the following:
mysql -u dbuser -p`cat ~/.mysql_pass` --execute='SELECT field1, field2 FROM tableName' databaseName > /tmp/outfile
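The output of that command is tab-separated and includes a header row; if you need genuine CSV, one rough-and-ready variation (same placeholder names as above) is to skip the column names with -N and swap tabs for commas:
mysql -N -u dbuser -p`cat ~/.mysql_pass` --execute='SELECT field1, field2 FROM tableName' databaseName | tr '\t' ',' > /tmp/outfile.csv
This naive tr substitution assumes, of course, that none of the field values contain tabs or commas themselves.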