4.8.5 mysqldump, Dumping Table Structure and Data

Utility to dump a database or a collection of databases for backup or for transferring the data to another SQL server (not necessarily a MySQL server). The dump will contain SQL statements to create the tables and/or populate them.

If you are doing a backup on the server, you should consider using the mysqlhotcopy instead. See section 4.8.6 mysqlhotcopy, Copying MySQL Databases and Tables.

shell> mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]

If you don't name any tables, or if you use the --databases or --all-databases option, entire databases will be dumped.

You can get a list of the options your version of mysqldump supports by executing mysqldump --help.

Note that if you run mysqldump without --quick or --opt, mysqldump will load the whole result set into memory before dumping the result. This will probably be a problem if you are dumping a big database.

Note that if you are using a new copy of the mysqldump program and you are going to do a dump that will be read into a very old MySQL server, you should not use the --opt or -e options.

mysqldump supports the following options:

--add-locks
Add LOCK TABLES before and UNLOCK TABLES after each table dump. (To get faster inserts into MySQL.)
--add-drop-table
Add a DROP TABLE statement before each CREATE TABLE statement.
-A, --all-databases
Dump all the databases. This is the same as --databases with all databases selected.
-a, --all
Include all MySQL-specific create options.
--allow-keywords
Allow creation of column names that are keywords. This works by prefixing each column name with the table name.
-c, --complete-insert
Use complete insert statements (with column names).
-C, --compress
Compress all information between the client and the server if both support compression.
-B, --databases
To dump several databases. Note the difference in usage. In this case no tables are given. All name arguments are regarded as database names. USE db_name; will be included in the output before each new database.
--delayed
Insert rows with the INSERT DELAYED command.
-e, --extended-insert
Use the new multiline INSERT syntax. (Gives more compact and faster insert statements.)
-#, --debug[=option_string]
Trace usage of the program (for debugging).
--help
Display a help message and exit.
--fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=..., --lines-terminated-by=...
These options are used with the -T option and have the same meaning as the corresponding clauses for LOAD DATA INFILE. See section 6.4.9 LOAD DATA INFILE Syntax.
-F, --flush-logs
Flush the log files on the MySQL server before starting the dump.
-f, --force
Continue even if an SQL error occurs during a table dump.
-h, --host=..
Dump data from the MySQL server on the named host. The default host is localhost.
-l, --lock-tables
Lock all tables before starting the dump. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables. Please note that when dumping multiple databases, --lock-tables will lock tables for each database separately. So using this option will not guarantee your tables will be logically consistent between databases. Tables in different databases may be dumped in completely different states.
-K, --disable-keys
/*!40000 ALTER TABLE tb_name DISABLE KEYS */; and /*!40000 ALTER TABLE tb_name ENABLE KEYS */; will be put in the output. This will make loading the data into a MySQL 4.0 server faster as the indexes are created after all data are inserted.
-n, --no-create-db
CREATE DATABASE /*!32312 IF NOT EXISTS*/ db_name; will not be put in the output. Otherwise that line is added whenever the --databases or --all-databases option is given.
-t, --no-create-info
Don't write table creation information (the CREATE TABLE statement).
-d, --no-data
Don't write any row information for the table. This is very useful if you just want to get a dump of the structure for a table!
--opt
Same as --quick --add-drop-table --add-locks --extended-insert --lock-tables. Should give you the fastest possible dump for reading into a MySQL server.
-pyour_pass, --password[=your_pass]
The password to use when connecting to the server. If you specify no `=your_pass' part, you will be prompted for a password.
-P, --port=...
Port number to use for TCP/IP connections.
--protocol=(TCP | SOCKET | PIPE | MEMORY)
Specifies the connection protocol to use. New in MySQL 4.1.
-q, --quick
Don't buffer queries; dump directly to stdout. Uses mysql_use_result() to do this.
-Q, --quote-names
Quote table and column names within backquote (`) characters.
-r, --result-file=...
Direct output to a given file. This option should be used on MS-DOS, because it prevents the newline `\n' from being converted to `\r\n' (carriage return + newline).
--single-transaction
This option issues a BEGIN SQL command before dumping data from the server. It is mostly useful with InnoDB tables and the READ_COMMITTED transaction isolation level, as in this mode it dumps the consistent state of the database at the time when BEGIN was issued, without blocking any applications. When using this option you should keep in mind that only transactional tables will be dumped in a consistent state; e.g., any MyISAM or HEAP tables dumped while using this option may still change state. The --single-transaction option was added in version 4.0.2. This option is mutually exclusive with the --lock-tables option, as LOCK TABLES already commits a previous transaction internally.
-S /path/to/socket, --socket=/path/to/socket
The socket file to use when connecting to localhost (which is the default host).
--tables
Overrides option --databases (-B).
-T, --tab=path-to-some-directory
Creates a table_name.sql file, containing the SQL CREATE commands, and a table_name.txt file, containing the data, for each given table. The format of the `.txt' file is made according to the --fields-xxx and --lines-xxx options. Note: This option only works if mysqldump is run on the same machine as the mysqld daemon, and the user/group that mysqld is running as (normally user mysql, group mysql) needs permission to create/write a file at the location you specify.
-u user_name, --user=user_name
The MySQL user name to use when connecting to the server. The default value is your Unix login name.
-O var=option, --set-variable var=option
Set the value of a variable. The possible variables are listed below. Please note that --set-variable is deprecated since MySQL 4.0, just use --var=option on its own.
-v, --verbose
Verbose mode. Print out more information on what the program does.
-V, --version
Print version information and exit.
-w, --where='where-condition'
Dump only selected records. Note that quotes are mandatory:
"--where=user='jimf'" "-wuserid>1" "-wuserid<1"
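The recurring point of confusion is which quotes the shell consumes and which reach mysqldump. A minimal sketch, using printf only to display the argument exactly as the shell would pass it; no server is contacted and mysqldump is never run:

```shell
# printf '%s\n' prints its argument exactly as the shell expanded it,
# i.e. exactly what mysqldump would receive as a single argument.
arg1="--where=user='jimf'"     # double quotes outside, single quotes inside
arg2='--where=userid>1'        # single quotes keep > from acting as redirection
printf '%s\n' "$arg1"          # prints: --where=user='jimf'
printf '%s\n' "$arg2"          # prints: --where=userid>1
```

The whole option, including the embedded condition, must survive shell expansion as one argument; that is why the outer quotes are mandatory.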
-X, --xml
Dumps a database as well-formed XML.
-x, --first-slave
Locks all tables across all databases.
--master-data
Like --first-slave, but also prints some CHANGE MASTER TO commands which will later make your slave start from the right position in the master's binlogs, if you have set up your slave using this SQL dump of the master.
-O net_buffer_length=#, where # < 16M
When creating multi-row INSERT statements (as with option --extended-insert or --opt), mysqldump will create statements up to net_buffer_length bytes long. If you increase this variable, you should also ensure that the max_allowed_packet variable on the MySQL server is bigger than net_buffer_length.
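A quick arithmetic sanity check of that constraint; the two values below are hypothetical settings chosen for illustration, not server defaults:

```shell
# Hypothetical values in bytes: a client-side net_buffer_length of 1M
# and a server-side max_allowed_packet of 16M.
net_buffer_length=$(( 1024 * 1024 ))
max_allowed_packet=$(( 16 * 1024 * 1024 ))

# mysqldump builds each multi-row INSERT up to net_buffer_length bytes,
# so the server must accept packets at least that large.
if [ "$max_allowed_packet" -gt "$net_buffer_length" ]; then
  echo "OK: max_allowed_packet exceeds net_buffer_length"
fi
```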

The most normal use of mysqldump is probably for making a backup of whole databases. See section 4.4.1 Database Backups.

mysqldump --opt database > backup-file.sql

You can read this back into MySQL with:

mysql database < backup-file.sql

or:

mysql -e "source /path-to-backup/backup-file.sql" database

However, it's also very useful to populate another MySQL server with information from a database:

mysqldump --opt database | mysql --host=remote-host -C database

It is possible to dump several databases with one command:

mysqldump --databases database1 [database2 ...] > my_databases.sql

If all the databases are wanted, one can use:

mysqldump --all-databases > all_databases.sql

User Comments

Posted by Jeff Armstrong on Thursday June 27 2002, @8:14am

On my system - mysqldump Ver 8.16 Distrib
3.23.46, for pc-linux-gnu (i686)

mysqldump -u root -p -A

only dumps the FIRST database, if the user
specified is not the same as the current Unix user.

In the above example, you must actually be
logged in as root to really get all the
subsequent databases.

Posted by Karl Katzke on Friday May 17 2002, @6:24am

Note that if you include the -F option (Flush
tables) with the -A option (All databases), and
you are using logging, your logs will be cycled
each time mysqldump starts another database. So
if you've got six databases, your logs will be
flushed six times in rapid succession as
mysqldump reads from each one. Not sure if it's a
bug or a feature. (MySQL 3.23.46)

Posted by Jon Benson on Friday May 17 2002, @6:24am

If you'd like a quick example of how to
dump to a local machine (with the mysqldump
binary installed locally) from a remote server, this is how
I did it (Windows machine):

DATABASE [table] > c:\localpath.sql

Posted by Dave Bateman on Monday July 29 2002, @4:21pm

How do I get mysqldump to write the file with
the owner set to my username, not as 'nobody'?

Posted by Mike Looijmans on Friday May 17 2002, @6:24am

Workaround for the -F -A problem is to use 'mysql
-u root -e "FLUSH LOGS"' before running mysqldump,
but this makes the flush and backup less 'atomic'.
Not a problem on our system, since no one will be
running updates at 2:00am

Posted by [name withheld] on Friday May 17 2002, @6:24am

In older versions, mysqldump dumped table structure
and data but nothing about indexes.
Recent versions dump index information too.

Posted by [name withheld] on Friday July 12 2002, @1:51am

Sorry, but the documentation and examples here are
very sparse. This is not very good. There are a lot of
error messages to read through before you get a dump, and if
you are in the situation of doing something more dedicated
(e.g. dumping only some tables), there is no
more help here than what I get from running the dump on
my shell.
So is there any need for this page?

Posted by [name withheld] on Friday May 17 2002, @6:24am

Okay, here is my situation: I'm using phpMyAdmin and
the MySQL server is hosted by Ausgamers. I
need to dump everything in my database; can
someone please tell me how to do it?

I want to save it to my web space or my HDD.

Posted by Anders Kronquist on Monday July 1 2002, @3:00am

To restore a complete backup of all your
databases, use "mysql
< /path/to/all_databases_backup.sql"

Posted by Ulf Kner on Friday May 17 2002, @6:24am

mysqldump is a bit tricky to use, yet not a bad
tool for backups. Especially the pipe right into
another server's DB is a nice thing. I wonder if
it would be possible to add a REPLACE syntax,
instead of only INSERT. That would make
incremental backups right into another DB much
easier (a new timestamp triggers a backup of an
updated row). Now one has to dump a CSV file on
one server, transfer it to another, and then use
LOAD DATA INFILE REPLACE INTO TABLE ...
OK, one could also use the master/slave variant;
the problem with that is, I keep hearing it is not
as easy to use and understand as I'd like. ;-)

Posted by [name withheld] on Friday May 17 2002, @6:24am

How can I use mysqldump.exe to dump a database with table
create options when there are fields with a %
character? And how do I import that with mysql.exe?
Removing % is not an option, and I don't want to
manually add ` characters to the dumped file. I'm
using InnoDB.

Posted by [name withheld] on Friday May 17 2002, @6:24am

How can I use mysqldump to select only some data?
mysqldump -u root -p DB_Name Table_Name --
where='where Name='SIM'' doesn't work.
How can I use a quote inside the quoted where clause?

Posted by inhamuee on Friday May 17 2002, @6:24am

mysql database < backup-file.sql

What is the syntax of this command when multiple
databases are backed up into one file with
mysqldump --all-databases > bkp.sql?
How are the databases retrieved?

Posted by [name withheld] on Saturday August 10 2002, @11:56am

If you want to compress your backups directly, without
even hitting your HD, you can try this command:
#mysqldump --opt -u user --password="password"
database | bzip2 -c > database.sql.bz2

Posted by [name withheld] on Thursday August 15 2002, @8:56pm

hi,i use mysqldump --opt DB1 -p >DB1.sql,then i use
the command "mysqldump --delay DB2 -p
<DB1.sql",but i found no data imported at DB2.

any ideas?

Posted by [name withheld] on Sunday October 27 2002, @1:33pm

I'm using the following script as daily cron, it works.

date=`date -I`
mysqldump --opt --all-databases | bzip2 -c
> /var/backup/databasebackup-$date.sql.bz2

Posted by Eduard Gailis on Wednesday January 22 2003, @5:39am

mysqldump and MERGE tables:

There is no need to dump the data, but the "CREATE TABLE ..." for the MERGE table needs to be generated at the end of the SQL script, after all the merged tables.

Also, BACKUP TABLE doesn't work with MERGE tables.

Posted by [name withheld] on Sunday January 26 2003, @4:42am

To dump specific rows from a table:

C:\>mysqldump "--where=cat='1'" -t database table > dump.sql.txt

Posted by Kamal Siddiqi on Thursday February 20 2003, @12:11am


Please tell me the exact syntax of the command to execute at a telnet prompt to load a ".sql" file into a DB on Linux.

DB Name: testdb
User: testuser
Pass: testpass


Posted by P J on Friday February 21 2003, @4:17pm

To dump only select records into a file based on a timestamp field you can use this (last_modified is the timestamp field). This is used in a shell script to be used as a cron to take records that are more than a month old and dump them into an archive file (then the dumped records are deleted).
/yourpath/mysqldump "--where=(month(last_modified)+year(last_modified)*12 <
month(current_date)+(year(current_date)*12)-1)" database table > archive.sql
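The WHERE clause above linearizes (year, month) into a single month counter so dates can be compared with plain arithmetic. A quick check of the idea in shell arithmetic, with made-up dates:

```shell
# year*12 + month maps each (year, month) pair to one increasing month
# counter, so "row_counter < current_counter - 1" keeps only rows more
# than one whole calendar month old. The dates below are made up.
row=$(( 2003*12 + 1 ))    # row last modified January 2003 -> 24037
cur=$(( 2003*12 + 3 ))    # current date March 2003        -> 24039
if [ "$row" -lt $(( cur - 1 )) ]; then
  echo "row is old enough to archive"
fi
```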

Posted by [name withheld] on Saturday February 22 2003, @6:21pm

I'm looking for an example that would give me a CSV-type file using mysqldump. I do not understand the difference between --option and -option, nor can I find any information on the format of an option file or any 'groups' clauses inside an option file. Not that I don't enjoy spending an afternoon trying to figure out cryptic examples...
And yes, I can cron a job that dumps all records created between two dates, but I need CSV output. That is why I need to use mysqldump; I cannot access mysql via a shell.

Posted by [name withheld] on Thursday March 6 2003, @2:47pm

does anyone know why I would be getting the following error:

virga 15> mysqldump weather > weather.sql
mysqldump: option `--databases' doesn't allow an argument

no matter what I do with mysqldump I get that same error. I checked "alias" and it's not aliased to anything. Does anyone know where else this error might be coming from?


Posted by Dragos Moinescu on Wednesday March 12 2003, @8:04am

To dump a big table (called tblname) from a database (called dbname) with a where condition (the condition without the WORD "WHERE"), do the following:
mysqldump --user=username --password=password --where='where condition without WHERE' dbname tblname | gzip -c > tblname.sql.gz
This will dump and compress it using gzip.
If you wish not to add the CREATE TABLE stuff, just add --no-create-info before --where.

Posted by Steve T. on Tuesday March 18 2003, @5:22am

The manual states, quote:
-w, --where='where-condition'
Dump only selected records. Note that quotes are mandatory

It is unclear where the quotes should go, so:

This is NOT correct:

This IS the CORRECT usage:

Posted by [name withheld] on Tuesday March 18 2003, @10:18am

We are using InnoDB tables and restoring from a mysqldump is problematic because the dump creates tables with foreign key definitions. There should be an option to create all foreign keys after the inserts.

Posted by george on Wednesday April 2 2003, @3:04am

For big dumps, -q (i.e. no output buffering) looks like a good idea.

A "default" dump of a 4GB table quickly ate up 1GB of real memory, and started grabbing swap but with nothing to show on disk!

Retrying with -q, the memory usage was trivial and at least I could watch the progress of the dump via the size of the dump file. It certainly felt a lot more comfortable!

Posted by Daniel D'Cotta on Wednesday April 16 2003, @3:37am

How do I use the --where option with multiple conditions?

e.g. --where=isPartner='0'&&isMaster='0' (is this valid?)

Also, where do I place the double quotes?

Posted by Daniel D'Cotta on Sunday April 20 2003, @10:05pm

Well, after a lot of trial and error, the following works for me.


My next question is:

How do I insert only rows based on conditions of other tables?

Posted by Chung Lau on Monday April 21 2003, @11:05am

in response to:
>Posted by [name withheld] on Tuesday March 18 2003, @10:18am
>We are using InnoDB tables and restoring from a mysqldump
>is problematic because the dump creates tables with
>foreign key definitions. There should be an option to
>create all foreign keys after the inserts.

You can turn off foreign key checks while you're importing InnoDB databases:

SET FOREIGN_KEY_CHECKS=0;
source database.sql;
SET FOREIGN_KEY_CHECKS=1;

Posted by Jesus Cruz on Saturday April 26 2003, @7:58am

When using mysqldump with the --tab option, make sure you are executing the command as the mysql user; if you execute it as root you are not going to be able to have this option work.

Posted by [name withheld] on Tuesday April 29 2003, @1:21pm

If you're trying to dump a single table and mysqldump interprets the table name as a database name, try using the --tables option:

mysqldump -p -T mySubdir --tables myDB myTable

Posted by Matt Rosin on Friday May 9 2003, @7:39am

Two weird things came up, but I worked around them.

1.) Some of these things may not work depending on the version of the database you are accessing.
I got an SQL syntax error when adding the --lock-tables switch while dumping from a remote 3.22.22 server (the client was 3.23.56). I was hoping the C-style comments would magically not lock tables, but I guess they are also not recognized by this version. The error was:

mysqldump: Got error: 1064: You have an error in your SQL syntax near '32311 LOCAL */,Test READ /*!32311 LOCAL */,cart READ /*!32311 LOCAL */,cookie RE' at line 1 when using LOCK TABLES

My command was:
mysqldump -C -v -u myusername -pmypassword -h thedbhost thedatabase --add-drop-table --add-locks --quick --lock-tables > ./dl.tmp

2.) Since the mysql client does not return an error code it is not easy to see if it failed (e.g. when running from a perl script). I decided to only copy dumps which were greater than 0 bytes since nothing is saved when you get an Access Denied message. How strange then when I intentionally misspell the database name to see that the dump does fail, but only after writing a few lines stating the (wrong) name of the database. This I feel is a bug. Not a big one but just remember, wherever you go, there you are. (--Buckaroo Banzai)

Posted by [name withheld] on Monday May 12 2003, @6:52am

Instead of using the dump feature, I made a copy of one of my databases by simply copying the folder it was in under c:\mysql and renaming it to the new database name. This seemed to work (all structures and data came across). However, it seemed too easy to do a backup like this...

Posted by [name withheld] on Tuesday June 10 2003, @11:40am

In response to:

> Posted by [name withheld] on Thursday August 15 2002, @8:56pm
> hi,i use mysqldump --opt DB1 -p >DB1.sql,then i use
> the command "mysqldump --delay DB2 -p
> <DB1.sql",but i found no data imported at DB2.

Instead, use "mysql -u root -p DB2 < DB1.sql" to import the data.
