I have a database containing a very large table including binary data which I want to update on a remote machine, once a day. Rather than dumping the entire table, transferring and recreating it on the remote machine, I want to dump only the updates, then use that dump to update the remote machine.
I already understand that I can produce the dump file like this:

mysqldump -u user -ppass --quick --result-file=dump_file \
    --where="Updated >= one_day_ago" \
    database_name table_name
1) Does the resulting "restore" on the remote machine
mysql -u user -ppass database_name < dump_file
only update the necessary rows? Or are there other adverse effects?
2) Is this the best way to do this? (I am unable to pipe to the server directly and using
Best How To :
If you dump only the rows for which the WHERE clause is true, the resulting .sql file contains just the values you want to update, so the current export options will never produce duplicates within the dump itself. However, importing that file into the remote database will not work as-is. You will have to use the command-line parameter --replace: otherwise, if your dump contains a row with id 6 in table1 and the remote table already has a row with id = 6, the import fails with a duplicate-key error. With the --replace parameter, mysqldump writes REPLACE INTO statements instead of INSERT INTO, so existing rows are overwritten with the newer values (newer as selected by your WHERE clause on Updated).
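A minimal sketch of the dump side, assuming the table is named table1 and has an Updated timestamp column plus a primary key (the table name and column are assumptions based on the question):

```shell
# Condition selecting only rows changed in the last day
# (assumes an Updated DATETIME/TIMESTAMP column on table1).
WHERE_COND="Updated >= NOW() - INTERVAL 1 DAY"

# --replace emits REPLACE INTO instead of INSERT INTO, so importing
# the file overwrites any remote row with the same primary key
# instead of failing with a duplicate-key error.
mysqldump -u user -ppass --quick --replace \
    --where="$WHERE_COND" \
    database_name table1 > dump_file.sql || true  # sketch only; needs a live MySQL server
```

The trailing `|| true` is only there because this is a sketch; drop it in a real cron job so failures are reported.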
So to quickly answer:
- Yes, this will restore correctly on the remote machine, but only if you dumped with --replace; the import then leaves the remote rows at their latest values rather than failing on duplicates.
- I am not entirely sure whether you can pipe backups directly. According to this website you can, but I have never tried it myself.
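If piping does turn out to be possible in your setup, the same incremental dump can be streamed straight into the remote server with no intermediate file. A hedged sketch, where remote.example.com is a placeholder hostname and table1/Updated are assumptions carried over from above:

```shell
# Placeholder for the remote MySQL host (an assumption, not from the question).
REMOTE_HOST="remote.example.com"

# Pipe the incremental dump directly into the remote server:
# mysqldump writes REPLACE INTO statements, mysql applies them.
{ mysqldump -u user -ppass --quick --replace \
      --where="Updated >= NOW() - INTERVAL 1 DAY" \
      database_name table1 \
    | mysql -h "$REMOTE_HOST" -u user -ppass database_name; } || true  # sketch only; needs live servers on both ends
```

Note that a pipe means the dump is never stored locally, so you lose the ability to re-run the import if the transfer fails partway; writing to a file first is safer if the link is unreliable.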