I have a database on a server with 120 tables.
I want to clone the whole database under a new database name, with all of the data copied over.
Is there an efficient way to do this?
$ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
$ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql
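Note that the target database has to exist before the import. Assuming the same credentials, it can be created first, for example with mysqladmin:
$ mysqladmin -u user -ppassword create yourSecondDatabase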
mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>
Like the accepted answer, but without intermediate .sql files:
mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>
In case you use phpMyAdmin:
1. Select the database you wish to copy (by clicking on it from the phpMyAdmin home screen).
2. Once inside the database, select the Operations tab.
3. Scroll down to the section that says "Copy database to:".
4. Type in the name of the new database.
5. Select "Structure and data" to copy everything, or "Structure only" if you want the columns but not the data.
6. Check the box "CREATE DATABASE before copying" to create the new database.
7. Check the box "Add AUTO_INCREMENT value."
8. Click the Go button to proceed.
There is the mysqldbcopy tool from the MySQL Utilities package:
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html
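A minimal invocation looks roughly like this (a sketch assuming a local server and root credentials; see the linked manual for the exact option syntax):
mysqldbcopy --source=root:password@localhost --destination=root:password@localhost yourFirstDatabase:yourSecondDatabase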
If you want to make sure it is an exact clone, the receiving database needs to be entirely cleared or dropped first. That way the new database contains only the tables in your import file and nothing else. Otherwise, the receiving database could retain tables that weren't specified in your import file.
Example, building on the prior answers:
DB1 == tableA, tableB
DB2 == tableB, tableC
DB1 imported into -> DB2
DB2 == tableA, tableB, tableC  // a true clone should not contain tableC
The fix is easy with --databases and --add-drop-database (see the MySQL docs). These options add a DROP DATABASE statement to the dump, so your new database will be an exact replica:
$ mysqldump -h $ip -u $user -p$pass --databases $dbname --add-drop-database > $file.sql
$ mysql -h $ip $dbname -u $user -p$pass < $file.sql
Of course, replace the $ variables and, as always, leave no space between -p and the password. For extra security, strip the -p$pass from your command.
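With -p given no value, the client prompts for the password interactively instead of exposing it in the process list and shell history:
$ mysqldump -h $ip -u $user -p --databases $dbname --add-drop-database > $file.sql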
// Assumed connection details, shown only for illustration; substitute your own credentials.
$mysqli = new mysqli('localhost', 'user', 'password');

// Name the copy after the previous year and recreate it from scratch.
$newdb = (date('Y') - 1);
$mysqli->query("DROP DATABASE IF EXISTS `".$newdb."`;");
$mysqli->query("CREATE DATABASE `".$newdb."`;");

// List every table in the source database (here called `rds`).
$query = "
    SELECT TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'rds'
";
$result = $mysqli->query($query)->fetch_all(MYSQLI_ASSOC);

// Recreate each table's structure with CREATE TABLE ... LIKE, then copy the rows.
foreach ($result as $val) {
    echo $val['TABLE_NAME'].PHP_EOL;
    $mysqli->query("CREATE TABLE `".$newdb."`.`".$val['TABLE_NAME']."` LIKE rds.`".$val['TABLE_NAME']."`");
    $mysqli->query("INSERT INTO `".$newdb."`.`".$val['TABLE_NAME']."` SELECT * FROM rds.`".$val['TABLE_NAME']."`");
}
Related
There are a lot of articles about how to mysqldump the last 'n' rows from a single table in a database, for example mysqldump --user=superman --password=batman --host=gothamcity.rds.com --where="1=1 ORDER BY id DESC LIMIT 10" DB_NAME TABLE_NAME > ./path/to/dump/file.sql, as found in answers on Stack Overflow and Server Fault.
But how do I tell mysqldump to export the last 'n' rows for EVERY table in a database?
Here is what I did in the terminal. The idea is to get a list of all table names and pipe that list into a while loop in bash, where each table is dumped individually into a separate dump file named after the table.
mysql --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --database=jokersDB --execute="show tables" --silent --batch | while read tablename ; do mysqldump --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --where="1=1 ORDER BY id DESC LIMIT 10" jokersDB $tablename --add-drop-table > $tablename.sql ; done
It worked. The only issue is that it dumped each table into its own individual SQL file rather than a single file for all tables. But the contents of those individual files could be joined into a single file with some other bash commands, as shown below.
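For instance, once the loop has finished, the per-table files can be concatenated into one dump (writing the result outside the current directory so it is not picked up by the *.sql glob on a later run):
cat *.sql > ../jokersDB_combined.sql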
An addition to the solution from #Syed Rakid Al Hasan:
If we change the part of the command that writes the mysqldump output from this:
> $tablename.sql
to this:
>> jokersDB.sql
we can dump the last 10 rows of every table into a single file.
Full command:
mysql --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --database=jokersDB --execute="show tables" --silent --batch | while read tablename ; do mysqldump --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --where="1=1 ORDER BY id DESC LIMIT 10" jokersDB $tablename --add-drop-table >> jokersDB.sql ; done
Make sure jokersDB.sql is empty before running the loop, since each iteration appends to it (the >> redirection creates the file if it does not already exist).
You can use the --where flag with multiple tables as long as it makes sense syntactically for each of the tables. So if all of your tables have a surrogate PK column called id, then you don't need to name any tables at all. Just dump with the --all-databases flag (or name the database you want) and your --where flag with the ORDER BY/LIMIT specified.
mysqldump --user=superman --password=batman --host=gothamcity.rds.com --where="1=1 ORDER BY id DESC LIMIT 10" --databases DB_NAME > /path/to/dump/file.sql
I have a table with three columns (id, name, age). I would like to keep the name and id the same, but remove all age data, and be able to reassign ages.
I.e., I want to clear the data from one column only, not delete the entire column.
I am using sinatra, datamapper and postgresql.
Programmatically you could do something like this:
# Load every record and set its age attribute to nil (DataMapper).
@myvariable = MyModel.all
@myvariable.each do |m|
  m.update(:age => nil)
end
More info on Datamapper can be found here: http://datamapper.org/docs/
Or, if you want to do it by hand and your database is on Heroku, you can connect to the database instance like so:
heroku pg:psql
Then you'll be connected to the database and you can type out the SQL as Gus suggested:
update table_name set age = NULL
If your app is not on Heroku and you're just connecting to a local postgresql instance, it would be like so:
psql -d your_database -U your_user
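The same statement can also be run non-interactively with psql's -c flag (assuming the same database and user):
psql -d your_database -U your_user -c "UPDATE table_name SET age = NULL"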
More info on psql can be found here: http://www.postgresql.org/docs/9.2/static/app-psql.html
I have done some digging around and I cannot find a way to make mysqldump create a file per table. I have about 100 tables (and growing) that I would like to be dumped into separate files without having to write a new mysqldump line for each table I have.
E.g. instead of my_huge_database_file.sql which contains all the tables for my DB.
I'd like mytable1.sql, mytable2.sql etc etc
Does mysqldump have a parameter for this or can it be done with a batch file? If so how.
It is for backup purposes.
I think I may have found a workaround: a small PHP script that fetches the names of my tables and runs mysqldump using exec().
$result = $dbh->query("SHOW TABLES FROM mydb");
while ($row = $result->fetch()) {
    // Dump each table into its own file, named after the table.
    exec('c:\Xit\xampp\mysql\bin\mysqldump.exe -uroot -ppw mydb '.$row[0].' > c:\dump\\'.$row[0].'.sql');
}
In my batch file I then simply do:
php mybackupscript.php
Instead of the SHOW TABLES command, you could query the INFORMATION_SCHEMA database. This way you can easily dump every table for every database and also know how many tables there are in a given database (e.g. for logging purposes). In my backup I use the following query:
SELECT DISTINCT CONVERT(t.`TABLE_SCHEMA` USING UTF8) AS dbName
     , CONVERT(t.`TABLE_NAME` USING UTF8) AS tblName
     , (SELECT COUNT(*)
          FROM `INFORMATION_SCHEMA`.`TABLES` t2
         WHERE t2.`TABLE_SCHEMA` = t.`TABLE_SCHEMA`) AS tblCount
  FROM `INFORMATION_SCHEMA`.`TABLES` t
 WHERE t.`TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
 ORDER BY dbName ASC
     , tblName ASC;
You could also add a condition to the WHERE clause, such as TABLE_TYPE != 'VIEW', to make sure that views do not get dumped.
I can't test this because I don't have a Windows MySQL installation, but this should point you in the right direction:
@echo off
mysql -u user -pyourpassword database -e "show tables" > tables_file
for /f "skip=1" %%T in (tables_file) do (mysqldump -u user -pyourpassword database %%T > %%T.sql)
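On Linux or macOS, a rough shell equivalent would be the following sketch, assuming the same credentials and database name:
mysql -u user -pyourpassword -N -B -e "SHOW TABLES" database | while read -r table; do
  mysqldump -u user -pyourpassword database "$table" > "$table.sql"
done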
In the command line, this will successfully update table1:
pt-table-sync --execute h=host1,D=db1,t=table1 h=host2,D=db2
However if I want to update more than one table, I'm not sure how to write it. This only updates table1 as well and ignores the other tables:
pt-table-sync --execute h=host1,D=db1,t=table1,table2,table3 h=host2,D=db2
And this gives me an error:
pt-table-sync --execute h=host1,D=db1 --tables table1,table2,table3 h=host2,D=db2
Does anyone have an example of how to list the '--tables'... so that it successfully updates all the tables in the list?
The --tables option seems to be incompatible with the DSN notation; you get this error:
You specified a database but not a table in h=localhost,D=test.
Are you trying to sync only tables in the 'test' database?
If so, use '--databases test' instead.
As suggested in that error message, you can use --databases and then you can use --tables successfully.
For example, I created tables test.foo and test.bar, filled each with three rows, then deleted the rows from test.bar on the second server dewey.
I ran this:
$ pt-table-sync h=huey h=dewey --databases test --tables foo,bar --execute --verbose
# Syncing h=dewey
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
# 0 0 3 0 Chunk 15:26:15 15:26:15 2 test.bar
# 0 0 0 0 Chunk 15:26:15 15:26:15 0 test.foo
It successfully re-inserted the 3 missing rows in test.bar.
Other tables in my test database were ignored.
This is an old question, but I searched everywhere for an answer. pt-table-sync only does one table at a time, and there is no tool that does the same thing for a list of tables or a full database schema. Specifically, I want to run a Live server and be able to sync back to a Staging server, then edit code and files on the Staging server without fear of messing up Live or being overwritten by Live... and I want it to be free :)
I ended up writing a shell script called mysql_sync_live_to_stage.sh as follows:
#!/bin/bash
# sync db live to staging
error_log_file='./mysql_sync_errors.log'
echo $(date +"%Y %m %d %H:%M") > $error_log_file
function sync_table()
{
    pt-table-sync --no-foreign-key-checks --execute \
        h=DB_1_HOST,u=DB_1_USER,p=DB_1_PASSWORD,D=$1,t=$3 \
        h=DB_2_HOST,u=DB_2_USER,p=DB_2_PASSWORD,D=$2,t=$3 >> $error_log_file
}
# SYNC ALL TABLES IN name_of_live_database
mysql -h "DB_1_HOST" -u "DB_1_USER" -pDB_1_PASSWORD -D "DB_1_DBNAME" -e "SHOW TABLES" |
egrep -i '[0-9a-z\-\_]+' | egrep -i -v 'Tables_in' | while read -r table ; do
echo "Processing $table"
sync_table "name_of_live_database" "name_of_staging_database" $table
done
# FIX Config Settings For Staging
echo "Cleanup Queries..."
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar'
WHERE config_id='foo'"
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar2'
WHERE config_id='foo2'"
echo "Done"
This reads a list of table names from the live site then executes a sync on each one via the do loop. It goes through the list alphabetically, so I recommend keeping the --no-foreign-key-checks flag.
It's not perfect... it won't sync tables that don't exist in both databases, but when combined with a "git pull -f origin master" I get a complete sync in a couple of minutes.
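If the script is saved as mysql_sync_live_to_stage.sh, it can be made executable and run directly (assuming pt-table-sync and the mysql client are on the PATH):
chmod +x mysql_sync_live_to_stage.sh
./mysql_sync_live_to_stage.sh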
I want to do a mysqldump with the table definitions and the table data every day. For this I configured a cron job with this command: "mysqldump -u user -pxxxxx site_DB | gzip > backup/site/site_t_$(date | awk {'print $1""$2""$3"_"$4'}).sql.gz", but it only exports the table definitions. What is the correct command to export the data as well? Thanks
By default mysqldump exports the data too; you have to use the --no-data flag to make it export structure only. Since yours is exporting structure only without that flag, "no-data" must be set in a MySQL options file, which you can find by following these directions.
I was having the same issue. Re-run the dump command with '--databases' as follows:
mysqldump -u user -pxxxxx --databases site_DB . . . .
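Plugged back into the original cron command, that would look roughly like this (a sketch that keeps the gzip pipeline but uses a simpler date format for the filename):
mysqldump -u user -pxxxxx --databases site_DB | gzip > backup/site/site_t_$(date +%Y%m%d_%H%M).sql.gz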