I want to do a mysqldump with the table definitions and table data every day. For this I configured a cron job with this command: "mysqldump -u user -pxxxxx site_DB | gzip > backup/site/site_t_$(date | awk {'print $1""$2""$3"_"$4'}).sql.gz" but this only exports the table definitions. What is the correct command to export the data as well? Thanks
By default mysqldump exports the data too - you have to use the --no-data flag to make it export only the structure. Since yours is skipping the data, that suggests no-data is set in one of your MySQL options files, which you can find by following these directions.
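To see which option files are read and which defaults they set, you can run something like this (a quick diagnostic; --print-defaults is a standard option of the MySQL client programs, and the grep pattern matches the usual help text listing the option file locations):
mysqldump --print-defaults
mysql --help | grep -A 1 "Default options"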
I was having the same issue. Re-run the dump command with '--databases' as follows:
mysqldump -u user -pxxxxx --databases site_DB . . . .
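Put together with the original cron job, the full command might look like this (a sketch; date +%F_%H%M is a more robust way to build the timestamp than parsing the output of date with awk):
mysqldump -u user -pxxxxx --databases site_DB | gzip > backup/site/site_t_$(date +%F_%H%M).sql.gz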
I have made a script which connects to a database and runs a query.
I want to handle the exception if the database connection fails.
Below is my sample code:
#!/bin/sh
export ORACLE_HOME=/opt/cia/oracle-client/product/11.2.0/client_1
export NLS_LANG=AMERICAN_AMERICA.UTF8
export PATH=$ORACLE_HOME/bin:$PATH
export HOME_DIR=/home/aytripat
export OBJECT_TYPE=$HOME_DIR/object_type.txt
export OBJECT_LIST=$HOME_DIR/object_list.txt
$ORACLE_HOME/bin/sqlplus -s username/password@dwebpre1 <<EOF
set feedback off heading off pages 4000
spool ${OBJECT_TYPE}
select distinct decode(object_type, 'PACKAGE', 'PACKAGE_SPEC', 'DATABASE LINK', 'DB_LINK', 'JOB', 'PROCOBJ', replace(object_type, ' ', '_')) from user_objects where object_type not in ('TABLE PARTITION', 'LOB') order by 1;
spool off
exit
EOF
Now if sqlplus -s username/password@dwebpre1 does not manage to connect to the database, I want the script to report the failure and stop right there.
Please help.
Check the $? return value of your connection command.
Based on its value, handle the "exception" cases.
@Yushi/@deimus
What he meant is to add an "if" to check the return code. For example:
if [ $? -eq 0 ]; then
    echo "success return"
else
    echo "error"
fi
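Applied to the script in the question, it might look like the sketch below (the connect string is taken from the question, and select 1 from dual is a placeholder query). The WHENEVER SQLERROR / WHENEVER OSERROR directives make sqlplus exit with a non-zero code when a statement fails, and sqlplus also normally exits non-zero after a failed logon, so checking $? right after the heredoc catches both cases:
$ORACLE_HOME/bin/sqlplus -s username/password@dwebpre1 <<EOF
whenever sqlerror exit sql.sqlcode
whenever oserror exit failure
set feedback off heading off pages 4000
spool ${OBJECT_TYPE}
select 1 from dual;
spool off
exit
EOF
if [ $? -ne 0 ]; then
    echo "Database connection or query failed, aborting" >&2
    exit 1
fi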
How do I import and export a schema from Cassandra, either from the command line or from the cqlsh prompt?
To export keyspace schema:
cqlsh -e "DESC KEYSPACE user" > user_schema.cql
To export entire database schema:
cqlsh -e "DESC SCHEMA" > db_schema.cql
To import a schema, open a terminal in the directory containing user_schema.cql (or db_schema.cql; you can also specify the full path) and open the cqlsh shell. Then use the following command to import the keyspace schema:
source 'user_schema.cql'
To import full database schema:
source 'db_schema.cql'
Everything straight from the command line. No need to go into cqlsh.
Import schema (.cql file):
$ cqlsh -e "SOURCE '/path/to/schema.cql'"
Export keyspace:
$ cqlsh -e "DESCRIBE KEYSPACE somekeyspace" > /path/to/somekeyspace.cql
Export database schema:
$ cqlsh -e "DESCRIBE SCHEMA" > /path/to/schema.cql
If using cassandra-cli, you can use the 'show schema;' command to dump the whole schema. You can restrict to a specific keyspace by running 'use keyspace;' first.
You can store the output in a file, then import with 'cassandra-cli -f filename'.
If using cqlsh, you can use the 'describe schema' command. You can restrict to a keyspace with 'describe keyspace keyspace'.
You can save this to a file then import with 'cqlsh -f filename'.
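For example, with cqlsh the round trip might look like this (a sketch; mykeyspace and the file name are placeholders):
cqlsh -e "DESCRIBE KEYSPACE mykeyspace" > mykeyspace_schema.cql
cqlsh -f mykeyspace_schema.cql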
For someone who comes across this in the future: to get the DDL for just the schema/keyspace "myschema" on the server "CassandraHost":
echo -e "use myschema;\nDESCRIBE KEYSPACE;\n" | cqlsh CassandraHost > mySchema.cdl
and you can use the following to import just the DDL (without data):
cqlsh CassandraNEWhost -f mySchema.cdl
With authentication
cqlsh -u <user-name> -e "DESC KEYSPACE user" > user_schema.cql
You will be prompted for the password.
I have done some digging around and I cannot find a way to make mysqldump create a file per table. I have about 100 tables (and growing) that I would like to be dumped into separate files, without having to write a new mysqldump line for each table I have.
E.g. instead of my_huge_database_file.sql which contains all the tables for my DB.
I'd like mytable1.sql, mytable2.sql etc etc
Does mysqldump have a parameter for this, or can it be done with a batch file? If so, how?
It is for backup purposes.
I think I may have found a workaround: make a small PHP script that fetches the names of my tables and runs mysqldump using exec().
$result = $dbh->query("SHOW TABLES FROM mydb");
while ($row = $result->fetch()) {
    // Pass the table name to mysqldump so each table lands in its own file
    exec('c:\Xit\xampp\mysql\bin\mysqldump.exe -uroot -ppw mydb '.$row[0].' > c:\dump\\'.$row[0].'.sql');
}
In my batch file I then simply do:
php mybackupscript.php
Instead of the SHOW TABLES command, you could query the INFORMATION_SCHEMA database. This way you can easily dump every table of every database, and you also know how many tables there are in a given database (e.g. for logging purposes). In my backup, I use the following query:
SELECT DISTINCT CONVERT(`t`.`TABLE_SCHEMA` USING UTF8) AS `dbName`
    , CONVERT(`t`.`TABLE_NAME` USING UTF8) AS `tblName`
    , (SELECT COUNT(*)
        FROM `INFORMATION_SCHEMA`.`TABLES` `t2`
        WHERE `t2`.`TABLE_SCHEMA` = `t`.`TABLE_SCHEMA`) AS `tblCount`
FROM `INFORMATION_SCHEMA`.`TABLES` `t`
WHERE `t`.`TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
ORDER BY `dbName` ASC
    , `tblName` ASC;
You could also add a condition to the WHERE clause, such as TABLE_TYPE != 'VIEW', to make sure that views do not get dumped.
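For instance, the extra condition might look like this (a sketch; 'BASE TABLE' is the TABLE_TYPE value that INFORMATION_SCHEMA reports for ordinary tables):
WHERE `t`.`TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
    AND `t`.`TABLE_TYPE` = 'BASE TABLE'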
I can't test this because I don't have a Windows MySQL installation, but this should point you in the right direction:
@echo off
mysql -u user -pyourpassword database -e "show tables;" > tables_file
REM Output redirected to a file is one table name per line after a single header row, hence skip=1
for /f "skip=1" %%T in (tables_file) do (mysqldump -u user -pyourpassword database %%T > %%T.sql)
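On Linux/macOS, an equivalent loop in bash might look like this (a sketch; -N suppresses the header row, and the credentials, database name, and paths are placeholders):
#!/bin/bash
# Dump every table of "database" into its own .sql file
for table in $(mysql -u user -pyourpassword -N -e "show tables;" database); do
    mysqldump -u user -pyourpassword database "$table" > "$table.sql"
done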
In the command line, this will successfully update table1:
pt-table-sync --execute h=host1,D=db1,t=table1 h=host2,D=db2
However, if I want to update more than one table, I'm not sure how to write it. The following also only updates table1 and ignores the other tables:
pt-table-sync --execute h=host1,D=db1,t=table1,table2,table3 h=host2,D=db2
And this gives me an error:
pt-table-sync --execute h=host1,D=db1 --tables table1,table2,table3 h=host2,D=db2
Does anyone have an example of how to list the '--tables'... so that it successfully updates all the tables in the list?
The --tables option seems to be incompatible with the DSN notation; if you combine them, you get this error:
You specified a database but not a table in h=localhost,D=test.
Are you trying to sync only tables in the 'test' database?
If so, use '--databases test' instead.
As suggested in that error message, you can use --databases and then you can use --tables successfully.
For example, I created tables test.foo and test.bar, filled each with three rows, then deleted the rows from test.bar on the second server dewey.
I ran this:
$ pt-table-sync h=huey h=dewey --databases test --tables foo,bar --execute --verbose
# Syncing h=dewey
# DELETE REPLACE INSERT UPDATE ALGORITHM START END EXIT DATABASE.TABLE
# 0 0 3 0 Chunk 15:26:15 15:26:15 2 test.bar
# 0 0 0 0 Chunk 15:26:15 15:26:15 0 test.foo
It successfully re-inserted the 3 missing rows in test.bar.
Other tables in my test database were ignored.
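And if you want to sync every table in the database rather than an explicit list, you can drop --tables entirely, as the error message hints (a sketch using the same hosts as above):
pt-table-sync h=huey h=dewey --databases test --execute --verbose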
This is an old question, but I searched everywhere for an answer: pt-table-sync only does one table at a time. There is no tool that does the same thing for a list of tables or a full database schema. Specifically, I want to run a Live server and be able to sync back to a Staging server, then edit code and files on the Staging server without fear of messing up Live or being overwritten by Live... and I want it to be free :)
I ended up writing a shell script called mysql_sync_live_to_stage.sh as follows:
#!/bin/bash
# sync db live to staging
error_log_file='./mysql_sync_errors.log'
echo $(date +"%Y %m %d %H:%M") > $error_log_file
function sync_table()
{
    pt-table-sync --no-foreign-key-checks --execute \
        h=DB_1_HOST,u=DB_1_USER,p=DB_1_PASSWORD,D=$1,t=$3 \
        h=DB_2_HOST,u=DB_2_USER,p=DB_2_PASSWORD,D=$2,t=$3 >> $error_log_file
}
# SYNC ALL TABLES IN name_of_live_database
mysql -h "DB_1_HOST" -u "DB_1_USER" -pDB_1_PASSWORD -D "DB_1_DBNAME" -e "SHOW TABLES" |
egrep -i '[0-9a-z\-\_]+' | egrep -i -v 'Tables_in' | while read -r table ; do
echo "Processing $table"
sync_table "name_of_live_database" "name_of_staging_database" $table
done
# FIX Config Settings For Staging
echo "Cleanup Queries..."
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar'
WHERE config_id='foo'"
mysql -h "DB_2_HOST" -u "DB_2_USER" -pDB_2_PASSWORD -D "DB_2_DBNAME"
-e "UPDATE name_of_staging_database.nameofmyconfigtable SET value='bar2'
WHERE config_id='foo2'"
echo "Done"
This reads a list of table names from the live site then executes a sync on each one via the do loop. It goes through the list alphabetically, so I recommend keeping the --no-foreign-key-checks flag.
It's not perfect... it won't sync tables that don't exist in both databases, but when combined with a "git pull -f origin master" I get a complete sync in a couple of minutes.
I have a database on a server with 120 tables.
I want to clone the whole database with a new db name and the copied data.
Is there an efficient way to do this?
$ mysqldump yourFirstDatabase -u user -ppassword > yourDatabase.sql
$ mysql yourSecondDatabase -u user -ppassword < yourDatabase.sql
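Note that yourSecondDatabase has to exist before the import; if it doesn't yet, something like this should create it first (a sketch using mysqladmin):
$ mysqladmin -u user -ppassword create yourSecondDatabase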
mysqldump -u <user> --password=<password> <DATABASE_NAME> | mysql -u <user> --password=<password> -h <hostname> <DATABASE_NAME_NEW>
Like the accepted answer, but without .sql files:
mysqldump sourcedb -u <USERNAME> -p<PASS> | mysql destdb -u <USERNAME> -p<PASS>
In case you use phpMyAdmin
Select the database you wish to copy (by clicking on the database from the phpMyAdmin home screen).
Once inside the database, select the Operations tab.
Scroll down to the section where it says "Copy database to:"
Type in the name of the new database.
Select "structure and data" to copy everything. Alternately, you can select "Structure only" if you want the columns but not the data.
Check the box "CREATE DATABASE before copying" to create a new database.
Check the box "Add AUTO_INCREMENT value."
Click on the Go button to proceed.
There is the mysqldbcopy tool from the MySQL Utilities package.
http://dev.mysql.com/doc/mysql-utilities/1.3/en/mysqldbcopy.html
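Usage is a one-liner; a sketch based on the mysqldbcopy docs (treat the exact option syntax as an assumption, and user, pass, and the database names as placeholders):
mysqldbcopy --source=user:pass@localhost --destination=user:pass@localhost yourFirstDatabase:yourSecondDatabase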
If you want to make sure it is an exact clone, the receiving database needs to be entirely cleared / dropped. This way, the new db only has the tables in your import file and nothing else. Otherwise, your receiving database could retain tables that weren't specified in your import file.
Example, using the commands from prior answers:
DB1 == tableA, tableB
DB2 == tableB, tableC
DB1 imported to -> DB2
DB2 == tableA, tableB, tableC //true clone should not contain tableC
The fix is easy with --databases and --add-drop-database (see the mysql docs). This adds the drop statement to the dump, so your new database will be an exact replica:
$ mysqldump -h $ip -u $user -p$pass --databases $dbname --add-drop-database > $file.sql
$ mysql -h $ip $dbname -u $user -p$pass < $file.sql
Of course, replace the $ variables and, as always, put no space between -p and the password. For extra security, strip -p$pass from your command so that mysql prompts you for the password.
// Clone the "rds" database into a new database named after last year, e.g. "2023"
// (the connection parameters are placeholders)
$mysqli = new mysqli('localhost', 'user', 'password');

$newdb = (date('Y')-1);
$mysqli->query("DROP DATABASE IF EXISTS `".$newdb."`;");
$mysqli->query("CREATE DATABASE `".$newdb."`;");

// List all tables of the source database
$query = "
SELECT
    TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA LIKE 'rds'
";
$result = $mysqli->query($query)->fetch_all(MYSQLI_ASSOC);

foreach ($result as $val) {
    echo $val['TABLE_NAME'].PHP_EOL;
    // Recreate each table's structure, then copy its rows
    $mysqli->query("CREATE TABLE `".$newdb."`.`".$val['TABLE_NAME']."` LIKE rds.`".$val['TABLE_NAME']."`");
    $mysqli->query("INSERT `".$newdb."`.`".$val['TABLE_NAME']."` SELECT * FROM rds.`".$val['TABLE_NAME']."`");
}