mysqldump table per *.sql file batch script

I have done some digging around and I cannot find a way to make mysqldump create a file per table. I have about 100 tables (and growing) that I would like dumped into separate files, without having to write a new mysqldump line for each table.
E.g. instead of one my_huge_database_file.sql containing all the tables in my DB, I'd like mytable1.sql, mytable2.sql, etc.
Does mysqldump have a parameter for this, or can it be done with a batch file? If so, how?
It is for backup purposes.
I think I may have found a workaround, and that is to make a small PHP script that fetches the names of my tables and runs mysqldump using exec().
$result = $dbh->query("SHOW TABLES FROM mydb");
while ($row = $result->fetch()) {
    // pass the table name to mysqldump and name the dump file after the table
    exec('c:\Xit\xampp\mysql\bin\mysqldump.exe -uroot -ppw mydb ' . $row[0] . ' > c:\dump\\' . $row[0] . '.sql');
}
In my batch file I then simply do:
php mybackupscript.php

Instead of the SHOW TABLES command, you could query the INFORMATION_SCHEMA database. This way you can easily dump every table for every database, and you also know how many tables there are in a given database (e.g. for logging purposes). In my backup I use the following query:
SELECT DISTINCT CONVERT(t.`TABLE_SCHEMA` USING UTF8) AS dbName
     , CONVERT(t.`TABLE_NAME` USING UTF8) AS tblName
     , (SELECT COUNT(*)
          FROM `INFORMATION_SCHEMA`.`TABLES` c
         WHERE c.`TABLE_SCHEMA` = t.`TABLE_SCHEMA`) AS tblCount
  FROM `INFORMATION_SCHEMA`.`TABLES` t
 WHERE t.`TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
 ORDER BY dbName ASC
     , tblName ASC;
You could also add a condition such as TABLE_TYPE != 'VIEW' to the WHERE clause, to make sure that views do not get dumped.
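If you are on a Unix-like host, a minimal shell sketch of feeding that query's output into mysqldump could look like the following (the credentials, the tables.sql file holding the query above, and the /backup target directory are placeholders, not part of the original answer):
#!/bin/bash
# Sketch: enumerate schema/table pairs with the INFORMATION_SCHEMA query
# (saved as tables.sql) and dump each table into its own file.
mysql -uroot -ppw --batch --skip-column-names < tables.sql |
while read dbName tblName tblCount ; do
    mysqldump -uroot -ppw "$dbName" "$tblName" > "/backup/${dbName}.${tblName}.sql"
done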

I can't test this, because I don't have a Windows MySQL installation, but this should point you in the right direction:
@echo off
mysql -u user -pyourpassword database -e "show tables;" > tables_file
for /f "skip=1" %%T in (tables_file) do (mysqldump -u user -pyourpassword database %%T > %%T.sql)

Related

Trying to Export Tables to CSVs from SQL Server

I ran the following script to try to get all tables in my DB exported (trying to back up the data in CSVs).
SELECT 'sqlcmd -S . -d '+DB_NAME()+' -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM '+TABLE_SCHEMA+'.'+TABLE_NAME+'" > "C:\Temp\'+TABLE_NAME+'.csv"'
FROM [INFORMATION_SCHEMA].[TABLES]
I saved the results as a batch file and ran the batch file as Administrator.
That runs without an error, but I get no data exported. All it does is create blank CSV files.
I also ran EXEC sp_configure 'remote access', 1 followed by RECONFIGURE.
Still, nothing is exported. CSVs are created, but no data is exported...
Any thoughts?
I ended up using R to do the task...
library("RODBC")
conn <- odbcDriverConnect('driver={SQL Server};server=Server_Name;DB_Name;trusted_connection=true')
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#1")
write.csv(data,file=paste("C:/Users/TBL#1.csv",sep=""),row.names=FALSE)
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#2")
write.csv(data,file=paste("C:/Users/TBL#2.csv",sep=""),row.names=FALSE)
Gotta love the IT teams in corporate America...especially when they lock down your system so tight, you need to come up with all kinds of weird hacks just so you can do the job that you were hired to do...
Is there a word for negative synergy?

Issues using "-f" flag in CQLSH to run a query.cql file

I'm using cqlsh to add data to Cassandra with a BATCH query. I can load the data with a query using the "-e" flag, but not from a file using the "-f" flag. I think that's because the file is local and Cassandra is remote. Details below:
This is a sample of my query (there are more rows to insert, obviously):
BEGIN BATCH;
INSERT INTO keyspace.table (id, field1) VALUES ('1','value1');
INSERT INTO keyspace.table (id, field1) VALUES ('2','value2');
APPLY BATCH;
If I enter the query via the "-e" flag then it works no problem:
>cqlsh -e "BEGIN BATCH; INSERT INTO keyspace.table (id, field1) VALUES ('1','value1'); INSERT INTO keyspace.table (id, field1) VALUES ('2','value2'); APPLY BATCH;" -u username -p password -k keyspace 99.99.99.99
But if I save the query to a text file (query.cql) and call as below, I get the following output:
>cqlsh -f query.cql -u username -p password -k keyspace 99.99.99.99
Using 3 child processes
Starting copy of keyspace.table with columns ['id', 'field1'].
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.076 seconds (0 skipped).
Cassandra obviously accepts the command but doesn't read the file. I'm guessing that's because Cassandra is located on a remote server and the file is located locally. The Cassandra instance I'm using is a managed service shared with other users, so I don't have access to it to copy files into folders.
How do I run this query on a remote instance of Cassandra where I only have CLI access?
I want to be able to use another tool to build the query.cql file and have a batch job run the command with the "-f" flag, but I can't work out where I'm going wrong.
You're executing a local cqlsh client, so it should be able to access your local query.cql file.
Try removing the BEGIN BATCH and APPLY BATCH lines, leave just the two INSERT statements in query.cql, and retry.
Another way to insert data quickly is to provide a CSV file and use the COPY command inside cqlsh. Read this blog post: http://www.datastax.com/dev/blog/new-features-in-cqlsh-copy
Scripting inserts by generating one cqlsh -e '...' call per row is feasible, but it will be horribly slow.
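For reference, a minimal sketch of that COPY approach (the data.csv file name is a placeholder, and the column list assumes the schema from the question). COPY is executed by cqlsh itself, so the CSV only has to exist on the machine where you run the command:
cqlsh -u username -p password -k keyspace 99.99.99.99 -e "COPY keyspace.table (id, field1) FROM 'data.csv' WITH HEADER = true;"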

how to mysqldump last 10 rows for EVERY TABLE in a database?

There are a lot of articles about how to mysqldump the last 'n' rows from a table in a database, for example mysqldump --user=superman --password=batman --host=gothamcity.rds.com --where="1=1 ORDER BY id DESC LIMIT 10" DB_NAME TABLE_NAME > ./path/to/dump/file.sql, as found in answers on StackOverflow and ServerFault.
But how do I tell mysqldump to export the last 'n' rows for EVERY TABLE in a database?
Here is what I did in the terminal. The idea is to get a list of all table names, and then pipe that list into a while loop in bash where each of those tables is dumped individually into a separate dump file (named after the table).
mysql --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --database=jokersDB --execute="show tables" --silent --batch | while read tablename ; do mysqldump --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --where="1=1 ORDER BY id DESC LIMIT 10" jokersDB $tablename --add-drop-table > $tablename.sql ; done
It worked. The only issue is that it dumped each table into its own individual SQL file rather than dumping all tables into a single file. But I guess the contents of those individual files could also be joined together into a single file via some other bash command.
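For instance, a hedged one-liner (assuming the per-table dumps are the only .sql files in the current directory; the combined file is written one level up so it is not picked up by the glob on a later run):
cat *.sql > ../jokersDB_all_tables.sql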
An addition to the solution from #Syed Rakid Al Hasan:
if we change the output redirection part of the mysqldump command from
> $tablename.sql
to
>> jokersDB.sql
we can dump the last 10 rows of every table in the database into a single file.
Full command:
mysql --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --database=jokersDB --execute="show tables" --silent --batch | while read tablename ; do mysqldump --user=superman --password=batman --host=gothamcity.rds.com --port=3306 --where="1=1 ORDER BY id DESC LIMIT 10" jokersDB $tablename --add-drop-table >> jokersDB.sql ; done
The jokersDB.sql file should be empty before the run; because >> appends, anything already in it is kept and the new dumps are added after it.
You can use the --where flag with multiple tables as long as it makes sense syntactically for each of the tables. So if all of your tables have a surrogate PK column called id, then you don't need to name any tables at all. Just dump with the --all-databases flag (or name the database you want) and your --where flag with the ORDER BY/LIMIT specified.
mysqldump --user=superman --password=batman --host=gothamcity.rds.com --where="1=1 ORDER BY id DESC LIMIT 10" --databases DB_NAME > /path/to/dump/file.sql

Extract one by one data from Database through Shell Script

I have to code in Korn shell. I have to take data from one database, create INSERT INTO statements in a .sql file, and then run that .sql file against another database.
There are 24 columns in the table, and I'm not able to extract the data from that table one column at a time in order to build the INSERT INTO statements.
Can anyone help me with this?
This is what I have written so far (just a sample, with data from two columns):
$ cat analysis.sh
#!/bin/ksh
function sqlQuery {
ied sqlplus -s / << 'EOF'
DEFINE DELIMITER='${TAB_SPACE}'
set heading OFF termout ON trimout ON feedback OFF
set pagesize 0
SELECT ID, H00
FROM SW_ABC
WHERE ID=361140;
EOF
}
eval x=(`sqlQuery`)
ID=${x[0]}
HOUR=${x[1]}
echo ID is $ID
echo HOUR is $HOUR
But the eval here is not working.
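As a hedged sketch: in ksh an array can be populated directly with set -A, so the eval line in analysis.sh could be replaced along these lines (assuming sqlQuery prints a single row whose two values are separated by whitespace):
# replaces: eval x=(`sqlQuery`)
set -A x $(sqlQuery)    # split the query output on whitespace into array x
ID=${x[0]}
HOUR=${x[1]}
echo "ID is $ID"
echo "HOUR is $HOUR"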

Oracle external tables - Specifying dynamic filename

CREATE TABLE LOG_FILES (
  LOG_DTM VARCHAR(18),
  LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY LOG_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (
      LOG_DTM position(1:18),
      LOG_TXT position(19:300)
    )
  )
  LOCATION ('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an Oracle directory object that points to /u/logs/.
The problem, though, is that the contents of /u/logs/ look like this:
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? I.e. every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY.log).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log'), but I would prefer not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.
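For the 10g route, a minimal sketch of such a scheduled job (the /u/logs path, the script name, and the DDMMYYYY date format are assumptions based on the filenames shown above):
#!/bin/sh
# Cron sketch: copy today's log over the generic name the external table reads,
# e.g. run daily just after midnight: 5 0 * * * /u/logs/sync_logadm.sh
cd /u/logs || exit 1
cp "logadm_$(date +%d%m%Y).log" logadm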
