mysqldump table names prefix

I have two MySQL databases with almost the same structure, representing the data of the same web app, but one of them holds the current version and the other was made a long time ago.
How can I create a database containing both dumps, but with an old_ prefix for tables from the first and a new_ prefix for tables from the second database?
Is there a mysqldump option to set the prefix, or some other solution?

A "mysqldump file" is just a text file full of SQL statements, so you can make quick modifications like these in a text editor.
1) Dump the two databases individually.
2) Edit the "old" dump file:
add the correct use mydatabase; line
do a search and replace to add old_ in front of the table names.
3) Then, cat dump1 dump2 > combined_dump
4) mysql < combined_dump
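A minimal sketch of those four steps from the shell (the database names old_db, new_db, and combined_db, and the credentials, are assumptions; adjust to your setup):
# 1) dump the two databases individually
mysqldump -u user -p old_db > old_dump.sql
mysqldump -u user -p new_db > new_dump.sql
# 2) edit old_dump.sql in a text editor:
#    - add a "USE combined_db;" line at the top
#    - search and replace the table names to add the old_ prefix
#    (the sed script in the next answer automates this replace more safely)
# 3) concatenate and 4) load (combined_db must already exist)
cat old_dump.sql new_dump.sql > combined_dump.sql
mysql -u user -p < combined_dump.sql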

This sed script is perhaps a little safer. Save it to a file and use sed -f to filter the dump file.
s/\(-- Table structure for table `\)\([^`]\+\)\(`\)/\1xyzzy_\2\3/
s/\(DROP TABLE IF EXISTS `\)\([^`]\+\)\(`\)/\1xyzzy_\2\3/
s/\(CREATE TABLE `\)\([^`]\+\)\(` (\)/\1xyzzy_\2\3/
s/\(-- Dumping data for table `\)\([^`]\+\)\(`\)/\1xyzzy_\2\3/
s/\(\/\*!40000 ALTER TABLE `\)\([^`]\+\)\(` DISABLE KEYS \*\/\)/\1xyzzy_\2\3/
s/\(LOCK TABLES `\)\([^`]\+\)\(` WRITE\)/\1xyzzy_\2\3/
s/\(INSERT INTO `\)\([^`]\+\)\(` VALUES (\)/\1xyzzy_\2\3/
s/\(\/\*!40000 ALTER TABLE `\)\([^`]\+\)\(` ENABLE KEYS \*\/\)/\1xyzzy_\2\3/
Search and replace xyzzy_ with your desired table prefix.
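For example, assuming the script above is saved as add_prefix.sed (the filename is only an example):
sed -f add_prefix.sed old_dump.sql > old_dump_prefixed.sql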

Restore both databases as they are.
Use the following stored procedure to move all the tables from one DB to another DB, adding the prefix as it goes.
After moving, delete the source database.
This stored procedure gets the table list from MySQL's in-memory information_schema tables and moves each table to the other DB using the RENAME TABLE command.
DELIMITER $$
USE `db`$$
DROP PROCEDURE IF EXISTS `renameDbTables`$$
CREATE DEFINER=`db`@`%` PROCEDURE `renameDbTables`(
IN from_db VARCHAR(20),
IN to_db VARCHAR(30),
IN to_name_prefix VARCHAR(20)
)
BEGIN
/*
call db.renameDbTables('db1','db2','db_');
db1.xxx will be renamed to db2.db_xxx
*/
DECLARE from_state_table VARCHAR(20) DEFAULT '';
DECLARE done INT DEFAULT 0;
DECLARE b VARCHAR(255) DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT TABLE_NAME FROM information_schema.TABLES
WHERE TABLE_SCHEMA=from_db;
DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1;
OPEN cur1;
REPEAT
FETCH cur1 INTO from_state_table;
IF NOT done THEN
-- select from_state_table;
SET @QUERY = '';
SET @QUERY = CONCAT(@QUERY,'RENAME TABLE ',from_db,'.', from_state_table,' TO ',to_db,'.', to_name_prefix, from_state_table,';');
-- SELECT @query;
PREPARE s FROM @QUERY;
EXECUTE s;
DEALLOCATE PREPARE s;
END IF;
UNTIL done END REPEAT;
CLOSE cur1;
END$$
DELIMITER ;
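A sketch of the whole flow, assuming the old dump was restored into a database named db1, the target database is db2, and the procedure was created in a database called db (the same example names used above):
-- restore both dumps as-is first, e.g. from the shell: mysql db1 < old_dump.sql
CALL db.renameDbTables('db1', 'db2', 'old_');  -- db1.xxx becomes db2.old_xxx
DROP DATABASE db1;                             -- the source database is now empty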

Import them into different databases. Say they're called newdb and olddb. Then you can copy table1 to old_table1 like:
insert into newdb.old_table1
select *
from olddb.table1
If you have a huge number of tables, generate a script to copy them:
select concat('insert into newdb.old_', table_name,
' select * from olddb.', table_name, ';')
from information_schema.tables
where table_schema = 'olddb'
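One way to generate and run those statements in a single shot from the shell (a sketch; it assumes credentials come from ~/.my.cnf and that the old_* tables already exist in newdb):
mysql -N -B -e "select concat('insert into newdb.old_', table_name, ' select * from olddb.', table_name, ';') from information_schema.tables where table_schema = 'olddb'" | mysql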

I have done the following using mysqldump and sed in the past, but I'll admit it may only be effective for one table at a time.
$ mysqldump -u user --password=mypass MyDB MyTable | sed 's/MyTable/old_MyTable/g' | mysql -u other_user -p NewDB
You could create a shell script with a list of the commands, one for each table, or perhaps another user has a way to modify this to work against multiple tables effectively in one shot.
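A hedged sketch of such a shell script, looping over a list of tables (the table list, credentials, and the old_ prefix are assumptions):
#!/bin/sh
# copy each table from MyDB into NewDB, renaming it with an old_ prefix
for t in table1 table2 table3; do
    mysqldump -u user --password=mypass MyDB "$t" \
      | sed "s/\`$t\`/\`old_$t\`/g" \
      | mysql -u other_user -p NewDB
done
# note: the sed also renames any column that happens to share the table's name,
# so review the output on anything but simple schemas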

I may be misunderstanding the problem, but it sounds like you want to dump the 2 databases into a single SQL file to be used to restore the dbs, with the old tables going into one schema and the new tables going into another.
If that's what you are trying to do, the simplest approach is just to insert the proper "use database" command before each dump.
Like so:
echo "use old_db;" > /tmp/combined_dump.sql
mysqldump old_db >> /tmp/combined_dump.sql
echo "use new_db;" >> /tmp/combined_dump.sql
mysqldump new_db >> /tmp/combined_dump.sql
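Then restoring is a single command (a sketch; old_db and new_db must already exist on the target server, and credentials are omitted):
mysql < /tmp/combined_dump.sql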

Run the following query:
SELECT Concat('ALTER TABLE ', TABLE_NAME, ' RENAME TO my_prefix_', TABLE_NAME, ';') FROM information_schema.tables WHERE table_schema = 'my_database'
The output of which is several queries. Then run those queries.
This won't work if there's constraints, or other complicated things, but for simple DBs this works fine.
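One way to generate and execute those ALTER statements in one shot from the shell (a sketch, assuming a local server and the my_database / my_prefix_ names above):
mysql -N -B -e "SELECT Concat('ALTER TABLE ', TABLE_NAME, ' RENAME TO my_prefix_', TABLE_NAME, ';') FROM information_schema.tables WHERE table_schema = 'my_database'" | mysql my_database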

Related

Create table across all schemas at once

Is it possible to create a table across every schema in your database?
Specifically in Oracle.
I do not want to run the exact same query for all existing schemas.
The solution I have been using is:
Use the query below to get all schema names:
select distinct owner FROM all_tables;
Take the result and use a regular expression to prepend/append your table-creation text to each owner:
^ - create table
$ - .tablename ( column1 varchar2(10) );
Then run all the resulting queries in an Oracle worksheet, for example as sketched below.
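A small sketch of that substitution with sed, assuming the owner list was saved to owners.txt (the filename and the sample column are illustrative only):
sed 's/^/create table /; s/$/.tablename ( column1 varchar2(10) );/' owners.txt > create_tables.sql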
You can use a bit of PL/SQL and execute immediate to do this.
For example, you can create a table in all schemas if you connect as the SYS user and execute the following script:
begin
for cUsers in (select * from dba_users where account_status ='OPEN')
loop
execute immediate 'create table '||cUsers.username||'.myTable ( id number )';
end loop;
end;

Failing to understand how to select all rows based on table- and database-name

WHAT I EXPECT:
I want to create a Job in my SQL Server Agent that allows me to fire off a stored procedure to clean up a particular table. The stored procedure would take two parameters: TableName and Days.
TableName would be the name of the table I'm looking for and Days would be how far back I wish to delete records.
WHAT I'VE DONE:
After having looked around online I've found sources on how to see if a User Database holds the supplied TableName:
SELECT *
FROM INFORMATION_SCHEMA.Tables
WHERE TABLE_NAME = @TableName
This results in a few rows looking a bit like this:
TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE
Database_A    | table_schema | table_A    | table_type
WHAT I DON'T UNDERSTAND:
How can I use the resulting rows of the previous query to find all rows of the supplied @TableName in a particular database? In pseudocode:
SELECT * FROM table_A WHERE database = database_A
I know I need to use a cursor somehow, that's not the problem.
What I'm simply struggling to understand is how I can use the database name and the table name to find the rows of the table in a particular database.
In my case I've got 10 or so databases that need to be iterated through to find the initial dataset (all user databases where @TableName exists) and then a secondary query to find all rows of the @TableName in the database that the cursor currently is pointing at.
You have to do select * from <database>..table_A,
but you can't parameterize the database and table names like that in plain T-SQL. You would have to build the statement as a string and execute it dynamically, as sketched below.
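A minimal sketch of that dynamic-SQL approach, assuming the database and table names come from the INFORMATION_SCHEMA query above (@DatabaseName and @TableName are illustrative variable names, not a full solution):
DECLARE @DatabaseName sysname = 'Database_A';
DECLARE @TableName    sysname = 'table_A';
DECLARE @sql          nvarchar(max);
-- build "SELECT * FROM Database_A..table_A" and run it
SET @sql = N'SELECT * FROM ' + QUOTENAME(@DatabaseName) + N'..' + QUOTENAME(@TableName) + N';';
EXEC sp_executesql @sql;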

Run same script on under different schema - postgres

I have a script called populate.sql which contains CREATE TABLE statements.
CREATE TABLE "EXAMPLE" (
.................
..............
);
CREATE TABLE "BlaBla" (
..........
........
);
CREATE TABLE ...
This script creates more than 20 tables. I want to run this populate.sql on top of different schemas. Let's say I want to run this script on schema1, schema2 and schema3.
Then I can write:
CREATE SCHEMA IF NOT EXISTS "schema1";
SET SCHEMA 'schema1';
on populate.sql and create those tables on one schema.
How can I create those tables in all schemas with one psql command?
As far as I can tell, I have to do a FOR loop in psql: create each schema first, and then create the tables in that schema.
Tables will get created in the currently set search_path (if not otherwise specifically set in the create statement).
You could use a loop. In that loop you have to set the search_path to your schema.
DO
$$
DECLARE schemaname text;
BEGIN
FOR i IN 1..3 LOOP
schemaname := 'schema' || i::text;
execute 'CREATE SCHEMA ' || schemaname;
execute 'SET SCHEMA ' || quote_literal(schemaname); -- SET SCHEMA expects a string literal
execute 'SET search_path TO ' || schemaname;
-- content of populate.sql
END LOOP;
END
$$;
You cannot call external scripts inside this do block as mentioned by a_horse_with_no_name in the comments. Therefore this answer is only relevant if you want to extend your populate.sql file and wrap this do block around it.
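If you would rather keep populate.sql as a separate file, a shell loop around psql is one workaround (a sketch; the database name, schema names, and connection options are assumptions):
#!/bin/sh
# run populate.sql once per schema by overriding search_path for each invocation
for s in schema1 schema2 schema3; do
    psql -d mydb -c "CREATE SCHEMA IF NOT EXISTS \"$s\";"
    PGOPTIONS="-c search_path=$s" psql -d mydb -f populate.sql
done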

Linked Server Insert-Select Performance

Assume that I have a table on my local which is Local_Table and I have another server and another db and table, which is Remote_Table (table structures are the same).
Local_Table has data, Remote_Table doesn't. I want to transfer data from Local_Table to Remote_Table with this query:
Insert into RemoteServer.RemoteDb..Remote_Table
select * from Local_Table (nolock)
But the performance is quite slow.
However, when I use SQL Server import-export wizard, transfer is really fast.
What am I doing wrong? Why is it fast with Import-Export wizard and slow with insert-select statement? Any ideas?
The fastest way is to pull the data rather than push it. When the tables are pushed, every row requires a connection, an insert, and a disconnect.
If you can't pull the data, because you have a one way trust relationship between the servers, the work around is to construct the entire table as a giant T-SQL statement and run it all at once.
DECLARE @xml XML
SET @xml = (
SELECT 'insert Remote_Table values (' + '''' + isnull(first_col, 'NULL') + ''',' +
-- repeat for each col
'''' + isnull(last_col, 'NULL') + '''' + ');'
FROM Local_Table
FOR XML path('')
) --This concatenates all the rows into a single xml object; the empty path keeps it from having <colname> </colname> wrapped around each value
DECLARE @sql AS VARCHAR(max)
SET @sql = 'set nocount on;' + cast(@xml AS VARCHAR(max)) + 'set nocount off;' --Converts XML back to a long string
EXEC ('use RemoteDb;' + @sql) AT RemoteServer
It seems like it's much faster to pull data from a linked server than to push data to a linked server: Which one is more efficient: select from linked server or insert into linked server?
Update: My own, recent experience confirms this. Pull if possible -- it will be much, much faster.
Try this on the other server:
INSERT INTO Local_Table
SELECT * FROM RemoteServer.RemoteDb..Remote_Table
The Import/Export wizard will essentially be doing this as a bulk insert, whereas your code is not.
Assuming that you have a Clustered Index on the remote table, make sure that you have the same Clustered index on the local table, set Trace flag 610 globally on your remote server and make sure remote is in Simple or bulk logged recovery mode.
If your remote table is a heap (which will speed things up anyway), make sure your remote database is in simple or bulk-logged recovery mode and change your code to read as follows:
INSERT INTO RemoteServer.RemoteDb..Remote_Table WITH(TABLOCK)
SELECT * FROM Local_Table WITH (nolock)
The reason it's so slow to insert into the remote table from the local table is that it inserts a row, checks that it inserted, then inserts the next row, checks that it inserted, and so on.
Don't know if you figured this out or not, but here's how I solved this problem using linked servers.
First, I have a LocalDB.dbo.Table with several columns:
IDColumn (int, PK, Auto Increment)
TextColumn (varchar(30))
IntColumn (int)
And I have a RemoteDB.dbo.Table that is almost the same:
IDColumn (int)
TextColumn (varchar(30))
IntColumn (int)
The main difference is that the remote IDColumn isn't set up as an identity column, so that I can do inserts into it.
Then I set up a trigger on the remote table that fires on Delete:
Create Trigger Table_Del
On Table
After Delete
AS
Begin
Set NOCOUNT ON;
Insert Into Table (IDColumn, TextColumn, IntColumn)
Select IDColumn, TextColumn, IntColumn from MainServer.LocalDB.dbo.table L
Where not exists (Select * from Table R WHere L.IDColumn = R.IDColumn)
END
Then when I want to do an insert, I do it like this from the local server:
Insert Into LocalDB.dbo.Table (TextColumn, IntColumn) Values ('textvalue', 123);
Delete From RemoteServer.RemoteDB.dbo.Table Where IDColumn = 0;
--And if I want to clean the table out and make sure it has all the most up to date data:
Delete From RemoteServer.RemoteDB.dbo.Table
By triggering the remote server to pull the data from the local server and then do the insert, I was able to turn a job that took 30 minutes to insert 1258 lines into a job that took 8 seconds to do the same insert.
This does require a linked server connection on both sides, but after that's set up it works pretty well.
Update:
So in the last few years I've made some changes, and have moved away from the delete trigger as a way to sync the remote table.
Instead I have a stored procedure on the remote server that has all the steps to pull the data from the local server:
CREATE PROCEDURE [dbo].[UpdateTable]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
--Fill Temp table
Insert Into WebFileNamesTemp Select * From MAINSERVER.LocalDB.dbo.WebFileNames
--Fill normal table from temp table
Delete From WebFileNames
Insert Into WebFileNames Select * From WebFileNamesTemp
--empty temp table
Delete From WebFileNamesTemp
END
And on the local server I have a scheduled job that does some processing on the local tables, and then triggers the update through the stored procedure:
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='true'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='true'
EXEC REMOTESERVER.RemoteDB.dbo.UpdateTable
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc', @optvalue='false'
EXEC sp_serveroption @server='REMOTESERVER', @optname='rpc out', @optvalue='false'
If you must push data from the source to the target (e.g., for firewall or other permissions reasons), you can do the following:
In the source database, convert the recordset to a single XML string (i.e., multiple rows and columns combined into a single XML string).
Then push that XML over as a single row (as a varchar(max), since XML isn't allowed over linked databases in SQL Server).
DECLARE @xml XML
SET @xml = (select * from SourceTable FOR XML path('row'))
Insert into TempTargetTable values (cast(@xml AS VARCHAR(max)))
In the target database, cast the varchar(max) as XML and then use XML parsing to turn that single row and column back into a normal recordset.
DECLARE @X XML = (select '<toplevel>' + ImportString + '</toplevel>' from TempTargetTable)
DECLARE @iX INT
EXEC sp_xml_preparedocument @iX output, @X
insert into TargetTable
SELECT [col1],
[col2]
FROM OPENXML(@iX, '//row', 2)
WITH ([col1] [int],
[col2] [varchar](128)
)
EXEC sp_xml_removedocument @iX
I've found a workaround. Since I'm not a big fan of GUI tools like SSIS, I've reused a bcp script to unload the table into a csv and load it back on the other side. Yes, it's odd that bulk operations are supported for files but not for tables directly. Feel free to edit the following script to fit your needs:
exec xp_cmdshell 'bcp "select * from YourLocalTable" queryout C:\CSVFolder\Load.csv -w -T -S .'
exec xp_cmdshell 'bcp YourAzureDBName.dbo.YourAzureTable in C:\CSVFolder\Load.csv -S yourdb.database.windows.net -U youruser@yourdb.database.windows.net -P yourpass -q -w'
Pros:
No need to define table structures every time.
I've tested it and it worked way faster than inserting directly through the linked server.
It's easier to manage than XML (which is limited to varchar(max) length anyway).
No need for an extra layer of abstraction (tools like SSIS).
Cons:
Using the external tool bcp through the xp_cmdshell interface.
Table properties will be lost after ex/im-porting the csv (i.e. datatype, nulls, length, separator within a value, etc).

How to Exclude Data for Specific Tables

I am using mysqldump to create a canonical installation script for a MySQL database. I would like to dump the data for about half of the tables in the database, but exclude the data from the other tables. I am aware of the following two commands:
--no-data
--ignore-table
But the first applies to all tables, and I believe the second excludes the table entirely from the dump (e.g. create statements) not just the data in the table. Anyone know how to use mysqldump to achieve my goal?
EDIT:
found a near duplicate question: mysqldump entire structure but only data from selected tables in a single command
How about running two separate calls to mysqldump? One to create the database and ignore the tables you don't want data from. The other to just create the remaining tables without data. You could either run the two scripts separately, or concatenate them together to create a final script.
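A sketch of those two calls plus the concatenation (the database name and the data-less tables big_log_table and cache_table are placeholders):
# structure + data for everything except the tables we want empty
mysqldump --ignore-table=mydb.big_log_table --ignore-table=mydb.cache_table mydb > part1.sql
# structure only for the tables whose data we skipped
mysqldump --no-data mydb big_log_table cache_table > part2.sql
# combine into one installation script
cat part1.sql part2.sql > install.sql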
There is one other option to get everything done (in a single call to mysql itself) but it should probably never be attempted.
In tribute to H.P. Lovecraft, (and based upon Anuya's stored procedure to create INSERT statements) here's The Stored Procedure Which Must Not Be Called:
Note: This unholy, arcane stored procedure would only be run by a madman and is presented below purely for educational purposes.
DELIMITER $$
DROP PROCEDURE IF EXISTS `pseudoDump` $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `pseudoDump`(
in_db varchar(20),
in_tables varchar(200),
in_data_tables varchar(200)
)
BEGIN
DECLARE Whrs varchar(500);
DECLARE Sels varchar(500);
DECLARE Inserts varchar(200);
DECLARE tablename varchar(20);
DECLARE ColName varchar(20);
SELECT `information_schema`.`TABLE_NAME` INTO tablename FROM TABLES WHERE TABLE_SCHEMA = in_db AND TABLE_NAME IN ( in_tables );
tabdumploop: LOOP
SHOW CREATE TABLE tablename;
LEAVE tabdumploop;
END LOOP tabdumploop;
SELECT `information_schema`.`TABLE_NAME` INTO tablename FROM TABLES WHERE TABLE_SCHEMA = in_db ;
datdumploop: LOOP
SELECT group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels from `information_schema`.`COLUMNS` where table_schema=in_db and table_name=tablename;
SELECT group_concat('`',column_name,'`') INTO @Whrs from `information_schema`.`COLUMNS` where table_schema=in_db and table_name=tablename;
SET @Inserts=concat("select concat('insert IGNORE into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') as MyColumn from ", in_db,".",tablename, " where 1 group by ",@Whrs, ";");
PREPARE Inserts FROM @Inserts;
EXECUTE Inserts;
LEAVE datdumploop;
END LOOP datdumploop;
END $$
DELIMITER ;
... thankfully, I was saved from witnessing the soul-wrenching horror this procedure must surely wreak by MySQL Bug #44009 ...
mysqldump -u user -h host.example.com -p database table1 table2 table3
You might find what you need here:
http://www.electrictoolbox.com/mysqldump-selectively-dump-data/
Using where statements is probably the easiest way to achieve what you are trying to do.
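For reference, the --where option that page describes looks roughly like this (the table name and condition are placeholders):
mysqldump --where="created_at >= '2020-01-01'" mydb some_table > some_table_partial.sql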
