Create a table across all schemas at once

Is it possible to create a table across every schema in your database?
Specifically in Oracle.
I do not want to manually run the exact same statement for every existing schema.
The solution I have been using is:
Use the query below to get all schema names:
select distinct owner FROM all_tables;
Take the result and use a regular-expression find-and-replace to prepend and append the pieces of your CREATE TABLE statement:
^ - create table
$ - .tablename \( column1 varchar2(10)\);
Run all the resulting statements in an Oracle SQL worksheet.
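For example, if the owner query returns HR and SCOTT, the find-and-replace produces one statement per schema (tablename and column1 are the placeholder names from the pattern above):
create table HR.tablename ( column1 varchar2(10));
create table SCOTT.tablename ( column1 varchar2(10));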

You can use a bit of PL/SQL and execute immediate to do this.
For example, you can create a table in all schemas if you connect as the SYS user and execute the following script:
begin
  for cUsers in (select * from dba_users where account_status = 'OPEN')
  loop
    execute immediate 'create table '||cUsers.username||'.myTable ( id number )';
  end loop;
end;
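If some accounts cannot receive the table (missing quota or privileges, or Oracle-maintained schemas), the loop above stops at the first failure. A minimal variant, assuming you would rather log and skip failing schemas (the exception handler is an addition, not part of the original answer):
begin
  for cUsers in (select username from dba_users where account_status = 'OPEN')
  loop
    begin
      execute immediate 'create table '||cUsers.username||'.myTable ( id number )';
    exception
      when others then
        -- report the failing schema and continue with the next one
        dbms_output.put_line('skipped '||cUsers.username||': '||sqlerrm);
    end;
  end loop;
end;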

Related

How to drop temp tables created in Snowflake

I am loading data through ODI into Snowflake. The temp tables created with the c$ prefix need to be dropped after a successful load. How do I drop those temp tables? I appreciate your suggestions.
If you still need this, I wrote a stored procedure that will take a list of SQL generated dynamically and execute the lines one at a time. You can use it to run any list of generated SQL statements resulting from a select query, including dropping all tables matching a pattern such as c$%. First, here's the stored procedure:
create or replace procedure RunBatchSQL(sqlCommand String)
returns string
language JavaScript
as
$$
/**
 * Stored procedure to execute multiple SQL statements generated from a SQL query.
 * Note that this procedure will always use the column named "SQL_COMMAND".
 *
 * @param {String} sqlCommand: The SQL query to run to generate one or more SQL commands
 * @return {String}: A string containing all the SQL commands executed, each separated by a newline
 */
var cmd1_dict = {sqlText: SQLCOMMAND};
var stmt = snowflake.createStatement(cmd1_dict);
var rs = stmt.execute();
var s = '';
while (rs.next()) {
    var cmd2_dict = {sqlText: rs.getColumnValue("SQL_COMMAND")};
    var stmtEx = snowflake.createStatement(cmd2_dict);
    stmtEx.execute();
    s += rs.getColumnValue(1) + "\n";
}
return s;
$$;
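For a quick one-off test, you can also pass the generating query directly as a string literal, doubling the single quotes inside it (a sketch, assuming the c$ tables are in the current database):
call RunBatchSQL('select ''drop table '' || table_catalog || ''.'' || table_schema || ''.'' || table_name as SQL_COMMAND from information_schema.tables where table_name ilike ''c$%''');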
You can use this stored procedure to run any dynamically generated SQL statements in batch using the following script. Run the topmost query first; it will be obvious what running the stored procedure with that query text as the parameter will do:
-- This is a select query that will generate a list of SQL commands to execute in batch.
-- This SQL will generate rows to drop all tables starting with c$. With minor edits
-- you could limit it to a specific database or schema.
select 'drop table ' || TABLE_CATALOG || '.' || TABLE_SCHEMA || '.' || "TABLE_NAME" as SQL_COMMAND
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME ilike 'c$%'; -- ilike: unquoted identifiers are stored uppercase, so a case-sensitive like may match nothing
-- As a convenience, this grabs the last SQL run so that it's easier to insert into
-- the parameter used to call the stored procedure.
set query_text = ( select QUERY_TEXT
from table(information_schema.query_history(result_limit => 2))
where SESSION_ID = Current_Session() and QUERY_TYPE = 'SELECT' order by START_TIME desc);
-- Confirm that the query_text variable has the correct SQL query to generate our SQL commands (drops in this case) to run.
select $query_text;
-- Run the stored procedure. Note that to view its output better, double-click on the output to see it in multi-line format.
Call RunBatchSQL($query_text);
-- Check the last several queries run to make sure it worked.
select QUERY_TEXT
from table(information_schema.query_history(result_limit => 100))
where SESSION_ID = Current_Session() order by START_TIME desc;
The C$ prefixed work tables are a product of ODI use, but they are not created as actual Snowflake temporary tables, so they do not benefit from automatic deletion when the JDBC session ends.
The ODI publishers note this about their C$ and I$ work tables:
When a scenario successfully completes, it will automatically delete these tables, as they're transitory and are no longer required. However, where a scenario does not complete successfully, it is possible these tables get left behind and from time to time it may be desirable to clean up these tables to reclaim space.
Unsuccessful scenarios in your use of ODI are likely what is leaving these tables behind on Snowflake. Following the link above should help you run a procedure that deletes the leftover work tables (manually or on a schedule). The relevant steps are copied here for convenience:
To run the procedure:
Open ODI Studio and connect to the BI Apps ODI Repository.
Navigate to the Designer tab and use the navigator to navigate to: BI Apps Project -> Components -> DW -> Oracle -> Clean Work and Flow Tables folder
In the folder find the Clean Work and Flow Tables package and in this package is the UTILITIES_CLEAN_WORK_AND_FLOW_TABLES scenario.
Right click the scenario and select the 'Execute' option. At the prompt, provide the desired number of days to go back before deleting tables.

SQL Server stored procedure returns different result from plain query

I am new to SQL Server. I wrote a simple stored procedure that returns rows based on a date condition.
Here is my code:
CREATE PROCEDURE test.newArtists
    @LastUpdated smalldatetime
AS
BEGIN
    SELECT *
    FROM ARTIST
    WHERE GENERATED > @LastUpdated OR MODIFIED > @LastUpdated;
END
When I execute this stored procedure, it returns 0 rows, like this:
DECLARE @temp DATETIME;
SET @temp = CONVERT(DATETIME, '2016/12/05');
EXEC test.newArtists @LastUpdated = @temp;
However, when I execute the query without using procedure, it returns about 5,000 rows.
DECLARE @temp DATETIME;
SET @temp = CONVERT(DATETIME, '2016/12/05');
SELECT *
FROM ARTIST
WHERE GENERATED > @temp OR MODIFIED > @temp;
I just do not understand why these two return different results.
Thanks for any explanations!
===================================================
I found the problem. Thanks.
I am using the test schema, so I connected to test. However, when I use SELECT * FROM ARTIST it does not search test.ARTIST, but the ARTIST table that belongs to dbo.
Summary:
The basic problem here is that I have two tables named ARTIST.
However, I still do not understand why it automatically looks in the dbo schema.
I have a little experience with MySQL, and when I connect to a certain schema there, it only finds objects inside that schema. Is this normal, or do I need to do some work to set a prefix?
Thanks for the answers.
It is always good practice to refer to database objects by a schema
name and the object name, separated by a period (.).
object referred to without an explicit schema name ... will be located by searching the default schema first, followed by the dbo schema
Source: SQL Server Best Practices – Implementation of Database Object Schemas
dbo is the default schema; the engine will always go to that one first if the schema is not specified. It is good practice to always specify the schema in the query, especially if you are using more than one. That way it won't waste time looking in the wrong schema first.
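Applied to the question above, the fix is to schema-qualify the table inside the procedure. A sketch of the corrected procedure, assuming the intended table is test.ARTIST:
ALTER PROCEDURE test.newArtists
    @LastUpdated smalldatetime
AS
BEGIN
    SELECT *
    FROM test.ARTIST -- schema-qualified, so dbo.ARTIST can no longer shadow it
    WHERE GENERATED > @LastUpdated OR MODIFIED > @LastUpdated;
END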

How to search a table in all databases where column value is the given text

I'm able to query all the databases via How to run the same query on all the databases on an instance?, but I am not sure how to run the query while matching a specific column and value.
For example, assume the column name is code and the query should filter on code='THBN'.
Thanks.
EXECUTE sp_MSForEachDB
'USE ?;
SELECT * from table where code=''THBN''';
You could also skip the system databases, like below:
EXECUTE sp_MSForEachDB
'if db_id() > 4 -- database ids 1-4 are master, tempdb, model, msdb
begin
SELECT * from table where code=''THBN''
end';
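Both snippets assume every database contains the table; databases without it will raise an error. A variant that checks for the table first (dbo.MyTable is a placeholder name):
EXECUTE sp_MSForEachDB
'USE ?;
IF OBJECT_ID(''dbo.MyTable'') IS NOT NULL
    SELECT DB_NAME() AS database_name, * FROM dbo.MyTable WHERE code = ''THBN'';';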

How to Exclude Data for Specific Tables

I am using mysqldump to create a canonical installation script for a MySQL database. I would like to dump the data for about half of the tables in the database, but exclude the data from the other tables. I am aware of the following two options:
--no-data
--ignore-table
But the first applies to all tables, and I believe the second excludes the table entirely from the dump (including its CREATE statement), not just the data in the table. Does anyone know how to use mysqldump to achieve my goal?
EDIT:
Found a near-duplicate question: mysqldump entire structure but only data from selected tables in a single command
How about running two separate calls to mysqldump? One to create the database and ignore the tables you don't want data from. The other to just create the remaining tables without data. You could either run the two scripts separately, or concatenate them together to create a final script.
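As a sketch, assuming a database named mydb in which big1 and big2 should be dumped without data (all names are placeholders):
mysqldump -u user -p mydb --ignore-table=mydb.big1 --ignore-table=mydb.big2 > with_data.sql
mysqldump -u user -p --no-data mydb big1 big2 > structure_only.sql
cat with_data.sql structure_only.sql > install.sql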
There is one other option to get everything done (in a single call to mysql itself) but it should probably never be attempted.
In tribute to H.P. Lovecraft (and based upon Anuya's stored procedure to create INSERT statements), here's The Stored Procedure Which Must Not Be Called:
Note: This unholy, arcane stored procedure would only be run by a madman and is presented below purely for educational purposes.
DELIMITER $$
DROP PROCEDURE IF EXISTS `pseudoDump` $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `pseudoDump`(
    in_db varchar(20),
    in_tables varchar(200),
    in_data_tables varchar(200)
)
BEGIN
    DECLARE Whrs varchar(500);
    DECLARE Sels varchar(500);
    DECLARE Inserts varchar(200);
    DECLARE tablename varchar(20);
    DECLARE ColName varchar(20);
    SELECT `TABLE_NAME` INTO tablename FROM `information_schema`.`TABLES` WHERE TABLE_SCHEMA = in_db AND TABLE_NAME IN ( in_tables );
    tabdumploop: LOOP
        SHOW CREATE TABLE tablename;
        LEAVE tabdumploop;
    END LOOP tabdumploop;
    SELECT `TABLE_NAME` INTO tablename FROM `information_schema`.`TABLES` WHERE TABLE_SCHEMA = in_db;
    datdumploop: LOOP
        SELECT group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels FROM `information_schema`.`COLUMNS` WHERE table_schema = in_db AND table_name = tablename;
        SELECT group_concat('`',column_name,'`') INTO @Whrs FROM `information_schema`.`COLUMNS` WHERE table_schema = in_db AND table_name = tablename;
        SET @Inserts = concat("select concat('insert IGNORE into ", in_db,".",tablename," values(',concat_ws(',',",@Sels,"),');') as MyColumn from ", in_db,".",tablename, " where 1 group by ",@Whrs, ";");
        PREPARE Inserts FROM @Inserts;
        EXECUTE Inserts;
        LEAVE datdumploop;
    END LOOP datdumploop;
END $$
DELIMITER ;
... thankfully, I was saved from witnessing the soul-wrenching horror this procedure must surely wreak by MySQL Bug #44009 ...
mysqldump -u user -h host.example.com -p database table1 table2 table3
You might find what you need here:
http://www.electrictoolbox.com/mysqldump-selectively-dump-data/
Using WHERE clauses is probably the easiest way to achieve what you are trying to do.
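For example, --where applies to every table named in the invocation, so you can limit the rows dumped (big_table and created_at are placeholder names):
mysqldump -u user -p mydb big_table --where="created_at > '2020-01-01'" > partial_dump.sql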

SQL Server 2008: Creating dynamic Synonyms?

In my SQL Server 2008 database I have a number of different tables with the same structure. I query them in different stored procedures. My first try was to pass the table name to the stored procedure, like this:
CREATE PROCEDURE MyTest
    @tableName nvarchar(255)
AS
BEGIN
    SELECT * FROM @tableName
END
But we can't use a parameter as a table name in SQL. So, following an earlier suggestion, I tried the solution of using synonyms instead of a parameter for the table name:
CREATE PROCEDURE MyTest
    @tableName nvarchar(255)
AS
BEGIN
    EXEC SetSimilarityTableNameSynonym @tbl = @tableName;
    SELECT * FROM dbo.CurrentSimilarityTable
END
SetSimilarityTableNameSynonym is a stored procedure that points the synonym dbo.CurrentSimilarityTable at the passed value (the specific table name). It looks like this:
CREATE PROCEDURE [dbo].[SetSimilarityTableNameSynonym]
    @tbl nvarchar(255)
AS
BEGIN
    IF object_id('dbo.CurrentSimilarityTable', 'SN') IS NOT NULL
        DROP SYNONYM CurrentSimilarityTable;
    -- Set the synonym for each existing table
    IF @tbl = 'byArticle'
        CREATE SYNONYM dbo.CurrentSimilarityTable FOR dbo.similarity_byArticle;
    ...
END
Now, as you probably see, the problem is concurrent access to the SPs, where callers will "destroy" each other's assigned synonym. So I tried to create a dynamic synonym for each single SP call with a GUID via NEWID():
DECLARE @theGUID uniqueidentifier;
DECLARE @theSynonym nvarchar(255);
SET @theGUID = NEWID();
SET @theSynonym = 'dbo.SimTabSyn_' + CONVERT(nvarchar(255), @theGUID);
BUT ... I can't use the dynamically created name to create a synonym:
CREATE SYNONYM @theSynonym FOR dbo.similarity_byArticle;
doesn't work.
Does anybody have an idea how to get dynamic synonyms running? Is this even possible?
Thanks in advance,
Frank
All I can suggest is to run the CREATE SYNONYM in dynamic SQL. This also means your code is running with quite high privileges (db_owner or ddl_admin), so you may need EXECUTE AS OWNER to allow it when you secure the code.
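A minimal sketch of that dynamic-SQL approach (the naming follows the question; QUOTENAME guards the generated name):
DECLARE @theSynonym sysname = 'SimTabSyn_' + CONVERT(nvarchar(255), NEWID());
DECLARE @sql nvarchar(max) =
    N'CREATE SYNONYM dbo.' + QUOTENAME(@theSynonym) + N' FOR dbo.similarity_byArticle;';
EXEC sp_executesql @sql;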
And how many synonyms will you end up with for the same table? If you have to do it this way, I'd use OBJECT_ID not NEWID and test first so you have one synonym per table.
But if you have one synonym per table then why not use the table name...?
What is the point in creating one or more synonyms for the same table, given the table names are already unique?
I'd fix the database design.
Why would you want multiple concurrent users to overwrite the single resource (synonym)?
If your MyTest procedure is taking the table name as a parameter, why not simply use dynamic SQL? You can validate the @tableName against a hardcoded list of tables that this procedure is allowed to select from, or against sys.tables.
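For example, a sketch of MyTest with validation against sys.tables (assuming the tables live in the dbo schema):
CREATE PROCEDURE MyTest
    @tableName sysname
AS
BEGIN
    -- allow only names that exist as user tables in dbo
    IF NOT EXISTS (SELECT 1 FROM sys.tables
                   WHERE name = @tableName AND schema_id = SCHEMA_ID('dbo'))
    BEGIN
        RAISERROR('Unknown table: %s', 16, 1, @tableName);
        RETURN;
    END
    DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.' + QUOTENAME(@tableName) + N';';
    EXEC sp_executesql @sql;
END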
