We have too many warehouses to drop manually; it would be nice to do something like:
drop warehouse like 'TEST_DW%'
You can generate the statements with dynamic SQL:
show warehouses like 'TEST_DW%';
select listagg('drop warehouse ' || "name" || ';', '\n')
from table(result_scan(last_query_id()));
Then cut, paste, and run the generated statements.
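For example, with two hypothetical warehouses TEST_DW_1 and TEST_DW_2, the query would return:
drop warehouse TEST_DW_1;
drop warehouse TEST_DW_2;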
Does Snowflake provide any utility to unload data from multiple tables at the same time, somewhat similar to expdp (or export) in Oracle? I can write a procedure to fetch table names and copy/unload to stage files, but I was wondering if there is any other utility available out of the box in Snowflake for this.
It would also be helpful if someone could point out an approach or best practices for schema refreshes.
You can do this with SQL like the following. First set up a control table listing the tables to export:
create table test.test.table_to_export(full_table_name text, write_path text);
create table test.test.table_a(id int);
insert into test.test.table_to_export values ('test.test.table_a', 'table_a/a_files_');
Then running the block below will use COPY INTO to write each listed table to my_stage (at the write_path given in its row), all as of the same point in time, so the data is coherent across tables (via Time Travel). This also means the method only works on permanent tables.
declare
    c1 cursor for select full_table_name, write_path from test.test.table_to_export;
    sql text;
    sql_template text;
    id text;
begin
    -- Run a trivial query and capture its statement id; every COPY below reads
    -- AT this statement, so all tables are exported as of the same moment.
    select 1;
    select last_query_id(-1) into id;
    sql_template := $$copy into @my_stage/result/WRITE_PATH FROM (select * FROM TABLE_NAME AT(STATEMENT => '$$ || id || $$'))
        file_format=(format_name='myformat' compression='gzip');$$;
    -- Substitute each table's name and path into the template, then run it.
    for record in c1 do
        sql := replace(replace(sql_template, 'TABLE_NAME', record.full_table_name), 'WRITE_PATH', record.write_path);
        execute immediate sql;
    end for;
end;
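Afterwards you can check what was written by listing the stage (assuming the stage name my_stage from the block above):
list @my_stage/result/;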
What is the simplest way to drop all tasks in Snowflake under a specific schema? I'm hoping this won't require some sort of Javascript loop.
I'm hoping for something like ... :)
DROP ALL TASKS IN <SCHEMA>
Going through it one by one can be cumbersome.
Did not see anything in the documentation that alluded to this: https://docs.snowflake.com/en/sql-reference/sql/drop-task.html
You can use SHOW TASKS, followed by a SELECT statement to generate the DDL, which you can then run as a SQL script.
use database "mydatabase";
use schema "myschema";
show tasks;
SELECT 'DROP TASK ' || "name" || ';' FROM table(result_scan(last_query_id()));
I had exactly the same requirement once. I ended up with:
-- SHOW TASKS IN SCHEMA ...;
SHOW TASKS IN DATABASE;
SELECT LISTAGG(
         REPLACE('DROP TASK IF EXISTS <TASK_NAME>;' || CHAR(13),
                 '<TASK_NAME>',
                 CONCAT_WS('.', "database_name", "schema_name", "name")),
         '') AS drop_all_task_script
FROM TABLE(result_scan(last_query_id()));
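With hypothetical tasks T1 and T2 in MYDB.MYSCHEMA, drop_all_task_script would come back as:
DROP TASK IF EXISTS MYDB.MYSCHEMA.T1;
DROP TASK IF EXISTS MYDB.MYSCHEMA.T2;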
I am working on a MySQL to PostgreSQL database migration using pgloader. One of the issues I am facing is that my application expects any tables beginning with "ao_" to be named "AO_", which I was able to solve by making them all uppercase; however, the corresponding columns also need to be uppercase.
Is there a good way to make JUST the "AO_" tables' columns all uppercase? It does not seem very efficient to do this by hand for 400 tables with approximately 10 columns per table:
ALTER TABLE "AO_54307E_QUEUE" RENAME project_id TO "PROJECT_ID";
Is there maybe some kind of wildcard we could use to just grab the "AO_" tables and then have all the columns be uppercase?
I would recommend against doing this, and I am quoting from the documentation:
Quoting an identifier also makes it case-sensitive, whereas unquoted
names are always folded to lower case.
If you want to write portable applications you are
advised to always quote a particular name or never quote it.
So quoting just the "AO_" table columns to make them uppercase seems like a bad idea.
If you still wish to proceed, you may loop through information_schema.columns and run dynamic ALTER statements.
DO $$
DECLARE
    rec RECORD;
BEGIN
    -- Find every column that is not already upper case in the "AO_" tables
    -- (the backslash escapes the underscore so LIKE treats it literally).
    FOR rec IN ( SELECT column_name, table_name, table_schema
                 FROM information_schema.columns
                 WHERE table_name LIKE 'AO\_%'
                   AND column_name <> upper(column_name) )
    LOOP
        -- %I quotes each identifier, which is what preserves the upper case.
        EXECUTE format('ALTER TABLE %I.%I RENAME %I TO %I',
                       rec.table_schema, rec.table_name, rec.column_name,
                       upper(rec.column_name));
        RAISE NOTICE 'COLUMN % in Table %.% RENAMED',
                     rec.column_name, rec.table_schema, rec.table_name;
    END LOOP;
END$$;
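To verify the result afterwards, using the table name from the question:
SELECT column_name FROM information_schema.columns WHERE table_name = 'AO_54307E_QUEUE';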
This is my first time with SQL Server 2016. I would like to know how to create a DB table from JSON like:
{ "column1":"int","column2":"varchar(255)","column3":"column3Type..." }
It is important to me to use JSON, because I would like to create generic tables which have many different columns.
I don't want to use Mongo or another document-oriented database.
What do you advise in this situation?
You just need to build the CREATE TABLE statement. Simply:
replace '":"' with a space
replace '","' with ', '
replace '{' with 'CREATE TABLE tbl_name ('
replace '}' with ')'
and finally strip the remaining double quotes.
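A minimal T-SQL sketch of the idea, assuming the JSON literal is well formed (the names @json, @ddl, and tbl_name are illustrative):
DECLARE @json nvarchar(max) = N'{"column1":"int","column2":"varchar(255)"}';
DECLARE @ddl  nvarchar(max);

-- Apply the replacements described above.
SET @ddl = REPLACE(@json, '":"', ' ');
SET @ddl = REPLACE(@ddl, '","', ', ');
SET @ddl = REPLACE(@ddl, '{', 'CREATE TABLE tbl_name (');
SET @ddl = REPLACE(@ddl, '}', ')');
SET @ddl = REPLACE(@ddl, '"', '');   -- strip the remaining quotes

-- @ddl is now: CREATE TABLE tbl_name (column1 int, column2 varchar(255))
EXEC sp_executesql @ddl;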
I'd like to get data from one database..table into an UPDATE script that can be used with another database..table. Rather than doing an export from DB1 into DB2, I need to run an UPDATE script against DB2.
If the above is not possible, is there a way to generate the UPDATE syntax from DB1..table1:
col1 = value1,
col2 = value2,
col3 = value3,
...
--- EDIT ---
Looking through the answers, there's an assumption that DB1 is available at the same time that DB2 is available. This is not the case. Each database will know nothing of the other. The two servers/databases will not be available/accessible at the same time.
Is it possible to script the table data into a flat file? Not sure how easy that will be to then get into an UPDATE statement.
Using a linked server and an update statement will really be your easiest solution as stated above, but I do understand that sometimes that isn't possible. The following is an example of dynamically building update statements. I am assuming there is no chance of SQL Injection from the "SourceData" table. If there is that possibility then you will need to use the same technique to build statements that use sp_executesql and parameters.
SELECT 'UPDATE UpdateTable ' +
' SET FieldToUpdate1 = ''' + SourceData.DataToUpdate1 + '''' +
' , FieldToUpdate2 = ' + CAST(SourceData.DataToUpdate2 AS varchar) +
' WHERE UpdateTable.PrimaryKeyField1 = ' + CAST(SourceData.PrimaryKey1 AS varchar) +
' AND UpdateTable.PrimaryKeyField2 = ''' + SourceData.PrimaryKey2 + ''''
FROM SourceData
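For a hypothetical SourceData row ('abc', 42, 7, 'xy'), each generated row would look like (whitespace tidied):
UPDATE UpdateTable SET FieldToUpdate1 = 'abc' , FieldToUpdate2 = 42 WHERE UpdateTable.PrimaryKeyField1 = 7 AND UpdateTable.PrimaryKeyField2 = 'xy'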
Also, here is a link to a blog I wrote on Generating multiple SQL statements from a query. It's a bit more simplistic than the type of statement you are trying to create, but it should give you an idea. Here is also an article I wrote on using Single Quotation Marks in SQL. Other than that, you can go to Google and search for "SQL Server Dynamic SQL" and you will get hundreds of blogs, articles, forum entries, etc. on the subject.
Your question needs a little more clarification to completely understand what you are trying to accomplish, but assuming the databases are on the same server, you should be able to do something like this using UPDATE and JOIN:
UPDATE a
SET col1 = b.col1, col2 = b.col2
FROM database1.schema.table a
JOIN database2.schema.table b
ON a.primaryKey = b.primaryKey
Alternatively, if they are on different servers, you could set up a linked server and it should work similarly.
I think you still want to INSERT from one table into another table in another database. You can use INSERT INTO..SELECT:
INSERT INTO DB2.dbo.TableName(Col1, Col2, Col3) -- specify columns
SELECT Col1, Col2, Col3
FROM DB1.dbo.TableName
Assuming dbo is the schema used.
Are both databases on the same SQL Server? In that case, use fully-qualified table names. I.e.:
Update Database1.Schema.Table
SET ...
FROM
Database2.Schema.Table
If they're not on the same server, then you can use linked servers.
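A minimal sketch of that route (the server name OTHERSERVER is illustrative):
-- Register the remote server once (requires appropriate permissions).
EXEC sp_addlinkedserver @server = N'OTHERSERVER', @srvproduct = N'SQL Server';

-- Then reference its tables with four-part names.
UPDATE a
SET a.col1 = b.col1
FROM Database1.Schema.Table a
JOIN OTHERSERVER.Database2.Schema.Table b
ON a.primaryKey = b.primaryKey;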
I'm not sure of the SQL Server syntax, but you can do something like this to generate the update statements:
SELECT 'UPDATE mytable SET col1=' || col1 || ' WHERE pk=' || primaryKey || ';' FROM mytable;
Obviously you'll need to escape quotes, etc. depending on the value types.
I assume this is because you can't do a normal UPDATE from a SELECT?