SNOWFLAKE SQL: How to run several SELECT statements in one command

I am a newbie in the SQL world and hoping the SMEs in this community can help me.
What I am trying to solve for: run several 'select' statements on a weekly basis with one call, and have the results download to my computer (bonus if they can land in a specific folder).
How I am doing it right now: I use Snowflake (that's the approved software we are using), run each 'select' statement one at a time, and once each result is displayed, I manually download the CSV file to my computer.
I know there's a more efficient way of doing this, so I would appreciate the help from this community. Thank you in advance.

You can run multiple queries at once using the SnowSQL client. Here's how.
Preparation:
Set up the SnowSQL connection configuration:
open the file ~/.snowsql/config
add your connection details:
[connections.my_sample_connection]
accountname = mysnowflakeaccount
username = myusername
password = mypassword
warehousename = mywarehouse
Create your query file.
e.g. my_query.sql
-- Query 1
select 1 col1, 2 col2;
-- Query 2
select 'a' colA, 'b' colB;
Execution:
snowsql -c my_sample_connection -f my_query.sql -o output_file=/path/query_result.csv -o timing=false -o friendly=false -o output_format=csv
Result:
/path/query_result.csv - Containing the result of the queries in my_query.sql
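To cover the weekly-run and specific-folder part of the original question, here is a minimal wrapper sketch around the same snowsql call. The folder layout, file names, and the idea of one .sql file per report are assumptions, not SnowSQL features; scheduling it weekly would be done with cron (or Task Scheduler on Windows):
#!/bin/bash
# run every .sql file in ./weekly_queries and write one CSV per file
# into a dated folder such as ~/reports/2024-05-06 (paths are assumptions)
OUT_DIR="$HOME/reports/$(date +%F)"
mkdir -p "$OUT_DIR"
for f in ./weekly_queries/*.sql; do
  name="$(basename "$f" .sql)"
  snowsql -c my_sample_connection -f "$f" \
    -o output_file="$OUT_DIR/$name.csv" \
    -o timing=false -o friendly=false -o output_format=csv
done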

Related

What does '--data' mean in the dbt test command?

A few of our automated pipelines have been using the below dbt test command for a long time.
dbt test --target target_name --data --m test_file_name --vars "{'branch':'branch_name','execdate':'2020/01/01'}" --no-version-check
It worked without trouble until the dbt version was upgraded from 0.17.0 to 1.0.0.
Now it fails with the error:
dbt: error: unrecognized arguments: --data
I can remove --data from the dbt command now, but I am curious why --data was added to the command during the pipeline development long ago and what it was doing there.
Any help, please?
This is described in the dbt documentation under "Test selection examples":
Through the combination of direct and indirect selection, there are many ways to accomplish the same outcome. Let's say we have a data test named assert_total_payment_amount_is_positive that depends on a model named payments. All of the following would manage to select and execute that test specifically:
$ dbt test --select assert_total_payment_amount_is_positive # directly select the test by name
$ dbt test --select payments,test_type:data # indirect selection, v0.18.0
$ dbt test --select payments --data # indirect selection, earlier versions
The --data syntax was the pre-v0.18.0 way of doing this; it is no longer recognized in v1.0.0, which is why the upgraded pipeline now rejects it.
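For reference, a sketch (not tested) of how the pipeline command could look after the upgrade: the removed --data flag is dropped and the test is selected by name with --select, as in the doc excerpt above; the target, variables, and remaining flags are unchanged from the question.
dbt test --target target_name --select test_file_name --vars "{'branch':'branch_name','execdate':'2020/01/01'}" --no-version-check
If the intent of --data was to limit the run to data tests, the test_type selector shown in the excerpt above is the indirect-selection equivalent introduced in v0.18.0.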

Trying to Export Tables to CSVs from SQL Server

I ran the following script to try to get all the tables in my DB exported (trying to back up the data in CSVs).
SELECT 'sqlcmd -S . -d '+DB_NAME()+' -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM '+table_schema+'.'+TABLE_name+'" > "C:\Temp\'+Table_Name+'.csv"'
FROM [INFORMATION_SCHEMA].[TABLES]
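Each row the query returns is a complete command line along these lines (the database and table names here are just placeholders):
sqlcmd -S . -d MyDatabase -E -s, -W -Q "SET NOCOUNT ON; SELECT * FROM dbo.MyTable" > "C:\Temp\MyTable.csv"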
I saved the results as a batch file and ran the batch file as Administrator.
That runs without an error, but I get no data exported. All it does is create blank CSV files.
I ran this as well: EXEC sp_configure 'remote access', 1; RECONFIGURE;
Still, nothing is exported. CSVs are created, but no data is exported...
Any thoughts?
I ended up using R to do the task...
library("RODBC")
conn <- odbcDriverConnect('driver={SQL Server};server=Server_Name;DB_Name;trusted_connection=true')
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#1")
write.csv(data,file=paste("C:/Users/TBL#1.csv",sep=""),row.names=FALSE)
data <- sqlQuery(conn, "SELECT * FROM DB.dbo.TBL#2")
write.csv(data,file=paste("C:/Users/TBL#2.csv",sep=""),row.names=FALSE)
Gotta love the IT teams in corporate America...especially when they lock down your system so tight, you need to come up with all kinds of weird hacks just so you can do the job that you were hired to do...

Why is the flexviews test_demo changelog table not created?

I am testing a materialized view tool for MariaDB, which is Flexviews. I'm using the latest version of CentOS. I referred to
https://www.percona.com/blog/2011/03/25/using-flexviews-part-two-change-data-capture/
In the reference, at the step below, no table with the name flexviews.test_demo was found to have been created.
mysql> select * from flexviews.test_demo\G
Up to that point (following the above reference), every step completed successfully, except that sometimes the following step returned multiple rows.
$ mysql -e 'select * from flexviews.binlog_consumer_status\G' -uroot -p
What could I have done wrong?
One more thing: instead of a test_demo table, I found a table named mvlog_d04c... created, with the content shown (see the figure below). Is this normal?

Issues using "-f" flag in CQLSH to run a query.cql file

I'm using cqlsh to add data to Cassandra with a BATCH query. I can load the data using the "-e" flag, but not from a file using the "-f" flag. I think that's because the file is local and Cassandra is remote. Details below:
This is a sample of my query (there are more rows to insert, obviously):
BEGIN BATCH;
INSERT INTO keyspace.table (id, field1) VALUES ('1','value1');
INSERT INTO keyspace.table (id, field1) VALUES ('2','value2');
APPLY BATCH;
If I enter the query via the "-e" flag then it works no problem:
>cqlsh -e "BEGIN BATCH; INSERT INTO keyspace.table (id, field1) VALUES ('1','value1'); INSERT INTO keyspace.table (id, field1) VALUES ('2','value2'); APPLY BATCH;" -u username -p password -k keyspace 99.99.99.99
But if I save the query to a text file (query.cql) and call as below, I get the following output:
>cqlsh -f query.cql -u username -p password -k keyspace 99.99.99.99
Using 3 child processes
Starting copy of keyspace.table with columns ['id', 'field1'].
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.076 seconds (0 skipped).
Cassandra obviously accepts the command but doesn't read the file; I'm guessing that's because Cassandra is located on a remote server and the file is located locally. The Cassandra instance I'm using is a managed service shared with other users, so I don't have access to it to copy files into its folders.
How do I run this query on a remote instance of Cassandra where I only have CLI access?
I want to be able to use another tool to build the query.cql file and have a batch job run the command with the "-f" flag but I can't work out how I'm going wrong.
You're executing a local cqlsh client so it should be able to access your local query.cql file.
Try removing the BEGIN BATCH and APPLY BATCH lines, leaving just the two INSERT statements in query.cql, and retry.
One other solution to insert data quickly is to provide a CSV file and use the COPY command inside cqlsh. Read this blog post: http://www.datastax.com/dev/blog/new-features-in-cqlsh-copy
Scripting the inserts by generating one cqlsh -e '...' call per line is feasible, but it will be horribly slow.
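To illustrate the COPY suggestion, here is a minimal sketch reusing the keyspace, table, and column names from the question (the CSV file name is an assumption). Like -f, COPY is executed by the local cqlsh client, so the CSV only needs to exist on your own machine:
cqlsh -e "COPY keyspace.table (id, field1) FROM 'data.csv' WITH HEADER = true;" -u username -p password -k keyspace 99.99.99.99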

mysqldump table per *.sql file batch script

I have done some digging around and I cannot find a way to make mysqldump create a file per table. I have about 100 tables (and growing) that I would like dumped into separate files, without having to write a new mysqldump line for each table I have.
E.g. instead of my_huge_database_file.sql, which contains all the tables in my DB,
I'd like mytable1.sql, mytable2.sql, etc.
Does mysqldump have a parameter for this, or can it be done with a batch file? If so, how?
It is for backup purposes.
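At its core every solution here is the same two-step loop: list the table names, then call mysqldump once per table. A minimal sketch of that idea on a Unix-like shell, reusing the credentials from the PHP snippet below (the backup path is a placeholder); PHP and Windows batch variants of the same loop follow:
# one .sql dump per table; -N suppresses the column-header line of SHOW TABLES
for t in $(mysql -N -uroot -ppw -e 'SHOW TABLES' mydb); do
  mysqldump -uroot -ppw mydb "$t" > "/backup/$t.sql"
done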
I think I may have found a workaround, and that is to make a small PHP script that fetches the names of my tables and runs mysqldump using exec().
$result = $dbh->query("SHOW TABLES FROM mydb");
while ($row = $result->fetch()) {
    // pass the table name to mysqldump so each file contains only that table
    exec('c:\Xit\xampp\mysql\bin\mysqldump.exe -uroot -ppw mydb ' . $row[0] . ' > c:\dump\\' . $row[0] . '.sql');
}
In my batch file I then simply do:
php mybackupscript.php
Instead of the SHOW TABLES command, you could query the INFORMATION_SCHEMA database. This way you can easily dump every table of every database and also know how many tables there are in a given database (e.g. for logging purposes). In my backup I use the following query:
SELECT DISTINCT CONVERT(t.`TABLE_SCHEMA` USING UTF8) AS `dbName`
, CONVERT(t.`TABLE_NAME` USING UTF8) AS `tblName`
, (SELECT COUNT(`TABLE_NAME`)
FROM `INFORMATION_SCHEMA`.`TABLES`
WHERE `TABLE_SCHEMA` = t.`TABLE_SCHEMA`
GROUP BY `TABLE_SCHEMA`) AS `tblCount`
FROM `INFORMATION_SCHEMA`.`TABLES` t
WHERE t.`TABLE_SCHEMA` NOT IN ('INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'mysql')
ORDER BY `dbName` ASC
, `tblName` ASC;
You could also add a condition to the WHERE clause, such as TABLE_TYPE != 'VIEW', to make sure that views do not get dumped.
I can't test this because I don't have a Windows MySQL installation, but this should point you in the right direction:
@echo off
mysql -u user -pyourpassword -N database -e "show tables;" > tables_file
for /f %%T in (tables_file) do (mysqldump -u user -pyourpassword database %%T > %%T.sql)
