For monitoring and logging purposes, we have several tasks to which we wish to add a QUERY_TAG.
AFAIK, QUERY_TAG only works at the session level. Is there any way to add a QUERY_TAG to Snowflake tasks?
Session parameters can be set for tasks within the CREATE TASK statement, and QUERY_TAG is no exception.
An example:
CREATE OR REPLACE TASK TASK_TEST_QUERY_TAG
  WAREHOUSE = MY_WH
  SCHEDULE = '1 MINUTE'
  QUERY_TAG = 'My Test Query Tag'
AS
[...]
;
Check the CREATE TASK syntax:
CREATE [ OR REPLACE ] TASK [ IF NOT EXISTS ] <name>
WAREHOUSE = <string>
[ SCHEDULE = '{ <num> MINUTE | USING CRON <expr> <time_zone> }' ]
[ <session_parameter> = <value> [ , <session_parameter> = <value> ... ] ]
[ USER_TASK_TIMEOUT_MS = <num> ]
[ COPY GRANTS ]
[ COMMENT = '<string_literal>' ]
[ AFTER <string> ]
[ WHEN <boolean_expr> ]
AS
<sql>
Reference: https://docs.snowflake.com/en/sql-reference/sql/create-task.html#syntax
Snowflake tasks consist of a single SQL statement, so ALTER SESSION followed by your command would exceed that.
You could create a simple stored procedure that sets the query tag and then runs your command, and have your task call that procedure.
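For illustration, a minimal sketch of that approach, assuming Snowflake Scripting (SP_RUN_TAGGED and MY_LOG_TABLE are hypothetical names; depending on owner's vs. caller's rights, the ALTER SESSION may require the procedure to be created with EXECUTE AS CALLER):
CREATE OR REPLACE PROCEDURE SP_RUN_TAGGED()
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS CALLER
AS
$$
BEGIN
  -- Tag every query this procedure runs in the current session
  ALTER SESSION SET QUERY_TAG = 'My Test Query Tag';
  INSERT INTO MY_LOG_TABLE SELECT CURRENT_TIMESTAMP();  -- your command here
  RETURN 'done';
END;
$$;

CREATE OR REPLACE TASK TASK_TAGGED
  WAREHOUSE = MY_WH
  SCHEDULE = '1 MINUTE'
AS
  CALL SP_RUN_TAGGED();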
I can use SHOW TABLES IN <database name> to show all tables in a database.
The results returned show whether a table has clustering enabled, via the cluster_by column.
Is there a way to get back a list of all tables that have a value in cluster_by?
The documentation for SHOW TABLES shows only:
SHOW [ TERSE ] TABLES [ HISTORY ] [ LIKE '<pattern>' ]
[ IN { ACCOUNT | DATABASE [ <db_name> ] | SCHEMA [ <schema_name> ] } ]
[ STARTS WITH '<name_string>' ]
[ LIMIT <rows> [ FROM '<name_string>' ] ]
You can always ask INFORMATION_SCHEMA:
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, CLUSTERING_KEY
FROM INFORMATION_SCHEMA.TABLES
WHERE CLUSTERING_KEY IS NOT NULL;
or use RESULT_SCAN:
SHOW TABLES IN DATABASE TEST;
SELECT *
FROM TABLE(result_scan(last_query_id()))
WHERE "cluster_by" <> '';
Reference: INFORMATION SCHEMA TABLES VIEW, RESULT_SCAN
I have to convert an Oracle query to Snowflake, which has a WHERE clause LEVEL > 1. Could you please suggest the best option?
Thanks.
I don't think it's an exact match, but the closest thing is the START WITH clause of Snowflake's CONNECT BY:
SELECT <column_list> [ , <level_expression> ]
FROM <data_source>
START WITH <predicate>
CONNECT BY [ PRIOR ] <col1_identifier> = [ PRIOR ] <col2_identifier>
[ , [ PRIOR ] <col3_identifier> = [ PRIOR ] <col4_identifier> ]
...
You can provide a filter in the START WITH predicate, but without the WHERE keyword. You can read more about it here: https://docs.snowflake.com/en/sql-reference/constructs/connect-by.html
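For illustration, a hedged sketch of emulating Oracle's WHERE LEVEL > 1: compute LEVEL in the hierarchical query and filter on it from an outer query (the employees table and its employee_id/manager_id columns are assumptions):
SELECT *
FROM (
    SELECT employee_id, manager_id, LEVEL AS lvl
    FROM employees
    START WITH manager_id IS NULL
    CONNECT BY manager_id = PRIOR employee_id
)
WHERE lvl > 1;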
There is LEVEL in Snowflake. The differences from Oracle are:
In Snowflake it's necessary to use PRIOR with the CONNECT BY expression.
And you can't select LEVEL by itself; there must be at least one existing column in the select list.
Example:
SELECT LEVEL, dummy
FROM (SELECT 'X' dummy) DUAL
CONNECT BY PRIOR LEVEL <= 3;
LEVEL DUMMY
1 X
2 X
3 X
4 X
I want to see the definition of a table in SQL Server.
Running this query from SQLPro for MSSQL is OK
SELECT TOP 100 * FROM dbo.[ATRESMEDIA Resource Time Registr_];
but when I run this one
exec sp_columns dbo.[ATRESMEDIA Resource Time Registr_];
I got this error:
Msg 102, Level 15, State 1.
Incorrect syntax near '.'. (Line 3)
Don't use the schema dbo:
exec sp_columns [ATRESMEDIA Resource Time Registr_];
Why? Because the following are the parameters accepted by the sp_columns stored procedure:
sp_columns [ @table_name = ] object
[ , [ @table_owner = ] owner ]
[ , [ @table_qualifier = ] qualifier ]
[ , [ @column_name = ] column ]
[ , [ @ODBCVer = ] ODBCVer ]
source: msdn
Update:
Martin's explanation, from the comments:
Strings in SQL Server are delimited by single quotes - as a parameter to a stored proc in very limited circumstances it will allow you to skip the quotes but the dot breaks that. exec sp_columns 'dbo.[ATRESMEDIA Resource Time Registr_]'; wouldn't give the syntax error - but that wouldn't be what the proc expects anyway as the schema would need to be the second param
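For illustration, a call in the form Martin describes, with the schema passed as the second parameter:
exec sp_columns @table_name = 'ATRESMEDIA Resource Time Registr_',
                @table_owner = 'dbo';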
Select the table name in the query window and press Alt+F1 (or Alt+Fn+F1 on some keyboards) to bring up the table definition.
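Under the hood, Alt+F1 in SSMS runs sp_help on the selected object, so an equivalent explicit call (quoting the dotted name, per Martin's note above) would be:
exec sp_help 'dbo.[ATRESMEDIA Resource Time Registr_]';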
I have an SSIS project with multiple packages. I would like to create a "Master" package which would run the individual packages in a sequence. The first package contains a Data Flow task which imports data from Excel files, so my Run64BitRuntime setting is set to "false". The following package that needs to be run contains a Fuzzy Lookup, which requires that the Run64BitRuntime setting is set to "true".
Is there a way that I can change this project property setting through a Script Task, so that I can fully automate this process?
Deploy the packages to an SSIS Catalog (it may be in the same instance of SQL Server).
In the SSISDB database there are several SPs. Some of them are listed below:
[SSISDB].[catalog].[set_execution_parameter_value]
[SSISDB].catalog.start_execution
[SSISDB].catalog.create_execution
Every SP has its own purpose; see here:
http://technet.microsoft.com/en-us/library/ff878034.aspx
See the syntax of create_execution:
create_execution [ @folder_name = ] folder_name
, [ @project_name = ] project_name
, [ @package_name = ] package_name
[ , [ @reference_id = ] reference_id ]
[ , [ @use32bitruntime = ] use32bitruntime ]
, [ @execution_id = ] execution_id OUTPUT
The parameter @use32bitruntime lets you choose between the 32-bit and 64-bit runtime for each execution.
With the above set of SPs you can have great control over package execution.
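For illustration, a minimal sketch that requests the 32-bit runtime (the folder, project, and package names are placeholders):
DECLARE @execution_id BIGINT;

EXEC [SSISDB].[catalog].[create_execution]
    @folder_name = N'MyFolder',
    @project_name = N'MyProject',
    @package_name = N'Package1.dtsx',
    @use32bitruntime = 1,  -- run this execution with the 32-bit runtime
    @execution_id = @execution_id OUTPUT;

EXEC [SSISDB].[catalog].[start_execution] @execution_id;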
I have two databases: dbOne (version 10.50.1600, located on the office server) and dbTwo (version 10.0.1600, located on my local server).
I want to copy dbOne's tables with data to dbTwo.
Is there any way or script to do it? I don't want to upgrade my local server version!
"Import and Export Data" tool provided by SQL Server is a good tool to transfer data between two different servers.
How about generating the database scripts, as in the following articles:
http://www.codeproject.com/Articles/598148/Generate-insert-statements-from
and
http://msdn.microsoft.com/en-us/library/ms186472(v=sql.105).aspx
It's possible to transfer data from one server to another using a linked server query, if both are on the same network. Below are the steps.
Copying table structures
Generate a script of all tables from the server1 database, then execute it in the server2 database, using the Generate Script utility.
Copying table data
sp_addlinkedserver [ @server= ] 'server' [ , [ @srvproduct= ] 'product_name' ]
[ , [ @provider= ] 'provider_name' ]
[ , [ @datasrc= ] 'data_source' ]
[ , [ @location= ] 'location' ]
[ , [ @provstr= ] 'provider_string' ]
[ , [ @catalog= ] 'catalog' ]
Insert into databaseserver2.db1.dbo.table1 (columnList)
select columnList
from databaseserver1.db1.dbo.table1
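For illustration, a hedged example of registering the remote server before running the INSERT above (the server and data source names are placeholders):
EXEC sp_addlinkedserver
    @server = N'databaseserver1',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'databaseserver1.mydomain.local';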
Here are the general steps you need to take in order for this to work:
Migrating tables
Create scripts for tables in db1. Just right click the table and go to “Script table as -> Create to”
Re-order the scripts so that tables that don’t depend on any other tables are executed first
Execute scripts on db2
Migrating data
The most convenient way is to use SQL Server Import/Export wizard