I have a requirement where I would like to check whether the procedure I am about to run is already in RUNNING status.
I am planning to check using the following query:
"SELECT EXECUTION_STATUS FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY WHERE EXECUTION_STATUS = 'RUNNING'";
Or should I use:
"SELECT EXECUTION_STATUS FROM DATABASE.INFORMATION_SCHEMA.QUERY_HISTORY WHERE EXECUTION_STATUS = 'RUNNING'"
If the column value is RUNNING, then an instance of the procedure is already running.
Let me know if this approach is correct.
Yes, you can use the EXECUTION_STATUS column of QUERY_HISTORY to determine whether a stored procedure call is still running.
However, the ACCOUNT_USAGE.QUERY_HISTORY view cannot be used in this case because it has latency (up to 45 minutes), so it cannot reflect the real-time execution status. The INFORMATION_SCHEMA.QUERY_HISTORY table function is preferred instead.
Below is an example of how to use the INFORMATION_SCHEMA.QUERY_HISTORY table function for this use case. Please note that INFORMATION_SCHEMA.QUERY_HISTORY is a table function, not a view, so you have to wrap it in TABLE(), as below:
Worksheet 1:
-- Sample stored procedure that just waits for 60 seconds
create or replace procedure sp1()
returns varchar
language javascript
as
$$
  snowflake.createStatement({sqlText: "select system$wait(60);"}).execute();
$$;
call sp1();
Worksheet 2:
select start_time, query_id, query_text, execution_status
from table(information_schema.query_history())
where execution_status = 'RUNNING'
;
/*
START_TIME QUERY_ID QUERY_TEXT EXECUTION_STATUS
2021-10-26 08:25:56.969 +0200 019fdc41-0000-2c1d-0000-3f8100091e56 select start_time, query_id, query_text, execution_status from table(information_schema.query_history()) where execution_status = 'RUNNING' ; RUNNING
2021-10-26 08:25:53.869 +0200 019fdc41-0000-2c5c-0000-3f81000935aa select system$wait(60); RUNNING
2021-10-26 08:25:53.515 +0200 019fdc41-0000-2c5c-0000-3f81000935a6 call sp1(); RUNNING
*/
Please note that the result above from the QUERY_HISTORY table function with EXECUTION_STATUS = 'RUNNING' includes the status-check query itself. So if you only select the EXECUTION_STATUS column, as in your example, it's difficult to tell whether the running query is the stored procedure call or something else.
Therefore, if a human runs the status check and inspects the result visually, the query should include other columns such as QUERY_ID, START_TIME and QUERY_TEXT to identify the stored procedure call.
If an automated process runs the status check instead, the query should have another filter in the WHERE clause to match the stored procedure call, as below:
select query_id
from table(information_schema.query_history())
where execution_status = 'RUNNING'
and query_text ilike 'call%'
;
/*
QUERY_ID
019fdc48-0000-2c1d-0000-3f8100091e6e
*/
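If an automation layer fetches the query-history rows through a client driver instead of filtering purely in SQL, the same check can be applied in application code. Below is a minimal sketch of that logic in Python, assuming `rows` is a list of (query_text, execution_status) tuples already fetched from the table function; the data here is stubbed, not a live Snowflake result:

```python
def proc_call_running(rows, proc_name="sp1"):
    """Return True if any RUNNING query is a CALL of the given procedure.

    rows: iterable of (query_text, execution_status) tuples, as they would
    come back from table(information_schema.query_history()).
    """
    target = "call " + proc_name.lower()
    return any(
        status == "RUNNING" and text.strip().lower().startswith(target)
        for text, status in rows
    )

# Stubbed rows standing in for a real query-history result set
rows = [
    ("select 1", "RUNNING"),      # the status-check session's own query
    ("call sp1();", "RUNNING"),   # the call we care about
    ("call sp1();", "SUCCESS"),   # an earlier, finished call
]
```

This mirrors the query_text ILIKE 'call%' filter, just evaluated client-side.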
You can change the pattern in the filter to match different stored procedure calls, and you can also use a query tag to tell concurrent calls apart:
Worksheet 1:
alter session set query_tag = 'ws1';
call sp1();
Worksheet 2:
alter session set query_tag = 'ws2';
call sp1();
Worksheet 3:
select query_tag, query_id
from table(information_schema.query_history())
where execution_status = 'RUNNING'
and query_text ilike 'call%'
;
/*
QUERY_TAG QUERY_ID
ws2 019fdc4b-0000-2c5c-0000-3f8100093642
ws1 019fdc4b-0000-2c1d-0000-3f8100091ec6
*/
select query_tag, query_id
from table(information_schema.query_history())
where execution_status = 'RUNNING'
and query_text ilike 'call%'
and query_tag = 'ws2'
;
/*
QUERY_TAG QUERY_ID
ws2 019fdc4b-0000-2c5c-0000-3f8100093642
*/
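In a scheduled automation, the query-tag check is usually wrapped in a polling loop that waits until the tagged CALL is no longer RUNNING. Below is a minimal sketch of such a loop, with the Snowflake round trip stubbed out as a `fetch_running_tags` callable (a hypothetical helper that would run the tag-filtered SELECT through your driver and return the tags it finds):

```python
import time

def wait_until_done(fetch_running_tags, tag, interval=1.0, max_polls=60):
    """Poll until no RUNNING 'call ...' query carries the given query tag.

    fetch_running_tags: callable returning the set of query tags currently
    attached to RUNNING CALL statements.
    Returns True if the call finished within max_polls polls, else False.
    """
    for _ in range(max_polls):
        if tag not in fetch_running_tags():
            return True
        time.sleep(interval)
    return False

# Simulated history: the tagged call shows as RUNNING for two polls, then ends.
snapshots = iter([{"ws1", "ws2"}, {"ws2"}, set()])
```

In real use, `fetch_running_tags` would execute the SELECT with the query_tag filter and return the QUERY_TAG values of any rows it gets back.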
Related
I read in the documentation that SELECT COUNT(*) is a metadata operation in Snowflake, so no warehouse compute is used. But without any warehouse assigned, the query SELECT COUNT(*) cannot be run, and once a warehouse starts it will consume credits. Can anyone please explain this?
The count(*) does not require a running warehouse. You can see this behavior using this script:
-- Shut down a warehouse and do not allow auto-resume to test this
alter warehouse test suspend;
alter warehouse test set auto_resume = false;
use warehouse test;
-- This fails because it needs a running warehouse
select * from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF1"."ORDERS" limit 1;
-- This works because it's a metadata query
select count(*) from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF1"."ORDERS";
-- Simple arithmetic on metadata query results is okay
select count(*) + 1 from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF1"."ORDERS";
-- Running functions requires a warehouse
select sqrt(count(*)) from "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF1"."ORDERS";
--Remember to alter your warehouse back to auto-resume:
alter warehouse test set auto_resume = true;
The query can be run without any warehouse assigned; this is easy enough to test using the Web UI.
(How) can I create a parameter in SQL Server that is automatically populated from data - in other words, the equivalent of SAS's PROC SQL "INTO :" functionality?
In SAS I can store the number of rows in a table, mytable, in a macro variable, n_rows. I am looking for something like this in SQL Server.
proc sql noprint;
select count(*) into :n_rows
from mytable;
quit;
%put &n_rows.;
Maybe I am missing something, but is this what you're looking for?
declare @numberOfRows int = (select count(*) from myTable)
select @numberOfRows
I am using SQL Server 2012 and I have the following procedure
CREATE PROCEDURE A
AS
BEGIN
SELECT * FROM table1
DECLARE @v1 INT = 1
SELECT @v1
END
If the select * from table1 command takes more than 3 seconds, I want to stop it and move on to execute the next command in the procedure.
How can I handle that? In other words, how do I set a time limit for command execution?
Have a look at LOCK_TIMEOUT - see: @@LOCK_TIMEOUT (Transact-SQL)
So I have some commands which I want to put in a stored procedure (then executed by a job) to automate things. What I need is some sort of log (a file or a table?) that records how many rows each SELECT or INSERT affected, and also how long it took to execute. Can you help me with some ideas? Thanks.
Examples below:
truncate table table_xyz
insert into table_aaa
select * from (select * from table_dsd union all select * from table_dsdf)
ex: "40234 rows affected" ; 00:00:35
some selects
One simple way would be to do it like this:
declare @startdate datetime = getdate()
select * from sometbl
--log data
select @@rowcount, datediff(second, @startdate, getdate())
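The same pattern - capture a start time, run the statement, then record the row count and elapsed time - carries over to any client language. Here is an illustrative sketch in Python, with the actual database call stubbed out as a `run_statement` callable (in a real DB-API driver the row count would come from `cursor.rowcount`):

```python
import time

def run_logged(run_statement, log):
    """Run a statement, then append (rowcount, elapsed_seconds) to log.

    run_statement: callable that executes the SQL and returns the number
    of affected/returned rows.
    """
    start = time.monotonic()
    rowcount = run_statement()
    log.append((rowcount, time.monotonic() - start))
    return rowcount

log = []
run_logged(lambda: 40234, log)  # stub standing in for the real INSERT...SELECT
```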
When I execute a SQL statement like "SELECT ...", I can only see "...100% completed".
I want to log the number of rows affected.
How can we do that?
Run your SELECT from within a stored procedure, where you can log the row count into a table, or do anything else to record it:
CREATE PROCEDURE SSIS_TaskA
AS
DECLARE @Rows int
SELECT ... --your select goes here
SELECT @Rows = @@ROWCOUNT
INSERT INTO YourLogTable
(RunDate, Message)
VALUES
(GETDATE(), 'Selected ' + CONVERT(varchar(10), ISNULL(@Rows, 0)) + ' rows in SSIS_TaskA')
GO
When you use a SQL Task for a SELECT, most of the time you set the destination to a DataSet object; you can then count the number of rows from the DataSet.
I believe you could leverage a T-SQL OUTPUT clause on your UPDATE or INSERT statement and capture that as an SSIS variable... or just drop it into a SQL table.
Here is an example - it's crappy, but it is an example:
-- Table variable to receive the OUTPUT rows
DECLARE @MyTableVar TABLE (
    EmployeeID int,
    OldVacationHours int,
    NewVacationHours int,
    ModifiedDate datetime
);
UPDATE TOP (10) HumanResources.Employee
SET VacationHours = VacationHours * 1.25
OUTPUT INSERTED.EmployeeID,
       DELETED.VacationHours,
       INSERTED.VacationHours,
       INSERTED.ModifiedDate
INTO @MyTableVar;
You could capture @@ROWCOUNT anyplace you need it.
Here is the OUTPUT syntax:
http://technet.microsoft.com/en-us/library/ms177564.aspx