I'm using RDS with PostgreSQL 12.6 and trying to figure out why set statement_timeout does not work in a pg_cron job.
This is my cron.job query:
SELECT cron.schedule('* * * * *', $$set statement_timeout = '1min';select pg_sleep(5 * 60);$$);
When I check the job status with:
select * from cron.job_run_details
The output is below:
jobid          | 8
runid          | 1318621
job_pid        | 3255
database       | postgres
username       | postgres
command        | set statement_timeout = '1min';select pg_sleep(5 * 60);
status         | failed
return_message | ERROR: canceling statement due to statement timeout
start_time     | 2022-05-27 04:05:00.006857 +00:00
end_time       | 2022-05-27 04:05:30.009318 +00:00
I found that the gap between end_time and start_time is 30 seconds, the same as the global parameter:
SELECT current_setting('statement_timeout'); -- shows 30s
Is there any way to make the cron job honor the timeout set in its command without changing the global parameter?
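One alternative scope to consider, sketched here under the assumption that the job runs as the postgres user shown in job_run_details above: PostgreSQL allows statement_timeout to be set per role or per database rather than cluster-wide, which avoids touching the global parameter. Whether this takes effect for pg_cron sessions on RDS is not verified here.
-- Sketch only: scope the timeout to the role pg_cron connects as,
-- instead of changing the global (parameter group) setting.
ALTER ROLE postgres SET statement_timeout = '10min';
-- Or scope it to the database the jobs run in:
ALTER DATABASE postgres SET statement_timeout = '10min';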
Related
I want to create a cron job in the database using pg_cron to update a value every 90 seconds.
I am looking at this solution:
Can a cron job run every 'x' seconds
but this is not the correct way.
I also looked at Cron job to run a PHP script every 90 seconds between 5AM and 10PM?, but running two crons is not a viable solution; if I go with that approach, I still have to find a way to make the database job sleep for 90 seconds.
If anyone has any ideas, please suggest them.
I want to run a cron job in the database that runs every 90 seconds and updates the value in a certain table after querying it.
An example of the function I mentioned in a comment:
create table update_status(id integer, last_update timestamp);
insert into update_status values (1, now());
CREATE OR REPLACE FUNCTION public.update_check()
 RETURNS void
 LANGUAGE plpgsql
AS $function$
DECLARE
    _last_update     timestamp;
    _update_togo     interval;
    _update_interval interval := '20 secs';
    _sleep_interval  interval;
    _select_val      varchar;
BEGIN
    -- How long ago the tracked row was last updated.
    select into _last_update last_update from update_status where id = 1;
    select into _update_togo now() - _last_update;

    -- If the interval has not fully elapsed yet, sleep out the remainder.
    if _update_togo < _update_interval then
        RAISE NOTICE 'Now start %', clock_timestamp();
        RAISE NOTICE 'Interval %', _update_togo;
        select into _sleep_interval _update_interval - _update_togo;
        RAISE NOTICE 'Sleep interval %', _sleep_interval;
        perform pg_sleep_for(_sleep_interval);
        select into _select_val 'Run at';
        RAISE NOTICE '% %', _select_val, clock_timestamp();
    end if;
END;
$function$;
update update_status set last_update = now();
select update_check();
NOTICE: Now start 2023-01-16 11:53:15.936465-08
NOTICE: Interval 00:00:07.314329
NOTICE: Sleep interval 00:00:12.685671
NOTICE: Run at 2023-01-16 11:53:28.634306-08
update_check
--------------
This is a simplistic proof of concept and would need more work to deal with exceptions.
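A minimal sketch of how the function above could be driven by pg_cron for the 90-second case, assuming the pg_cron extension is available and _update_interval is set to '90 secs'; overlapping runs and clock drift are not handled and would need the same extra work mentioned above:
-- Sketch: invoke the check once a minute; the function itself decides
-- whether to sleep out the remainder of the interval before updating.
SELECT cron.schedule('* * * * *', $$select update_check();$$);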
I have to write a Snowflake task that runs every day, every 2 minutes, from 5:00 AM EST to 5:00 PM EST.
The code I wrote is not working; the task didn't stop running even after 5:00 PM:
CREATE OR REPLACE TASK tsk_master
WAREHOUSE = XS_WH
SCHEDULE = 'USING CRON * 5-17 * * * America/New_York'
TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
COMMENT = 'Master task job to trigger all other tasks'
AS call pntinsight_lnd.SP_ACCT_DIM_1();
Please suggest what I did wrong, how to stop it from running after 5 PM, and how I can set it to run every 2 minutes.
You have to define all the trigger minutes; it looks ugly, but it should work:
CREATE OR REPLACE TASK tsk_master
WAREHOUSE = XS_WH
SCHEDULE = 'USING CRON 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58 5-17 * * * America/New_York'
TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
COMMENT = 'Master task job to trigger all other tasks'
AS call pntinsight_lnd.SP_ACCT_DIM_1();
Regarding "a Snowflake task to run every day, every 2 minutes, from 5:00 AM EST to 5:00 PM EST":
Optional parameters:
/n
Indicates the nth instance of a given unit of time. Each quanta of time is computed independently
So every 2 minutes will be:
SCHEDULE = 'USING CRON */2 5-17 * * * America/New_York'
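Putting that schedule back into the task from the question (a sketch; the warehouse, input format, and procedure name are taken from the question as-is):
CREATE OR REPLACE TASK tsk_master
  WAREHOUSE = XS_WH
  SCHEDULE = 'USING CRON */2 5-17 * * * America/New_York'
  TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
  COMMENT = 'Master task job to trigger all other tasks'
AS
  CALL pntinsight_lnd.SP_ACCT_DIM_1();
Note that the 5-17 hour range means the last run fires at 17:58, just before 6 PM; if runs must stop strictly at 5:00 PM, the hour range would need to be tightened (for example 5-16, with a separate schedule for the 17:00 run). The task also has to be resumed (ALTER TASK tsk_master RESUME) before it starts executing.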
I have this scenario:
SPID = 100 (an SSMS tab, for example)
SELECT TOP 1 * FROM SOME_TABLE
GO
SELECT TOP 1 * FROM SOME_TABLE2
GO
SELECT TOP 1 * FROM SOME_TABLE3
When I run DBCC INPUTBUFFER or query sys.sysprocesses, I get only the last query executed:
SELECT TOP 1 * FROM SOME_TABLE3
I need to get all queries from that session (in this case spid 100), not only the last. Is there a way to do this?
I'm searching for a way to do this using T-SQL; getting a trace with SQL Server Profiler is not an option.
Thanks!
You need to capture the queries using Extended Events or Profiler. It would be better to use XE. Create a session like this one:
CREATE EVENT SESSION [Capture_Queries]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION
    (
        sqlserver.sql_text
    )
    WHERE
    (
        -- session_id comes from the sqlserver predicate source
        sqlserver.session_id = 100
    )
)
ADD TARGET package0.event_file
(
    SET filename = 'D:\CaptureQueries.xel',
        max_file_size = 5,
        max_rollover_files = 1
);
After that you can start and stop it with these commands:
ALTER EVENT SESSION [Capture_Queries] ON SERVER STATE = START
ALTER EVENT SESSION [Capture_Queries] ON SERVER STATE = STOP
Start the session, execute the queries and then stop it. You can see the captured queries in SSMS using Management \ Extended Events \ Sessions \ Capture_Queries node in Object Explorer - there is a package0.event_file node under the session. Double click it to see the collected data.
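If reading the file with T-SQL is preferable to the SSMS viewer, here is a minimal sketch; the file path matches the target defined above, and the XML shredding is illustrative:
-- Read the captured events back with T-SQL instead of the SSMS viewer.
SELECT
    CAST(event_data AS XML).value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS captured_sql,
    CAST(event_data AS XML).value('(event/@timestamp)[1]', 'datetime2') AS event_time
FROM sys.fn_xe_file_target_read_file('D:\CaptureQueries*.xel', NULL, NULL, NULL);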
I'm trying to find a better way of finding duplicates in SQL Server. This took over 20 minutes to run with just over 300 million records before results started showing in the results window within SSMS. Another 22 minutes elapsed before it crashed.
Then SSMS threw this error after displaying 16,777,216 records:
An error occurred while executing batch. Error message is: Exception of type 'System.OutOfMemoryException' was thrown.
Schema:
ENCOUNTER_NUM - numeric(22,0)
CONCEPT_CD - varchar(50)
PROVIDER_ID - varchar(50)
START_DATE - datetime
MODIFIER_CD - varchar(100)
INSTANCE_NUM - numeric(18,0)
SELECT
ROW_NUMBER() OVER (ORDER BY f1.[ENCOUNTER_NUM],f1.[CONCEPT_CD],f1.[PROVIDER_ID],f1.[START_DATE],f1.[MODIFIER_CD],f1.[INSTANCE_NUM]),
f1.[ENCOUNTER_NUM],
f1.[CONCEPT_CD],
f1.[PROVIDER_ID],
f1.[START_DATE],
f1.[MODIFIER_CD],
f1.[INSTANCE_NUM]
FROM
[dbo].[I2B2_OBSERVATION_FACT] f1
INNER JOIN [dbo].[I2B2_OBSERVATION_FACT] f2 ON
f1.[ENCOUNTER_NUM] = f2.[ENCOUNTER_NUM]
AND f1.[CONCEPT_CD] = f2.[CONCEPT_CD]
AND f1.[PROVIDER_ID] = f2.[PROVIDER_ID]
AND f1.[START_DATE] = f2.[START_DATE]
AND f1.[MODIFIER_CD] = f2.[MODIFIER_CD]
AND f1.[INSTANCE_NUM] = f2.[INSTANCE_NUM]
Not sure how much faster this is, but worth a try.
SELECT
COUNT(*) AS Dupes,
f1.[ENCOUNTER_NUM],
f1.[CONCEPT_CD],
f1.[PROVIDER_ID],
f1.[START_DATE],
f1.[MODIFIER_CD],
f1.[INSTANCE_NUM]
FROM
[dbo].[I2B2_OBSERVATION_FACT] f1
GROUP BY
f1.[ENCOUNTER_NUM],
f1.[CONCEPT_CD],
f1.[PROVIDER_ID],
f1.[START_DATE],
f1.[MODIFIER_CD],
f1.[INSTANCE_NUM]
HAVING
COUNT(*) > 1
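If the duplicate rows themselves are needed rather than just their counts, here is a sketch using the same ROW_NUMBER() the original query already relies on, partitioned by the duplicate key (table and column names taken from the question):
-- Flag every copy beyond the first within each duplicate key.
WITH numbered AS
(
    SELECT
        ROW_NUMBER() OVER (
            PARTITION BY [ENCOUNTER_NUM], [CONCEPT_CD], [PROVIDER_ID],
                         [START_DATE], [MODIFIER_CD], [INSTANCE_NUM]
            ORDER BY [ENCOUNTER_NUM]
        ) AS rn,
        *
    FROM [dbo].[I2B2_OBSERVATION_FACT]
)
SELECT *
FROM numbered
WHERE rn > 1;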
I have a table "logs" in a Postgres database which holds the error logs with their creation dates.
Sample query: Select creation_date from logs
returns
"2011-09-20 11:27:34.836"
"2011-09-20 11:27:49.799"
"2011-09-20 11:28:04.799"
"2011-09-20 11:28:19.802"
I can find out the latest error using the command:
SELECT max(creation_date) from logs;
which will return "2012-02-06 12:19:28.448"
Now I am looking for a query which could return the errors that occurred in the last 15 minutes.
Any help on this will be appreciated.
This should do the trick:
SELECT * FROM logs WHERE creation_date >= NOW() - INTERVAL '15 minutes'
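A small follow-on sketch, in case a count of recent errors is wanted rather than the rows themselves (same table and column as above):
-- Count the errors logged in the last 15 minutes.
SELECT count(*)
FROM logs
WHERE creation_date >= NOW() - INTERVAL '15 minutes';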