Does Snowflake have log messages similar to those produced by Teradata BTEQ, like the example below, which contains the following details:
Query / statement executed on the database
Result summary of running the query
Elapsed time
Actual result which shows the rows
Error codes, if any
.Logon e/fml, password, acctid
*** Logon successfully completed.
*** Total elapsed time was 3 seconds.
.SET SEPARATOR ' | '
SELECT * FROM department;
*** Query completed. 5 rows found. 4 columns returned.
*** Total elapsed time was 3 seconds.
DeptNo | DeptName | Loc | MgrNo
500 | Engineering | ATL | 10012
700 | Marketing | NYC | 10021
300 | Exec Office | NYC | 10018
600 | Manufacturing | CHI | 10007
100 | Administration| NYC | 10011
.LOGOFF
*** You are now logged off from the DBC.
.EXIT;
You can find this information in QUERY_HISTORY:
https://docs.snowflake.com/en/sql-reference/functions/query_history.html
Retrieve up to the last 100 queries run by the current user (or run by any user on any warehouse on which the current user has the MONITOR privilege):
select *
from table(information_schema.query_history())
order by start_time;
Related
Is it possible to see how a table looked going back 10 days, provided the retention period is 30 days but the table is dropped and recreated on a daily basis?
If the table is truncated instead of recreated, will going back to the 30th day be possible?
Undrop probably restores the latest version of the table before it is dropped. Can it restore any version within the retention period?
This was an interesting question. We can UNDROP even if the table was dropped multiple times.
A good explanation for this can be found here -
https://community.snowflake.com/s/article/Solution-Unable-to-access-Deleted-time-travel-data-even-within-retention-period
https://docs.snowflake.com/en/user-guide/data-time-travel.html#example-dropping-and-restoring-a-table-multiple-times
I tested the scenario too, as shown below.
Refer to the query history below:
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%';
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------+
| QUERY_ID | QUERY_TEXT | START_TIME |
|--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------|
| 01a31a99-0000-81fe-0001-fa120003d75e | select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%'; | 2022-03-22 14:13:58.953 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f7ee | drop table test_undrop_1; | 2022-03-22 14:13:55.098 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d73e | create or replace table test_undrop_1(id number, name varchar2(10)); | 2022-03-22 14:13:53.425 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d72a | drop table test_undrop_1; | 2022-03-22 14:13:46.968 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f79e | create or replace table test_undrop_1(id1 number, name varchar2(10)); | 2022-03-22 14:13:44.002 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d70e | drop table test_undrop_1; | 2022-03-22 14:13:36.078 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f77e | select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%'; | 2022-03-22 14:13:14.711 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d70a | select count(*) from test_undrop_1; | 2022-03-22 14:13:04.640 -0700 |
| 01a31a98-0000-81fe-0001-fa120003d706 | select * from test_undrop_1; | 2022-03-22 14:12:52.230 -0700 |
| 01a31a98-0000-81c6-0001-fa120003f75e | create or replace table test_undrop_1(id1 number, name1 varchar2(10)); | 2022-03-22 14:12:43.734 -0700 |
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------+
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+----+------+
| ID | NAME |
|----+------|
+----+------+
0 Row(s) produced. Time Elapsed: 0.760s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_1;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.142s
UNDROP-1
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.155s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+----+------+
| ID | NAME |
|----+------|
+----+------+
0 Row(s) produced. Time Elapsed: 0.223s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_2;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.191s
UNDROP-2
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.155s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+-----+------+
| ID1 | NAME |
|-----+------|
+-----+------+
0 Row(s) produced. Time Elapsed: 0.140s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_3;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.396s
UNDROP-3 (Yay! Got my table version.)
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.149s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+-----+-------+
| ID1 | NAME1 |
|-----+-------|
+-----+-------+
0 Row(s) produced. Time Elapsed: 0.178s
Yes, you can query a table at any point in its time travel period.
Undrop will restore the table as it was at the point it was dropped.
To restore a table at a previous point in time you would need to use CREATE TABLE… CLONE…
Is it possible to see how a table looked going back 10 days, provided the retention period is 30 days but the table is dropped and recreated on a daily basis?
The UNDROP command restores the most recent version of the table only. But @Pankaj's test shows that you can still restore any dropped table that hasn't been purged yet.
To check all versions of the table in history (including dropped ones), you can use the following command:
SHOW TABLES HISTORY LIKE '<table_name>';
And then do a series of UNDROP and RENAME until you recover the version of the table that you want to restore.
However, this is not good practice. If you recreate the table daily, reduce its retention period. If you want to recover a month of history, use TRUNCATE instead of DROP.
If the table is truncated instead of recreated, will going back to the 30th day be possible?
Yes.
create table tbl_clone clone tbl at (timestamp => dateadd('day',-30,current_timestamp()));
Undrop probably restores the latest version of the table before it was dropped. Can it restore any version within the retention period?
Yes, as already mentioned and pointed out by @Pankaj.
I have a table which has records of the sessions players have played in a group music play (musical instruments).
If a user joins a session and leaves, one row is created. If they join even the same session twice, two rows are created.
Table: music_sessions_user_history
| Column | Type | Default |
| --- | --- | --- |
| id | character varying(64) | uuid_generate_v4() |
| user_id | user_id | |
| created_at | timestamp without time zone | now() |
| session_removed_at | timestamp without time zone | |
| max_concurrent_connections | integer | |
| music_session_id | character varying(64) | |
This table basically records the amount of time a user was in a given session, so you can think of each row as a time range (a tsrange in PG). The max_concurrent_connections column is a count of the number of users who were in the session at once.
So the query, at its heart, needs to find overlapping time ranges for different users in the same session, and then count them up as pairs that played together.
The query needs to report each user that played in a music session with others, and who those other users were.
So for example, if a userA played with userB, and that's the only data in the database, then two rows would be returned like:
| User | Other users in the session |
| --- | --- |
|userA | [userB] |
|userB | [userA] |
But if userA played with both userB and userC, then three rows would be returned like:
| User | Other users in the session |
| --- | --- |
|userA | [userB, userC]|
|userB | [userA, userC]|
|userC | [userA, userB]|
Any help of constructing this query is much appreciated.
Update:
I am able to get the overlapping records using this query.
select m1.user_id, m1.created_at, m1.session_removed_at, m1.max_concurrent_connections, m1.music_session_id
from music_sessions_user_history m1
where exists (select 1
from music_sessions_user_history m2
where tsrange(m2.created_at, m2.session_removed_at, '[]') && tsrange(m1.created_at, m1.session_removed_at, '[]')
and m2.music_session_id = m1.music_session_id
and m2.id <> m1.id);
Need to find a way to convert these results into pairs.
Create a cursor, and for each fetched record determine which other records intersect using BETWEEN on the start and end times.
Append the intersecting results into a temporary table.
Select the results of the temporary table.
My question is fairly simple and there are many answers to it, but my question is more about the query itself under certain conditions.
I have a table like this :
Client | Date | Employee | Last Record | Trained
JOE | April 2020 | John Doe | May 2019 | TRUE
JOE | February 2020 | John Doe | May 2019 | TRUE
JOE | May 2019 | John Doe | May 2019 | FALSE
Now I want to make a simple SQL summary table saying:
Client | Date | Inactive | Trained
JOE | April 2020 | 1 | 1
JOE | February 2020 | 1 | 1
JOE | May 2019 | 0 | 0
So basically, do a count of employees grouped by client and date, with the condition that the difference between Date and Last Record is greater than, let's say, 1 month, and in another column count the number of employees with a TRUE condition.
So my question is basically: how would I go about creating a summary table where I want to set conditions per column, such as a date difference or whether a column is TRUE?
Before you say "use a view": I need to create this table for performance reasons, since I am querying the first table, which has millions of rows, for a report program. It is simpler and faster to instead query a table that holds summary counts with conditions.
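Counting with per-column conditions is usually done with conditional aggregation, i.e. one `SUM(CASE WHEN ... THEN 1 ELSE 0 END)` per condition. Here is a minimal runnable sketch using SQLite via Python, with invented ISO dates and 31 days as a rough stand-in for "greater than 1 month":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (
    client TEXT, date TEXT, employee TEXT, last_record TEXT, trained INTEGER
);
INSERT INTO activity VALUES
    ('JOE', '2020-04-01', 'John Doe', '2019-05-01', 1),
    ('JOE', '2020-02-01', 'John Doe', '2019-05-01', 1),
    ('JOE', '2019-05-01', 'John Doe', '2019-05-01', 0);
""")

# One row per client/date; each output column counts the employees
# meeting that column's condition.
rows = conn.execute("""
SELECT client, date,
       SUM(CASE WHEN julianday(date) - julianday(last_record) > 31
                THEN 1 ELSE 0 END) AS inactive,
       SUM(CASE WHEN trained = 1 THEN 1 ELSE 0 END) AS trained
FROM activity
GROUP BY client, date
ORDER BY date DESC
""").fetchall()

for row in rows:
    print(row)
```

To materialize it as a real summary table rather than a view, the same SELECT can feed a `CREATE TABLE summary AS SELECT ...` refreshed on a schedule.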
So I have a transaction table (Postgres) into which a new row is inserted whenever a user renews their subscription for our service. The table, subscription, looks like this:
+--------+--------+------------+
| userId | prodId | renew_date |
+--------+--------+------------+
| 1 | 1 | 2018-05-01 |
| 1 | 1 | 2018-06-01 |
| 1 | 1 | 2018-07-01 |
| 2 | 3 | 2017-04-16 |
| 2 | 3 | 2017-05-16 |
+--------+--------+------------+
If the analysts want to figure out the Nth renewal or the latest renewal for a particular user or product, I have two solutions to give them:
1.) During my ETL process, I truncate the DW target table and re-populate it with:
select *
, row_number() over (partition by userId, prodId order by renew_date asc) as nth_renewal
from subscription
I can't think of a way to add 1 to the previous renewal if I were to do incremental updates; what if this is the customer's first renewal?
2.) I just copy the exact OLTP table over to the data warehouse and do incremental updates every day. This way, I let the analysts calculate the Nth renewal themselves. (Also, as a follow-up question: is it ever OK to have a duplicate copy of a transactional table in my data warehouse?)
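Option 1's window function can be exercised end-to-end with the question's sample data; below is a sketch using SQLite via Python (SQLite 3.25+ ships window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subscription (userId INTEGER, prodId INTEGER, renew_date TEXT);
INSERT INTO subscription VALUES
    (1, 1, '2018-05-01'), (1, 1, '2018-06-01'), (1, 1, '2018-07-01'),
    (2, 3, '2017-04-16'), (2, 3, '2017-05-16');
""")

# Number each user/product's renewals in date order; the first renewal
# in each partition gets nth_renewal = 1.
rows = conn.execute("""
SELECT userId, prodId, renew_date,
       ROW_NUMBER() OVER (PARTITION BY userId, prodId
                          ORDER BY renew_date ASC) AS nth_renewal
FROM subscription
ORDER BY userId, renew_date
""").fetchall()

for row in rows:
    print(row)
```

For incremental loads, one option (not from the original question) is to insert each new row with `COALESCE((SELECT MAX(nth_renewal) FROM target WHERE userId = ? AND prodId = ?), 0) + 1`; the COALESCE covers the first-renewal case.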
I am working on a project where I'm trying to have the results of a SQL query emailed out when a certain log message appears in the SQL Server database.
My first goal is to isolate the data I need. The relevant table is as follows:
System | Time | Log | Index
1001 | 7/16/2015 7:22 | Fail | 1729943
1002 | 7/17/2015 10:26 | Success | 1743789
1002 | 7/18/2015 10:26 | Success | 1743799
1003 | 7/22/2015 6:14 | Timeout | 1771793
What I'm interested in specifically is the last Time when system 1002 generates Success in Log. It seems simple enough, but System and Log are not unique, and the following query:
SELECT *
FROM DB.LogFiles
WHERE System ='1002' and Log ='Success'
Returns 2 rows:
System | Time | Log | Index
1002 | 7/17/2015 09:43 | Success | 1743789
1002 | 7/18/2015 10:26 | Success | 1743799
I'm just interested in the last Time this condition occurred, so the last row:
1002 | 7/18/2015 10:26 | Success | 1743799
That process will repeat every day, so the next day I would see the following records:
System | Time | Log | Index
1002 | 7/17/2015 09:43 | Success | 1743789
1002 | 7/18/2015 10:26 | Success | 1743799
1002 | 7/9/2015 11:42 | Success | 1748752
I would like to be notified again, but only of the new, latest record:
1002 | 7/9/2015 11:42 | Success | 1748752
The end goal of the project is to have the query scheduled to run every few hours, checking whether a new 'Success' record has been entered. If it has, then generate an email. I'm not sure if that portion can be done in SQL, however, and I may need to look at something outside of it to accomplish this. Any assistance or insight on at least the SQL portion would be most helpful.
If I understand correctly that the index is unique, and that the higher the value, the newer the record, then you can do something like this:
SELECT a.system, a.time, a.log, a.index
FROM log_files a
WHERE a.System ='1002' and a.Log ='Success'
AND a.index = (SELECT MAX(z.index) FROM log_files z
WHERE z.System = a.System and z.Log = a.Log)
WITH X AS
(
SELECT [System]
,[Time]
,[Log]
,[Index]
,ROW_NUMBER() OVER (PARTITION BY [System]
ORDER BY CAST([Time] AS DATETIME) DESC) rn
FROM TableName
)
SELECT [System]
,[Time]
,[Log]
,[Index]
FROM X
WHERE rn = 1
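Both answers' approaches can be checked with a small runnable sketch (SQLite via Python; `[index]` is bracket-quoted because it's a reserved word, and times are stored as ISO strings so they sort correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log_files (system TEXT, time TEXT, log TEXT, [index] INTEGER);
INSERT INTO log_files VALUES
    ('1001', '2015-07-16 07:22', 'Fail',    1729943),
    ('1002', '2015-07-17 09:43', 'Success', 1743789),
    ('1002', '2015-07-18 10:26', 'Success', 1743799),
    ('1003', '2015-07-22 06:14', 'Timeout', 1771793);
""")

# Approach 1: correlated MAX on the unique, increasing index.
newest = conn.execute("""
SELECT a.system, a.time, a.log, a.[index]
FROM log_files a
WHERE a.system = '1002' AND a.log = 'Success'
  AND a.[index] = (SELECT MAX(z.[index]) FROM log_files z
                   WHERE z.system = a.system AND z.log = a.log)
""").fetchone()
print(newest)

# Approach 2: ROW_NUMBER() ordered by time, newest first.
newest2 = conn.execute("""
SELECT system, time, log, [index] FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY system
                                 ORDER BY time DESC) AS rn
    FROM log_files
    WHERE system = '1002' AND log = 'Success'
) WHERE rn = 1
""").fetchone()
print(newest2)
```

Both approaches return the same row, the newest Success for system 1002.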
Add a new field (Flag) to your log table with a default of FALSE. Whenever you send an email regarding a record, set its Flag field to TRUE.
Every time you want to send the email, just check the record with the highest date and only send the email if its Flag field is FALSE. If the Flag is TRUE, just ignore it.
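A minimal sketch of this flag idea (SQLite via Python; the `emailed` column name and `maybe_send_email` helper are illustrative, not from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log_files (
    system TEXT, time TEXT, log TEXT, [index] INTEGER,
    emailed INTEGER DEFAULT 0          -- the new Flag field, FALSE by default
);
INSERT INTO log_files (system, time, log, [index]) VALUES
    ('1002', '2015-07-17 09:43', 'Success', 1743789),
    ('1002', '2015-07-18 10:26', 'Success', 1743799);
""")

def maybe_send_email():
    """Return the newest Success row's index and flag it, or None if the
    newest row has already been notified about."""
    row = conn.execute("""
        SELECT [index], emailed FROM log_files
        WHERE system = '1002' AND log = 'Success'
        ORDER BY time DESC LIMIT 1
    """).fetchone()
    if row is None or row[1]:
        return None                    # nothing new to report
    conn.execute("UPDATE log_files SET emailed = 1 WHERE [index] = ?", (row[0],))
    return row[0]

first = maybe_send_email()    # 1743799: newest row, not yet flagged
second = maybe_send_email()   # None: newest row is now flagged
print(first, second)
```

The actual email sending and scheduling would live outside SQL, e.g. in a SQL Server Agent job or an external script that calls logic like this.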