How to completely stop a continuous query in a TDengine database?

I created a continuous query in TDengine database by following the example in official website as follows:
taos> create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
Query OK, 0 of 0 row(s) in database (0.002309s)
taos> create table D1001 using meters tags ("Beijing.Chaoyang", 2);
Query OK, 0 of 0 row(s) in database (0.002737s)
taos> create table D1002 using meters tags ("Beijing.Haidian", 2);
Query OK, 0 of 0 row(s) in database (0.004740s)
taos> create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
Query OK, 0 of 0 row(s) in database (0.005207s)
taos> show streams;
streamId | user | dest table | ip:port | created time | exec time | time(us) | sql | cycles |
=======================================================================================================================================================================================================================================
3:1 | _root | avg_vol | 127.0.0.1:37643 | 2022-01-21 16:20:32.538 | NULL | 0 | select avg(voltage) from me... | 0 |
Query OK, 1 row(s) in set (0.002328s)
taos> kill stream 3:1;
Query OK, 0 row(s) in set (0.000418s)
taos> show streams;
Query OK, 0 row(s) in set (0.001015s)
It looks good, but when I restart the database, I still see this continuous query. How can I kill it completely?

Try dropping the destination table you created:
drop table avg_vol;
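To remove the continuous query permanently, combine both steps: kill the running stream and then drop its destination table, since (as the answer above suggests) the query comes back after a restart as long as the destination table still exists. A minimal sketch using the names from the session above (assuming stream 3:1 is still listed by show streams):
taos> kill stream 3:1;
taos> drop table avg_vol;
taos> show streams;
After this, show streams should stay empty even after the database is restarted.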

Related

Time travelling a 'drop & recreate' table for any past day within retention period

Is it possible to see how a table looked 10 days back, provided the retention period is 30 days but the table is dropped and recreated on a daily basis?
If the table is truncated instead of recreated, will going back to the 30th day be possible?
UNDROP probably restores the latest version of the table before it was dropped. Can it restore any version within the retention period?
This was an interesting question. We can UNDROP even if the table was deleted multiple times.
A good explanation of this can be found here:
https://community.snowflake.com/s/article/Solution-Unable-to-access-Deleted-time-travel-data-even-within-retention-period
https://docs.snowflake.com/en/user-guide/data-time-travel.html?_ga=2.118857801.110877935.1647736580-83170813.1644772168&_gac=1.251330994.1646009703.EAIaIQobChMIuZ3o2Zeh9gIVjR-tBh3PvQUIEAAYASAAEgKYevD_BwE#example-dropping-and-restoring-a-table-multiple-times
I tested the scenario too, as shown below.
Query history:
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%';
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------+
| QUERY_ID | QUERY_TEXT | START_TIME |
|--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------|
| 01a31a99-0000-81fe-0001-fa120003d75e | select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%'; | 2022-03-22 14:13:58.953 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f7ee | drop table test_undrop_1; | 2022-03-22 14:13:55.098 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d73e | create or replace table test_undrop_1(id number, name varchar2(10)); | 2022-03-22 14:13:53.425 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d72a | drop table test_undrop_1; | 2022-03-22 14:13:46.968 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f79e | create or replace table test_undrop_1(id1 number, name varchar2(10)); | 2022-03-22 14:13:44.002 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d70e | drop table test_undrop_1; | 2022-03-22 14:13:36.078 -0700 |
| 01a31a99-0000-81c6-0001-fa120003f77e | select query_id,query_text,start_time from table(information_schema.query_history()) where query_text like '%test_undrop_1%'; | 2022-03-22 14:13:14.711 -0700 |
| 01a31a99-0000-81fe-0001-fa120003d70a | select count(*) from test_undrop_1; | 2022-03-22 14:13:04.640 -0700 |
| 01a31a98-0000-81fe-0001-fa120003d706 | select * from test_undrop_1; | 2022-03-22 14:12:52.230 -0700 |
| 01a31a98-0000-81c6-0001-fa120003f75e | create or replace table test_undrop_1(id1 number, name1 varchar2(10)); | 2022-03-22 14:12:43.734 -0700 |
+--------------------------------------+-------------------------------------------------------------------------------------------------------------------------------+-------------------------------+
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+----+------+
| ID | NAME |
|----+------|
+----+------+
0 Row(s) produced. Time Elapsed: 0.760s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_1;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.142s
UNDROP-1
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.155s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+----+------+
| ID | NAME |
|----+------|
+----+------+
0 Row(s) produced. Time Elapsed: 0.223s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_2;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.191s
UNDROP-2
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.155s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+-----+------+
| ID1 | NAME |
|-----+------|
+-----+------+
0 Row(s) produced. Time Elapsed: 0.140s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>alter table TEST_UNDROP_1 rename to test_undrop_1_3;
+----------------------------------+
| status |
|----------------------------------|
| Statement executed successfully. |
+----------------------------------+
1 Row(s) produced. Time Elapsed: 0.396s
UNDROP-3 (Yay! Got my table version)
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>undrop table test_undrop_1;
+--------------------------------------------+
| status |
|--------------------------------------------|
| Table TEST_UNDROP_1 successfully restored. |
+--------------------------------------------+
1 Row(s) produced. Time Elapsed: 0.149s
SNOWFLAKE1#COMPUTE_WH#TEST_DB.PUBLIC>select * from test_undrop_1;
+-----+-------+
| ID1 | NAME1 |
|-----+-------|
+-----+-------+
0 Row(s) produced. Time Elapsed: 0.178s
Yes, you can query a table at any point in its time travel period.
Undrop will restore the table as it was at the point it was dropped.
To restore a table at a previous point in time, you would need to use CREATE TABLE… CLONE…
Is it possible to see how a table looked 10 days back, provided the retention period is 30 days but the table is dropped and recreated on a daily basis?
The UNDROP command restores the most recent version of the table only. But @Pankaj's test shows that you can still restore any dropped table that hasn't been purged yet.
To check all versions of the table in history (including dropped ones), you can use the following command:
SHOW TABLES HISTORY LIKE '<table_name>';
And then do a series of UNDROP and RENAME until you recover the version of the table that you want to restore.
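For illustration, a minimal sketch of that UNDROP/RENAME cycle, assuming the dropped table is named my_table (each UNDROP restores the most recently dropped version, which you then rename out of the way to reach the next older one):
SHOW TABLES HISTORY LIKE 'my_table';
UNDROP TABLE my_table;
ALTER TABLE my_table RENAME TO my_table_v1;
UNDROP TABLE my_table;
ALTER TABLE my_table RENAME TO my_table_v2;
-- repeat until the version you want is back, then query or clone it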
However, this is not good practice. If you recreate the table daily, reduce its retention period. If you want to be able to recover a month of history, truncate the table instead of dropping it.
If the table is truncated instead of recreated, will going back to the 30th day be possible?
Yes.
create table tbl_clone clone tbl at (timestamp => dateadd('day',-30,current_timestamp()));
UNDROP probably restores the latest version of the table before it was dropped. Can it restore any version within the retention period?
Yes, as already mentioned and demonstrated by @Pankaj.

update not working during batch insert in TDengine database

I created a database with update=1 so that I can update some records based on the timestamp, but I found the update is not working when I use a batch insert. I also tried a normal insert, and there updating the record works.
Welcome to the TDengine shell from Linux, Client Version:2.1.7.2
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> create database test update 1;
Query OK, 0 of 0 row(s) in database (0.007977s)
taos> use test;
Database changed.
taos> create table tb(ts timestamp, c1 int);
Query OK, 0 of 0 row(s) in database (0.015282s)
taos> insert into tb values(now, 1)(now, null);
Query OK, 1 of 1 row(s) in database (0.000797s)
taos> select * from tb;
ts | c1 |
========================================
2021-09-28 11:37:32.339 | 1 |
Query OK, 1 row(s) in set (0.002671s)
taos> insert into tb values("2021-09-28 11:37:32.339", null);
Query OK, 1 of 1 row(s) in database (0.000611s)
taos> select * from tb;
ts | c1 |
========================================
2021-09-28 11:37:32.339 | NULL |
Query OK, 1 row(s) in set (0.002591s)
What is the difference between batch insert and normal insert in TDengine?
In a batch insert, both occurrences of "now" resolve to the same timestamp, so the two records collide. A time-series database requires each record in a table to have a distinct timestamp, so the pair cannot be stored as two valid records, which is why only one row shows up.
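A minimal sketch of a workaround (not from the original answer; timestamps are invented for illustration): give each row in the batch an explicit, distinct timestamp, and a later insert that reuses one of those timestamps then updates that row as expected under update=1.
taos> insert into tb values("2021-09-28 12:00:00.000", 1)("2021-09-28 12:00:00.001", 2);
taos> insert into tb values("2021-09-28 12:00:00.000", null);
taos> select * from tb;
The second insert reuses the first timestamp, so with update=1 the value in that row should change to NULL.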

Execute Stored Procedure using Trigger on VIEW when multiple columns are updated with specific values

I have a VIEW (in SQL SERVER) containing the following columns:
itemID [varchar(50)] | itemStatus [varchar(20)] | itemCode [varchar(20)] | itemTime [varchar(5)]
The itemID column contains id values that do not change. The remaining 3 columns, however, get updated periodically. I understand it is more difficult to create a trigger on a VIEW.
An example of the table containing data would be:
|itemID|itemStatus|itemCode|itemTime|
|------|----------|--------|--------|
|  1   | OK       |   30   | 00:10  |
|  2   | OK       |   40   | 02:30  |
|  3   | STOPPED  |   30   | 00:01  |
When itemStatus = STOPPED and itemCode = 30, I would like to execute a stored procedure (sp_Alert), passing the itemID as a parameter.
Any help would be greatly appreciated.
Since a trigger is at least "not easy" here, I'd like to propose an ugly but functional way out. You can create a stored procedure that checks itemStatus and itemCode; if they match your criteria, it starts sp_Alert from within that procedure.
create procedure check_status as
begin
    -- pick up an itemID that matches the alert criteria
    declare @item_id int;

    select @item_id = itemID
    from vw_itemstatus
    where itemStatus = 'STOPPED'
      and itemCode = 30;

    -- only fire the alert if a matching row was found
    if @item_id is not null
        exec sp_Alert @item_id;
end
Depending on how critical this functionality is and how many resources you can spend on it, you can schedule this procedure via SQL Server Agent. If you run it on a short interval, it will behave similarly to what you had in mind.
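For completeness, a rough sketch of scheduling it with SQL Server Agent using the msdb job procedures (the job, step, schedule, and database names here are made up; double-check the parameters against your server version):
use msdb;
exec sp_add_job @job_name = 'check_item_status';
exec sp_add_jobstep @job_name = 'check_item_status',
    @step_name = 'run check_status',
    @subsystem = 'TSQL',
    @database_name = 'YourDb',
    @command = 'exec dbo.check_status;';
exec sp_add_jobschedule @job_name = 'check_item_status',
    @name = 'every minute',
    @freq_type = 4,
    @freq_interval = 1,
    @freq_subday_type = 4,
    @freq_subday_interval = 1;
exec sp_add_jobserver @job_name = 'check_item_status';
Here freq_type = 4 means a daily schedule, and freq_subday_type = 4 with freq_subday_interval = 1 repeats it every minute.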

PostgreSQL 9.6 with pgAdmin 4 - numeric data type

I have columns with numeric(5,2) data type.
When the actual data is 1.25, it displays correctly as 1.25.
When the actual data is 1.00, it displays as 1.
Can anyone tell me why? Is there something that I need to set so that the two decimal zeros display?
I think this may be an issue specific to pgadmin4. Consider:
> createdb test
> psql -d test
psql (9.4.9)
Type "help" for help.
test=# create table mytest(id serial not null primary key, name varchar(30),salary numeric(5,2));
CREATE TABLE
test=# select * from mytest;
id | name | salary
----+------+--------
(0 rows)
test=# insert into mytest(name,salary) values('fred',10.3);
INSERT 0 1
test=# insert into mytest(name,salary) values('mary',11);
INSERT 0 1
test=# select * from mytest;
id | name | salary
----+------+--------
1 | fred | 10.30
2 | mary | 11.00
(2 rows)
test=# select salary from mytest where name = 'mary';
salary
--------
11.00
(1 row)
This example is with version 9.4, as you can see, but it would be a simple test to see whether the problem is with 9.6 or pgAdmin4. In pgAdmin3 the value is displayed correctly with decimal places.
Last time I tried pgAdmin4 it had a number of annoying issues that sent me scurrying back to pgAdmin3 for the time being. However, there is a tracker where you can seek confirmation of the bug: https://redmine.postgresql.org/projects/pgadmin4
This is a bug in pgAdmin4 and has already been reported: https://redmine.postgresql.org/issues/2039

Sybase Query Plan and Tree Data Structure

This is regarding the query plan of Sybase and how the tree is formed based on the query plan.
1) How is this query plan formed into a proper tree?
Starting with EMIT: Insert is the child of Emit, Restrict is the child of Insert, and so on. It doesn't tally with the explanation.
2) May I know how the actual processing takes place and how the interim results flow to achieve the final outcome? And what is the maximum number of children a node can have?
Sorry for such a long example.
Text delete operator
Another type of query plan where a DML operator can have more than one child operator is the alter table drop textcol command, where textcol is the name of a column whose datatype is text, image, or unitext. The following queries and query plan are an example of the use of the text delete operator:
1> use tempdb
1> create table t1 (c1 int, c2 text, c3 text)
1> set showplan on
1> alter table t1 drop c2
QUERY PLAN FOR STATEMENT 1 (at line 1).
Optimized using the Abstract Plan in the PLAN clause.
5 operator(s) under root
The type of query is ALTER TABLE.
ROOT:EMIT Operator
|INSERT Operator
| The update mode is direct.
|
| |RESTRICT Operator
| |
| | |SCAN Operator
| | | FROM TABLE
| | | t1
| | | Table Scan.
| | | Forward Scan.
| | | Positioning at start of table.
| | | Using I/O Size 2 Kbytes for data pages.
| | | With LRU Buffer Replacement Strategy for data pages.
| |TEXT DELETE Operator
| | The update mode is direct.
| |
| | |SCAN Operator
| | | FROM TABLE
| | | t1
| | | Table Scan.
| | | Forward Scan.
| | | Positioning at start of table.
| | | Using I/O Size 2 Kbytes for data pages.
| | | With LRU Buffer Replacement Strategy for data pages.
| TO TABLE
| #syb__altab
| Using I/O Size 2 Kbytes for data pages.
Below is the explanation:
One of the two text columns in t1 is dropped, using the alter table command. The showplan output looks like a select into query plan because alter table internally generated a select into query plan. The insert operator calls on its left child operator, the scan of t1, to read the rows of t1, and builds new rows with only the c1 and c3 columns inserted into #syb_altab. When all the new rows have been inserted into #syb_altab, the insert operator calls on its right child, the text delete operator, to delete the text page chains for the c2 columns that have been dropped from t1. Post-processing replaces the original pages of t1 with those of #syb_altab to complete the alter table command.
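To address question 1: reading the indentation of the showplan output above (each extra "|" column is one level deeper), the operator tree it implies looks roughly like this, with EMIT as the root and INSERT having two children, the RESTRICT branch that reads the surviving columns and the TEXT DELETE branch that removes the dropped column's text page chains:
EMIT
|-- INSERT (to table #syb__altab)
    |-- RESTRICT
    |   |-- SCAN (t1)
    |-- TEXT DELETE
        |-- SCAN (t1)
The interim rows flow bottom-up: each SCAN feeds its parent, INSERT consumes its left branch first to populate #syb__altab, then its right branch to drop the text chains, and EMIT returns the final status.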
