I created a database with update=1 so that I can update records based on their timestamp, but I found that the update does not work when I use batch insert. With a normal (single-row) insert, updating the record works.
Welcome to the TDengine shell from Linux, Client Version:2.1.7.2
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> create database test update 1;
Query OK, 0 of 0 row(s) in database (0.007977s)
taos> use test;
Database changed.
taos> create table tb(ts timestamp, c1 int);
Query OK, 0 of 0 row(s) in database (0.015282s)
taos> insert into tb values(now, 1)(now, null);
Query OK, 1 of 1 row(s) in database (0.000797s)
taos> select * from tb;
ts | c1 |
========================================
2021-09-28 11:37:32.339 | 1 |
Query OK, 1 row(s) in set (0.002671s)
taos> insert into tb values("2021-09-28 11:37:32.339", null);
Query OK, 1 of 1 row(s) in database (0.000611s)
taos> select * from tb;
ts | c1 |
========================================
2021-09-28 11:37:32.339 | NULL |
Query OK, 1 row(s) in set (0.002591s)
What is the difference between batch insert and normal insert in TDengine?
In a batch insert, "now" is evaluated once, so both rows receive the same timestamp. A time-series database requires each record in a table to have a distinct timestamp, so two rows with the same timestamp in one batch cannot both be valid records, and the second one is not applied as an update.
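A minimal workaround sketch, assuming the documented TDengine offset arithmetic on now (1a = one millisecond) is accepted in the VALUES clause, so each row in the batch gets its own timestamp:
insert into tb values(now, 1)(now+1a, null);  -- now+1a is 1 ms after now, so the rows no longer collide
Alternatively, spell out an existing row's timestamp explicitly, as in the second session above, and update=1 will overwrite that row.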
I created a continuous query in a TDengine database by following the example on the official website, as shown below:
taos> create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
Query OK, 0 of 0 row(s) in database (0.002309s)
taos> create table D1001 using meters tags ("Beijing.Chaoyang", 2);
Query OK, 0 of 0 row(s) in database (0.002737s)
taos> create table D1002 using meters tags ("Beijing.Haidian", 2);
Query OK, 0 of 0 row(s) in database (0.004740s)
taos> create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
Query OK, 0 of 0 row(s) in database (0.005207s)
taos> show streams;
streamId | user | dest table | ip:port | created time | exec time | time(us) | sql | cycles |
=======================================================================================================================================================================================================================================
3:1 | _root | avg_vol | 127.0.0.1:37643 | 2022-01-21 16:20:32.538 | NULL | 0 | select avg(voltage) from me... | 0 |
Query OK, 1 row(s) in set (0.002328s)
taos> kill stream 3:1;
Query OK, 0 row(s) in set (0.000418s)
taos> show streams;
Query OK, 0 row(s) in set (0.001015s)
It looks good, but when I restart the database I still see this continuous query. How do I kill it completely?
Try dropping the destination table you created:
drop table avg_vol;
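Combining the two steps, a cleanup sketch using the stream id reported by show streams above (3:1):
kill stream 3:1;      -- stop the running continuous query
drop table avg_vol;   -- drop the destination table so the query is not restored after a restart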
I found that TDengine has a KEEP parameter for CREATE DATABASE. The definition from TDengine's website (https://www.taosdata.com/en/documentation/taos-sql#management) is: "The KEEP parameter refers to the number of days to save a modified data file." I think this parameter is very useful, since there is no need to delete historical data manually, so I created a database which only keeps 10 days:
CREATE DATABASE IF NOT EXISTS db_keep KEEP 10 PRECISION 'ms' ;
create table test_keep(ts timestamp,desc nchar(20));
After creating the database and table, I tried to insert some data. These are my insert statements:
insert into test_keep values(now,'now');
insert into test_keep values('2021-08-31 10:28:53.521','yesterday');
insert into test_keep values('2021-09-02 10:28:53.521','tomorrow');
insert into test_keep values('2021-08-25 10:28:53.521','6 days before');
insert into test_keep values('2021-09-20 12:28:53.521','20 days later');
insert into test_keep values('2021-08-21 10:28:53.521','10 days before');
insert into test_keep values('2021-08-11 10:28:53.521','20 days before');
The last three statements failed with the error "DB error: Timestamp data out of range":
taos> insert into test_keep values(now,'now');
Query OK, 1 of 1 row(s) in database (1.024000s)
taos> insert into test_keep values('2021-08-31 10:28:53.521','yesterday');
Query OK, 1 of 1 row(s) in database (0.006000s)
taos> insert into test_keep values('2021-09-02 10:28:53.521','tomorrow');
Query OK, 1 of 1 row(s) in database (0.004000s)
taos> insert into test_keep values('2021-08-25 10:28:53.521','6 days before');
Query OK, 1 of 1 row(s) in database (0.004000s)
taos> insert into test_keep values('2021-09-20 12:28:53.521','20 days later');
DB error: Timestamp data out of range (0.005000s)
taos> insert into test_keep values('2021-08-21 10:28:53.521','10 days before');
DB error: Timestamp data out of range (0.004000s)
taos> insert into test_keep values('2021-08-11 10:28:53.521','20 days before');
DB error: Timestamp data out of range (0.004000s)
I thought this was because my KEEP was too small, so I made it larger:
ALTER DATABASE db_keep KEEP 365;
Then I tried the failed statements again, and found that I still cannot insert data with a timestamp more than a few days after now:
taos> insert into test_keep values('2021-09-20 12:28:53.521','20 days later');
DB error: Timestamp data out of range (0.005000s)
taos> insert into test_keep values('2021-08-21 10:28:53.521','10 days before');
Query OK, 1 of 1 row(s) in database (0.004000s)
taos> insert into test_keep values('2021-08-11 10:28:53.521','20 days before');
Query OK, 1 of 1 row(s) in database (0.004000s)
I want to ask how KEEP should be used, and how it limits the timestamps of the data.
For the timestamps a database accepts, two configuration options matter most:
keep: the longest time, in days, that data is retained, measured back from the current time (3650 days by default). When the newest timestamp in a data file falls outside this range, the whole file is deleted, and a timestamp older than now - keep is rejected as "Timestamp data out of range".
days: the time span covered by a single data file, 10 days by default. A timestamp may not be later than now + days.
So the acceptable timestamp range in a database is [now - keep, now + days]: past data cannot be older than now - keep, and future data cannot be more than days ahead of now.
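As a concrete sketch (assuming the session above ran around 2021-09-01, and stating the DAYS option explicitly at its default; with KEEP 10 and DAYS 10 the writable window is roughly [2021-08-22, 2021-09-11]):
CREATE DATABASE IF NOT EXISTS db_keep KEEP 10 DAYS 10 PRECISION 'ms';
insert into test_keep values('2021-08-25 10:28:53.521','6 days before');  -- inside [now - keep, now + days]: accepted
insert into test_keep values('2021-09-20 12:28:53.521','20 days later');  -- after now + days: "Timestamp data out of range"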
Since my reputation is not enough to insert images here, please see the reference link below for a diagram (the grey region marks timestamps that can NOT be inserted).
Reference:
https://segmentfault.com/a/1190000040617572
Table emp with columns:
id | name | sal | deptno | location
One user is inserting/updating 20 million records into that table.
Another user is retrieving data using select statement from the same emp table.
The insert/update statement will take 2 hours, and the other user retrieves data in the middle of the insertion/update process.
How many records will the select statement return?
How do we check this, or what steps do we need to follow to achieve this task in SQL Server?
I have columns with numeric(5,2) data type.
When the actual data is 1.25, it displays correctly as 1.25.
When the actual data is 1.00, it displays as 1.
Can anyone tell me why? Is there something I need to set so that the two decimal zeros display?
I think this may be an issue specific to pgadmin4. Consider:
> createdb test
> psql -d test
psql (9.4.9)
Type "help" for help.
test=# create table mytest(id serial not null primary key, name varchar(30),salary numeric(5,2));
CREATE TABLE
test=# select * from mytest;
id | name | salary
----+------+--------
(0 rows)
test=# insert into mytest(name,salary) values('fred',10.3);
INSERT 0 1
test=# insert into mytest(name,salary) values('mary',11);
INSERT 0 1
test=# select * from mytest;
id | name | salary
----+------+--------
1 | fred | 10.30
2 | mary | 11.00
(2 rows)
test=# select salary from mytest where name = 'mary';
salary
--------
11.00
(1 row)
This example is with version 9.4, as you can see, but it would be a simple test to see whether the problem lies with 9.6 or with pgAdmin4. In pgAdmin3 the value is displayed correctly with decimal places.
The last time I tried pgAdmin4 it had a number of annoying issues that sent me scurrying back to pgAdmin3 for the time being. However, there is a tracker where you can seek confirmation of the bug: https://redmine.postgresql.org/projects/pgadmin4
This is a bug in pgAdmin4 and has already been reported: https://redmine.postgresql.org/issues/2039
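If you need the trailing zeros to survive whichever client displays the result, a workaround sketch (plain PostgreSQL, independent of the pgAdmin4 bug) is to format the value as text with to_char:
-- 'FM' suppresses padding; the 0 digits in the pattern force exactly two decimal places
select name, to_char(salary, 'FM999990.00') as salary_text
from mytest
where name = 'mary';
-- returns the string '11.00' regardless of how the client renders numeric values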
I know that syscat.tables exists in DB2.
I also tried to find the count in user_tables, and I got this output:
db2 => select count(*) from user_tables
1
-----------
999
1 record(s) selected.
But I couldn't describe the table user_tables, while I could describe any other table.
Example:
db2 => describe table user_tables
                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
0 record(s) selected.
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a
query is an empty table. SQLSTATE=02000
Could you help me understand why this is happening?
DB2 has an Oracle compatibility mode which needs to be enabled for a database. As part of this, users can opt to have Oracle data dictionary-compatible views created. One of those views is user_tables.
Could you try the following (not tested):
describe select * from user_tables
This should return the schema of the result table, i.e., the columns of that view.
SELECT * FROM systables WHERE SYSTEM_TABLE_SCHEMA ='YOURSCHEMA'
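For comparison, on DB2 for LUW the native catalog gives the same count without the compatibility views (a sketch; 'YOURSCHEMA' is a placeholder, and type = 'T' restricts the result to ordinary tables):
select count(*)
from syscat.tables
where tabschema = 'YOURSCHEMA'
  and type = 'T';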