How can I update a value to NULL in TDengine database? - tdengine

I want to update a value to NULL by inserting a row containing NULL, as below, but it doesn't seem to work:
create table t1 (ts timestamp,num1 int,num2 int);
insert into t1 (ts,num1,num2) values ('xxxx',1,2);
insert into t1 values ('xxxxx',1,NULL);
It doesn't work no matter whether I create the database with UPDATE 1 or UPDATE 2.

Sorry, TDengine 2.0 doesn't support this, but 3.0 does and needs no configuration: a write to an existing timestamp is "upserted" automatically.
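A minimal sketch of how this looks, assuming TDengine 3.0's automatic upsert behavior (the timestamp is just a placeholder):
create table t1 (ts timestamp, num1 int, num2 int);
insert into t1 values ('2023-01-01 00:00:00.000', 1, 2);
-- writing the same timestamp again upserts the row, so num2 becomes NULL
insert into t1 values ('2023-01-01 00:00:00.000', 1, NULL);
select * from t1;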

Related

T-SQL Increment Id after Insert

I'm currently working on a stored procedure in SQL Server 2012 using T-SQL. My problem: I have several SWOTs (e.g. for a specific client) holding several SWOTParts (strengths, weaknesses, opportunities, and threats). I store the values in a table Swot as well as in another table SwotPart.
My foreign Key link is SwotId in SwotPart, thus 1 Swot can hold N SwotParts. Hence, I store the SwotId in every SwotPart.
I can have many Swots and now need to set the SwotId correctly to create the foreign key. I set the SwotId using SCOPE_IDENTITY(), but unfortunately it only returns the last SwotId from the DB. I'm looking for something like a for loop that increments the SwotId for each row inserted by the 1st insert.
DECLARE @SwotId INT = 1;
-- 1st insert
SET NOCOUNT ON
INSERT INTO [MySchema].[SWOT]([SwotTypeId]) -- Type can be e.g. a specific client
SELECT SwotTypeId
FROM #SWOTS
SET @SwotId = SCOPE_IDENTITY(); -- currently e.g. 7, but should increment: 1, 2, 3...
-- 2nd insert
SET NOCOUNT ON
INSERT INTO [MySchema].[SwotPart]([SwotId], [FieldTypeId], [Label]) -- FieldType can be e.g. Strength
SELECT @SwotId, FieldTypeId, Label
FROM #SWOTPARTS
Do you know how to solve this issue? What could I use instead of SCOPE_IDENTITY()?
Thank you very much!
You can output the inserted rows into a temporary table, then join your #swotparts to the temporary table based on the natural key (whatever unique column set ties them together beyond the SwotId). This solves the problem without resorting to loops or cursors, while also overcoming the obstacle of doing a single swot at a time.
set nocount, xact_abort on;
create table #swot (SwotId int, SwotTypeId int);
insert into MySchema.swot (SwotTypeId)
output inserted.SwotId, inserted.SwotTypeId into #swot
select SwotTypeId
from #swots;
insert into MySchema.SwotPart(SwotId, FieldTypeId, Label)
select s.SwotId, p.FieldTypeId, p.Label
from #swotparts p
inner join #swot s
on p.SwotTypeId = s.SwotTypeId;
Unfortunately I can't comment, so I'll leave you an answer to hopefully clarify some things:
Since you need to create the correct foreign key, I don't understand why you need to increment a value instead of using the id inserted into the SWOT table.
I suggest returning the inserted id using SCOPE_IDENTITY() right after the insert statement and using it for your insert into the SwotPart table (there is plenty of info about it and how to use it):
DECLARE @SwotId INT;
-- 1st insert
INSERT INTO [MySchema].[SWOT]([SwotTypeId]) -- Type can be e.g. a specific client
VALUES (1);
SET @SwotId = SCOPE_IDENTITY();
-- 2nd insert
INSERT INTO [MySchema].[SwotPart]([SwotId], [FieldTypeId], [Label])
SELECT @SwotId, FieldTypeId, Label
FROM #SWOTPARTS

Using `BEFORE INSERT` trigger to change the datatype of incoming data to match the column datatype in PostgreSQL

I have a Postgres table with a column C which has type T. People will be using COPY to insert data into this table. However, sometimes they try to insert a value for C that isn't of type T, and I have a Postgres function which can convert the value to T.
I'm trying to write a BEFORE INSERT trigger on the table which will call this function on the data so that I get no insert type errors. However, it doesn't appear to work: I'm getting errors when trying to insert the data, even with the trigger in place.
Before I spend too much time investigating, I want to find out if this is possible. Can I use triggers in this way to change the type of incoming data?
I want this to run on postgresql 9.3, but I have noticed the error and non-functioning trigger on postgres 9.5.
As Patrick stated you have to specify a permissive target so that Postgres validation doesn't reject the data before you get a chance to manipulate it.
Another way without a second table, is to create a view on your base table that casts everything to varchar, and then have an INSTEAD OF trigger that populates the base table whenever an insert is tried on the view.
For example, the table tab1 below has an integer column. The view v_tab1 has a varchar instead so any insert will work for the view. The instead of trigger then checks to see if the entered value is numeric and if not uses a 0 instead.
create table tab1 (i1 int, v1 varchar);
create view v_tab1 as select cast(i1 as varchar) i1, v1 from tab1;
create or replace function v_tab1_insert_trgfun() returns trigger as
$$
declare
safe_i1 int;
begin
if new.i1 ~ '^([0-9]+)$' then
safe_i1 = new.i1::int;
else
safe_i1 = 0;
end if;
insert into tab1 (i1, v1) values (safe_i1, new.v1);
return new;
end;
$$
language plpgsql;
create trigger v_tab1_insert_trigger instead of insert on v_tab1 for each row execute procedure v_tab1_insert_trgfun();
Now the inserts will work regardless of the value
insert into v_tab1 values ('12','hello');
insert into v_tab1 values ('banana','world');
select * from tab1;
Giving
| i1 | v1    |
+----+-------+
| 12 | hello |
| 0  | world |
Fiddle at: http://sqlfiddle.com/#!15/9af5ab/1
No, you cannot use this approach. The reason is that the backend already populates a record with the values that are to be inserted into the table, in the form of the NEW parameter that is available in the trigger. So the error is thrown even before the trigger fires.
The same applies to rules, incidentally, so Kevin's suggestion in his comment won't work.
Probably your best solution is to create a staging table with "permissive" column data types (such as text) and then put a BEFORE INSERT trigger on that table that casts all column values to their correct type before inserting them into the final table. If that second insertion is successful you can even RETURN NULL from the trigger so the row won't go into the staging table (not sure, though, what COPY thinks about that...). Those records that do end up in the staging table have some weird data in them and you can then deal with those rows manually.
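A minimal sketch of that staging-table idea, reusing tab1 from the answer above (the staging table, function, and trigger names are made up for illustration):
create table staging_tab1 (i1 text, v1 text); -- permissive column types for COPY
create or replace function staging_tab1_insert_trgfun() returns trigger as
$$
begin
-- cast/convert each column to its final type, then insert into the real table
insert into tab1 (i1, v1)
values (case when new.i1 ~ '^[0-9]+$' then new.i1::int else 0 end, new.v1);
return null; -- keep the row out of the staging table itself
end;
$$
language plpgsql;
create trigger staging_tab1_insert_trigger before insert on staging_tab1 for each row execute procedure staging_tab1_insert_trgfun();
-- COPY (or INSERT) targets the staging table; converted rows land in tab1
insert into staging_tab1 values ('banana', 'world');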

How to check if a value is not (null) in sqlite database

I want to know how I can match values that are not null in an SQLite database. I am trying column_name IS NOT NULL but it is not giving me the correct result. In the table some values are NULL and some are (null). I think IS NOT NULL checks for NULL, not for (null). How can I get the rows whose value is not (null)?
Please have a look at the picture.
The table definition is:
Or, what is the difference between the two null values?
From the SQLite datatypes documentation:
Any column can still store any type of data. It is just that some
columns, given the choice, will prefer to use one storage class over
another. The preferred storage class for a column is called its
"affinity".
A column with TEXT affinity stores all data using storage classes
NULL, TEXT or BLOB. If numerical data is inserted into a column with
TEXT affinity it is converted into text form before being stored.
A column with NUMERIC affinity may contain values using all five
storage classes. When text data is inserted into a NUMERIC column, the
storage class of the text is converted to INTEGER or REAL (in order of
preference) if such conversion is lossless and reversible.
So somewhere along the line these columns were written with the literal text value '(null)'. Such values can, for example, not be converted to INTEGER without loss.
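A quick sketch of the difference between a real NULL and a stored text literal '(null)' (the table name demo is just an example):
create table demo (name text);
insert into demo values (NULL), ('(null)');
-- only the first row is a real NULL; only the second matches the text literal
select name, name is null as real_null, name = '(null)' as is_literal from demo;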
I recommend updating your table(s) with
UPDATE yourtable SET name = NULL WHERE name = '(null)';
UPDATE yourtable SET "desc" = NULL WHERE "desc" = '(null)';
UPDATE ... and so on
for all relevant tables and columns, so that you get consistent tables.
You'd match a not null column with IS NOT NULL keywords.
The (null) you see is just how your editor is indicating that the column is devoid of value.
Take this SQLite (WebSQL) SQLFiddle for example: http://sqlfiddle.com/#!7/178db/3
create table test (
field1 int,
field2 int
);
insert into test (field1) values (1);
insert into test (field1) values (2);
-- below statement will result in no results
select * from test where field2 is not null;
-- below statement will result in 2 records
select * from test where field2 is null;
field1  field2
------  ------
1       (null)
2       (null)
Testing on Mac
Open Terminal in Mac and type the following commands.
$> sqlite3 testfile.sqlite.db
sqlite> create table testing (field1 int, field2 int);
sqlite> select * from testing;
sqlite> insert into testing (field1) values (1);
sqlite> insert into testing (field1) values (2);
-- Notice the results
sqlite> select * from testing;
1|
2|
-- Notice the results when requesting field2 is null
sqlite> select * from testing where field2 is null;
1|
2|
-- Notice the results when requesting field2 is NOT null
sqlite> select * from testing where field2 is not null;
sqlite> .quit
Now, go to the directory in which your SQLite file sits
$> cd /Users/<you>/Documents
$> sqlite3 <myfile.db>
-- retrieve your table's CREATE TABLE statement
-- and add it to your answer
sqlite> .dump ServiceObjects
-- check to see if your NULL values are appearing similar to the example above
-- if feasible, paste a portion of your output in your answer as well
sqlite> select * from ServiceObjects;
-- try running queries with WHERE <field-of-interest> IS NOT NULL
-- try running queries with WHERE <field-of-interest> IS NULL
Are you getting results similar to the results I see? Please also include your SQLite version in your edited answer. I am using 3.8.10.2.

SEQUENCE in SQL Server 2008 R2

I need to know if there is any way to have a SEQUENCE or something like it, as we have in Oracle. The idea is to get one number and then use it as a key to save some records in a table. Each time we need to save data in that table, first we get the next number from the sequence and then we use it to save the records. It is not an IDENTITY column.
For example:
[ID] [SEQUENCE ID] [Code] [Value]
1 1 A 232
2 1 B 454
3 1 C 565
The next time someone needs to add records, the next SEQUENCE ID should be 2. Is there any way to do this? The sequence could be a GUID for me as well.
As Guillelon points out, the best way to do this in SQL Server is with an identity column.
You can simply define a column as being identity. When a new row is inserted, the identity is automatically incremented.
The difference is that the identity is updated on every row, not just some rows. To be honest, I think this is a much better approach. Your example suggests that you are storing both an entity and its detail in the same table.
The SequenceId should be the primary identity key in another table. This value can then be used for insertion into this table.
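A rough sketch of that two-table idea (all table and column names here are placeholders, not from the question):
CREATE TABLE SequenceHeader (SequenceId INT IDENTITY(1,1) PRIMARY KEY);
CREATE TABLE Detail (ID INT IDENTITY(1,1) PRIMARY KEY, SequenceId INT, Code CHAR(1), Value INT);
DECLARE @SequenceId INT;
INSERT INTO SequenceHeader DEFAULT VALUES; -- one header row per batch
SET @SequenceId = SCOPE_IDENTITY();
-- every detail row of this batch shares the same SequenceId
INSERT INTO Detail (SequenceId, Code, Value) VALUES (@SequenceId, 'A', 232);
INSERT INTO Detail (SequenceId, Code, Value) VALUES (@SequenceId, 'B', 454);
INSERT INTO Detail (SequenceId, Code, Value) VALUES (@SequenceId, 'C', 565);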
This can be done in multiple ways; the following is what I can think of:
Creating a trigger and there by computing the possible value
Adding a computed column along with a function that retrieves the next value of the sequence
Here is an article that presents various solutions
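As a rough sketch of the trigger option (the table, column, and trigger names below are invented for illustration, not taken from the article), an AFTER INSERT trigger can stamp every row of a newly inserted batch with MAX(SequenceId) + 1:
CREATE TABLE dbo.MyTable (ID INT IDENTITY(1,1) PRIMARY KEY, SequenceId INT NULL, Code CHAR(1), Value INT);
GO
CREATE TRIGGER dbo.trg_MyTable_SetSequenceId ON dbo.MyTable
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
-- all rows of the current insert batch receive the same new SequenceId
UPDATE t
SET SequenceId = x.NextId
FROM dbo.MyTable t
JOIN inserted i ON i.ID = t.ID
CROSS JOIN (SELECT ISNULL(MAX(SequenceId), 0) + 1 AS NextId FROM dbo.MyTable) x
WHERE t.SequenceId IS NULL;
END
GO
-- rows inserted together share one SequenceId
INSERT INTO dbo.MyTable (Code, Value) VALUES ('A', 232), ('B', 454), ('C', 565);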
One possible way is to do something like this:
-- Example 1
DECLARE @Var INT
SET @Var = (SELECT MAX(ID) + 1 FROM tbl);
INSERT INTO tbl VALUES (@Var,'Record 1')
INSERT INTO tbl VALUES (@Var,'Record 2')
INSERT INTO tbl VALUES (@Var,'Record 3')
-- Example 2
CREATE TABLE #temp (col1 INT, col2 INT)
INSERT INTO #temp VALUES (1,2)
INSERT INTO #temp VALUES (1,2)
INSERT INTO ActualTable (col1, col2, sequence)
SELECT temp.*, (SELECT MAX(ID) + 1 FROM ActualTable)
FROM #temp temp
-- Example 3
DECLARE @seq TABLE (sequence INT)
INSERT INTO ActualTable (col1, col2, sequence)
OUTPUT inserted.sequence INTO @seq
SELECT 1, 2, MAX(ID) + 1 FROM ActualTable
The first two examples rely on batch updating. But based on your comment, I have added Example 3, which does a single insert initially; you can then use the sequence value that was captured to insert the rest of the records. If you have never used OUTPUT, please reply in the comments and I will expand further.
I would isolate all of the above inside a transaction.
If you were using SQL Server 2012, you could use the SEQUENCE operator as shown here.
Forgive me if there are syntax errors; I don't have SSMS installed.
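For reference, a brief sketch of the SQL Server 2012+ SEQUENCE syntax mentioned above (the sequence name and target table are placeholders):
CREATE SEQUENCE dbo.MySequence START WITH 1 INCREMENT BY 1;
DECLARE @SequenceId INT = NEXT VALUE FOR dbo.MySequence;
-- reuse the same value for every record of the batch
INSERT INTO tbl (SequenceId, Code, Value) VALUES (@SequenceId, 'A', 232);
INSERT INTO tbl (SequenceId, Code, Value) VALUES (@SequenceId, 'B', 454);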

Update a part of column value in SQL Server

I have a database in SQL Server with existing data. I need to change a part of some column values under certain conditions.
Imagine the value as "001002001".
The 002 part belongs to another value in my database, and whenever I want to change it to 005, I must update the previous nine-digit code to "001005001".
Actually, I need to update just a part of a column's value using an UPDATE statement. How can I do that (in this example)?
While everyone else is correct that if you have control of the schema you should definitely not store your data this way, this is how I would solve the issue as you described it if I couldn't adjust the schema.
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
create table #test
(
id int,
multivaluecolumn varchar(20)
)
insert #Test
select 1,'001002001'
UNION
select 2,'002004002'
UNION
select 3,'003006003'
GO
declare @oldmiddlevalue char(3)
set @oldmiddlevalue = '002'
declare @newmiddlevalue char(3)
set @newmiddlevalue = '005'
select * from #Test
Update #Test set multivaluecolumn = left(multivaluecolumn,3) + @newmiddlevalue + right(multivaluecolumn,3)
where substring(multivaluecolumn,4,3) = @oldmiddlevalue
select * from #Test
Why don't you use CSV (comma-separated values), or any other symbol like ~, to store the values? When you need to update a part of it, split the string (e.g. with PHP's explode function), update that part, and once you are done, concatenate the values again to get the desired string to store in your column.
In that case your VARCHAR column will have values like 001~002~0001.
