How to start an id at max value and auto-decrement in PostgreSQL

How can I create a table whose id starts at the maximum value and auto-increments by -1 in PostgreSQL?
I need to create a table whose initial id is the maximum int value, assigned automatically, but the auto-increment has to be -1.
I know how to create the table:
CREATE TABLE "external_sequence" (
    "id" int NOT NULL,
    "created_at" timestamp NOT NULL DEFAULT now(),
    "updated_at" timestamp NOT NULL DEFAULT now(),
    CONSTRAINT "PK_119c50ca5604d166b77b8585a2c" PRIMARY KEY ("id")
)
But in this case the table starts with id 1 and increments on every row. I need the id to be assigned without setting it explicitly, perhaps with a PostgreSQL function that starts at the max value and decrements by 1 on every row.
Is this possible?
I'm trying to do this with Node.js and TypeORM, but I can use a raw query if necessary.

CREATE SEQUENCE my_custom_sequence
    START WITH 2147483647
    INCREMENT BY -1
    MAXVALUE 2147483647
    MINVALUE 1;
-- If you already have a sequence you want to modify:
-- ALTER SEQUENCE my_custom_sequence INCREMENT BY -1;
-- ALTER SEQUENCE my_custom_sequence RESTART WITH 2147483647;
CREATE TABLE tests (
    id integer NOT NULL DEFAULT nextval('my_custom_sequence'),
    name VARCHAR(255) NOT NULL
);
insert into tests(name) values('first');
insert into tests(name) values('second');
insert into tests(name) values('third');
select * from tests;
id | name
------------+--------
2147483647 | first
2147483646 | second
2147483645 | third
More info: https://www.postgresql.org/docs/current/sql-altersequence.html
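For intuition, the descending sequence behaves like a counter that starts at the int4 maximum and steps down by one per `nextval` call. A minimal Python sketch of that behavior (the class name and exhaustion error are illustrative, not PostgreSQL internals):

```python
class DescendingSequence:
    """Toy model of CREATE SEQUENCE ... START WITH 2147483647 INCREMENT BY -1 MINVALUE 1."""
    INT4_MAX = 2**31 - 1  # 2147483647, the max value of a PostgreSQL int

    def __init__(self, start=INT4_MAX, minvalue=1):
        self.next_value = start
        self.minvalue = minvalue

    def nextval(self):
        # Like a non-CYCLE sequence, error out once the minimum is passed.
        if self.next_value < self.minvalue:
            raise RuntimeError("nextval: reached minimum value of sequence")
        value = self.next_value
        self.next_value -= 1
        return value

seq = DescendingSequence()
ids = [seq.nextval() for _ in range(3)]
print(ids)  # [2147483647, 2147483646, 2147483645]
```

Note the sequence is finite: after 2147483646 inserts it hits MINVALUE 1 and further calls fail, just as the real sequence would.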


Identify if a column is Virtual in Snowflake

Snowflake does not document its virtual column capability that uses the AS clause. I am doing a migration and need to filter out virtual columns programmatically.
Is there any way to identify that a column is virtual? The INFORMATION_SCHEMA.COLUMNS view shows nothing different between a virtual and a non-virtual column definition.
There is a difference between a column defined with DEFAULT and a VIRTUAL COLUMN (aka computed or generated column):
Virtual column
CREATE OR REPLACE TABLE T1(i INT, calc INT AS (i*i));
INSERT INTO T1(i) VALUES (2),(3),(4);
SELECT * FROM T1;
When using the AS (expression) syntax, the expression is not visible in COLUMN_DEFAULT:
DEFAULT Expression
In the case of the definition DEFAULT (expression):
CREATE OR REPLACE TABLE T2(i INT, calc INT DEFAULT (i*i));
INSERT INTO T2(i) VALUES (2),(3),(4);
SELECT * FROM T2;
It is visible in COLUMN_DEFAULT:
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'T2';
Comparing side-by-side with SHOW COLUMNS:
SHOW COLUMNS LIKE 'CALC';
-- kind: VIRTUAL_COLUMN
One notable difference between them is that a virtual column cannot be updated:
UPDATE T1
SET calc = 1;
-- Virtual column 'CALC' is invalid target.
UPDATE T2
SET calc = 1;
-- success
How about using SHOW COLUMNS? You can identify virtual columns when the expression field is not null.
create table foo (id bigint, derived bigint as (id * 10));
insert into foo (id) values (1), (2), (3);
SHOW COLUMNS IN TABLE foo;
SELECT "table_name", "column_name", "expression" FROM table(result_scan(last_query_id()));
| table_name | column_name | expression |
| ---------- | ----------- | -------------- |
| FOO | ID | null |
| FOO | DERIVED | ID*10 |
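Once the SHOW COLUMNS output is in hand, filtering for virtual columns is just a null check on the expression field. A hedged Python sketch over rows shaped like the result above (the row tuples mirror the sample output, nothing Snowflake-specific is called):

```python
# Rows as (table_name, column_name, expression), mimicking the SHOW COLUMNS result above.
rows = [
    ("FOO", "ID", None),          # regular column: expression is null
    ("FOO", "DERIVED", "ID*10"),  # virtual column: expression is populated
]

# A column is virtual exactly when its expression is not null.
virtual_columns = [(t, c) for t, c, expr in rows if expr is not None]
print(virtual_columns)  # [('FOO', 'DERIVED')]
```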
I normally use the desc table option.
First, let's create the table with some example data:
create or replace temporary table ColumnTypesTest (
id int identity(1,1) primary key,
userName varchar(30),
insert_DT datetime default CAST(CONVERT_TIMEZONE('UTC', CAST(CURRENT_TIMESTAMP() AS TIMESTAMP_TZ(9))) AS TIMESTAMP_NTZ(9)) not null,
nextDayAfterInsert datetime as dateadd(dd,1,insert_DT)
);
insert into ColumnTypesTest (userName) values
('John'),
('Cris'),
('Anne');
select * from ColumnTypesTest;
ID | USERNAME | INSERT_DT               | NEXTDAYAFTERINSERT
---+----------+-------------------------+-------------------------
 1 | John     | 2021-10-04 19:11:21.069 | 2021-10-05 19:11:21.069
 2 | Cris     | 2021-10-04 19:11:21.069 | 2021-10-05 19:11:21.069
 3 | Anne     | 2021-10-04 19:11:21.069 | 2021-10-05 19:11:21.069
Now the answer to your question:
Using desc table <table_name>; you get a column named kind, which tells you whether the column is virtual or not; separately, the default column shows NULL when there is no default value.
name | type | kind | null? | default | primary key | unique key | check | expression | comment | policy name
ID | NUMBER(38,0) | COLUMN | N | IDENTITY START 1 INCREMENT 1 | Y | N | | | |
USERNAME | VARCHAR(30) | COLUMN | Y | | N | N | | | |
INSERT_DT | TIMESTAMP_NTZ(9) | COLUMN | N | CAST(CONVERT_TIMEZONE('UTC', CAST(CURRENT_TIMESTAMP() AS TIMESTAMP_TZ(9))) AS TIMESTAMP_NTZ(9)) | N | N | | | |
NEXTDAYAFTERINSERT | TIMESTAMP_NTZ(9) | VIRTUAL | Y | | N | N | | DATE_ADDDAYSTOTIMESTAMP(1, INSERT_DT) | |
With 'desc table <table_name>' you get metadata about the table, including a column named kind, which says either VIRTUAL or COLUMN. When it is VIRTUAL, the expression column shows how that column is calculated.
I use this in stored procedures: the result is saved in an array of arrays with JavaScript, and from there the next query in the stored procedure is built dynamically. A while loop walks the resultSet and pushes each row into the array of arrays. You can then use a JavaScript filter to keep just the virtual columns. This is part of the advantage of mixing JavaScript and SQL in Snowflake stored procedures.
Here is the documentation, which doesn't say much.
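The same filter-the-resultset idea can be sketched in Python rather than a JavaScript stored procedure. The rows below are condensed from the desc table output above (only name, kind, and expression are kept; the helper structure is illustrative):

```python
# Each row mimics one line of `desc table ColumnTypesTest`: (name, kind, expression).
desc_rows = [
    ("ID", "COLUMN", None),
    ("USERNAME", "COLUMN", None),
    ("INSERT_DT", "COLUMN", None),
    ("NEXTDAYAFTERINSERT", "VIRTUAL", "DATE_ADDDAYSTOTIMESTAMP(1, INSERT_DT)"),
]

# kind == 'VIRTUAL' identifies computed columns; expression shows how they are derived.
virtual = {name: expr for name, kind, expr in desc_rows if kind == "VIRTUAL"}
print(virtual)  # {'NEXTDAYAFTERINSERT': 'DATE_ADDDAYSTOTIMESTAMP(1, INSERT_DT)'}
```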

At what point in the query processing lifecycle are runtime constant functions evaluated?

I have a table that holds data about events in my application and I want to process these events in order, one at a time. Rows are created (inserted into the table) from a trigger on a different table. Rows are picked for processing using an UPDATE TOP 1...ORDER BY Id style query. Common sense says that a row must be created before it can be picked, but during load testing very occasionally the datetime recorded for the picking is BEFORE the datetime recorded for the create.
After Googling for a while, my best guess as to what is going on (based mainly on a blog post by Conor Cunningham linked from Using function in where clause: how many times is the function evaluated?) is that the execution of the create and pick queries overlaps: sysutcdatetime() is evaluated at the start of query execution, before any waits, so the queries can finish in the opposite order to the one in which they started. Something roughly like this (time moving downwards):
---------------------------------------------------
|Create Query |Pick Query |
===================================================
| |query start |
---------------------------------------------------
| |evaluate sysutcdatetime |
---------------------------------------------------
|query start |wait/block |
---------------------------------------------------
|evaluate sysutcdatetime |wait/block |
---------------------------------------------------
|insert rows using |wait/block |
|sysutcdatetime value | |
|as Create timestamp | |
---------------------------------------------------
|transaction commits |wait/block |
---------------------------------------------------
| |update top 1 using |
| |sysutcdatetime value as |
| |Pick timestamp |
---------------------------------------------------
Can anyone confirm when runtime constant functions are evaluated? Or provide an alternative explanation for how the datetime recorded for the picking could be BEFORE the datetime recorded for the create?
Just to be clear, I'm looking to understand the behaviour I'm seeing, not for ways to change my schema/code to make the problem go away. My fix for now is to remove the (PickedAt >= CreatedAt) check constraint.
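The overlap described above can be modeled with hard-coded timestamps: each query fixes its timestamp at query start, but the pick query's write lands after the create commits. A Python sketch (all instants are made up for illustration):

```python
from datetime import datetime, timedelta

t0 = datetime(2020, 4, 6, 10, 0, 0)

# sysutcdatetime() behaves as a runtime constant: evaluated once, at query start.
pick_ts = t0                                 # pick query starts first...
create_ts = t0 + timedelta(milliseconds=5)   # ...create query starts slightly later

# The pick query then blocks until the create transaction commits,
# so the row it updates was created *after* pick_ts was already fixed.
row = {"CreatedAt": create_ts}
row["PickedAt"] = pick_ts  # timestamp captured before the wait

print(row["PickedAt"] < row["CreatedAt"])  # True: PickedAt precedes CreatedAt
```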
For completeness, the relevant parts of my event table are;
create table dbo.JobInstanceEvent (
Id int identity not null constraint PK_JobInstanceEvent primary key,
JobInstanceId int not null constraint FK_JobInstanceEvent_JobInstance foreign key references dbo.JobInstance (Id),
JobInstanceStateCodeOld char(4) not null constraint FK_JobInstanceEvent_JobInstanceState1 foreign key references ref.JobInstanceState (Code),
JobInstanceStateCodeNew char(4) not null constraint FK_JobInstanceEvent_JobInstanceState2 foreign key references ref.JobInstanceState (Code),
JobInstanceEventStateCode char(4) not null constraint FK_JobInstance_JobInstanceEventState foreign key references ref.JobInstanceEventState (Code),
CreatedAt datetime2 not null,
PickedAt datetime2 null,
FinishedAt datetime2 null,
constraint CK_JobInstanceEvent_PickedAt check (PickedAt >= CreatedAt),
constraint CK_JobInstanceEvent_FinishedAt check (FinishedAt >= PickedAt),
constraint CK_JobInstanceEvent_PickedAt_FinishedAt check (PickedAt is null and FinishedAt is null or
PickedAt is not null) -- this covers the allowable combinations of PickedAt/FinishedAt
)
The SQL statement that creates the new rows is;
insert dbo.JobInstanceEvent (JobInstanceId, JobInstanceStateCodeOld, JobInstanceStateCodeNew, JobInstanceEventStateCode, CreatedAt)
select
i.Id as JobInstanceId,
d.JobInstanceStateCode as JobInstanceStateCodeOld,
i.JobInstanceStateCode as JobInstanceStateCodeNew,
'CRTD' as JobInstanceEventStateCode,
sysutcdatetime() as CreatedAt
from
inserted i
inner join deleted d on d.Id = i.Id
where
i.JobInstanceStateCode <> d.JobInstanceStateCode and -- the state has changed and
i.JobInstanceStateCode in ('SUCC', 'FAIL') -- the new state is either success or failure.
The SQL statement that picks a row is;
; with cte as (
select top 1
jie.Id,
jie.JobInstanceId,
jie.JobInstanceStateCodeOld,
jie.JobInstanceStateCodeNew,
jie.JobInstanceEventStateCode,
jie.PickedAt
from
dbo.JobInstanceEvent jie
where
jie.JobInstanceEventStateCode = 'CRTD'
order by
jie.Id
)
update cte set
JobInstanceEventStateCode = 'PICK',
PickedAt = sysutcdatetime()
output
inserted.Id,
inserted.JobInstanceId,
inserted.JobInstanceStateCodeOld,
inserted.JobInstanceStateCodeNew
into
#PickedJobInstanceEvent
I'm using SQL Server 2016 but I don't think this is a version specific issue.
explanation for how the datetime recorded for the picking could be
BEFORE the datetime recorded for the create?
You could simulate the behavior of the create/pick query diagram with the following (two SSMS windows, one for the create query and one for the pickup query).
Another contributing factor is the timer accuracy of Windows. In a highly concurrent system, blocking and waits will definitely occur, and picked dates can be the same as, or a few milliseconds earlier than, the creation dates (when pickup queries have to wait for the creation of new rows).
create table dbo.atest
(
id int identity primary key clustered,
colA char(500) default('a'),
createddate datetime2(4) default(sysdatetime()),
pickeddate datetime2(4)
)
go
--rows already picked up
insert into dbo.atest(colA, createddate,pickeddate)
values
('a', '20200405 12:00', '20200406 10:00'),
('b', '20200405 12:00', '20200406 10:10'),
('c', '20200405 12:00', '20200406 10:20'),
('d', '20200405 12:00', '20200406 10:30');
--create a new row..to be picked up
begin transaction -- ...
update dbo.atest --..query start | wait block
set colA = colA
waitfor delay '00:00:40'
--during the waitfor delay, in another window(SSMS)
/*
--this will wait(blocking) for the delay and the insert and commit...
update a
set pickeddate = sysdatetime()
from
(
select top (1) *
from dbo.atest
where pickeddate is null
order by id
) as a;
--insertion happened after the update was fired, picked<created
select *
from dbo.atest
where pickeddate < createddate;
*/
--create new row
insert into dbo.atest(colA) values('e')
commit transaction
go
--drop table dbo.atest
You could prevent PickedAt < CreatedAt by incorporating a condition in the select/pickup query:
from
dbo.JobInstanceEvent jie
where
jie.JobInstanceEventStateCode = 'CRTD'
and jie.CreatedAt < /*= ?*/ sysutcdatetime()
order by
jie.Id

Select a large volume of data with LIKE in SQL Server

I have a table with an ID column.
The ID column is like this: IDxxxxyyy
x can be 0 to 9.
I have to select rows with IDs matching ID0xxx% through ID3xxx%; that is around 4000 patterns with the % wildcard, from ID0000% to ID3999%.
It is like combining LIKE with IN:
Select * from TABLE where ID in (ID0000%, ID0001%, ..., ID3999%)
I cannot figure out how to select with this condition.
If you have any idea, please help.
Thank you so much!
You can use pattern matching with LIKE. e.g.
WHERE ID LIKE 'ID[0-3][0-9][0-9][0-9]%'
This will match any string that:
Starts with ID
Then has a digit between 0 and 3 ([0-3])
Then has 3 further digits ([0-9][0-9][0-9])
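The T-SQL pattern maps onto an ordinary regular expression, which makes it easy to sanity-check which IDs qualify. A Python sketch (the sample IDs are made up):

```python
import re

# Equivalent of LIKE 'ID[0-3][0-9][0-9][0-9]%': 'ID', one digit 0-3, three digits, then anything.
pattern = re.compile(r"^ID[0-3][0-9]{3}.*$")

samples = ["ID0000abc", "ID3999xyz", "ID4000abc", "IDX123abc"]
matches = [s for s in samples if pattern.match(s)]
print(matches)  # ['ID0000abc', 'ID3999xyz']
```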
This is not likely to perform well at all. If it is not too late to alter your table design, I would separate out the components of your Identifier and store them separately, then use a computed column to store your full id e.g.
CREATE TABLE T
(
NumericID INT NOT NULL,
YYY CHAR(3) NOT NULL, -- Or whatever type makes up yyy in your ID
FullID AS CONCAT('ID', FORMAT(NumericID, '0000'), YYY),
CONSTRAINT PK_T__NumericID_YYY PRIMARY KEY (NumericID, YYY)
);
Then your query is a simple as:
SELECT FullID
FROM T
WHERE NumericID >= 0
AND NumericID < 4000;
This is significantly easier to read and write, and will be significantly faster too.
This should do it: it will get all the IDs that start with IDx, with x from 0 to 3:
Select * from TABLE where ID LIKE 'ID[0-3]%'
You can try :
Select * from TABLE where id like 'ID[0-3][0-9]%[a-zA-Z]';

Change the Value in a Column from a String to a Number on Insert

I have a table, cut down here to its basic fields, called Customer:
ID | Name | Type
1 | Smith | 2
I want to create a trigger on INSERT that will change the value of the inserted Type into a number, for example:
INSERT INTO Customer (Name,Type) VALUES ('Jones', 'Recommended')
The Type field should be a number and it is set as an INT column; I do not want to change it away from INT.
How can I force the word Recommended to be changed to 0 (zero)?
In theory, this is the trigger:
ALTER TRIGGER [dbo].[CustomersInsert] ON [dbo].[Customer]
INSTEAD OF INSERT
AS BEGIN
    INSERT INTO Customer (Name, Type)
    SELECT
        inserted.[Name],
        ISNULL([Types].Id, 0)
    FROM inserted
    LEFT JOIN [dbo].[Types] ON inserted.[Type] = [Types].Caption
END
The [dbo].[Types] table holds the caption-to-id mapping for types.
But in reality:
You can't execute INSERT INTO Customer (Name, Type) VALUES ('Jones', 'Recommended')
because Type is INT and 'Recommended' is not, so the statement fails on conversion before the trigger ever runs.
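The trigger's LEFT JOIN plus ISNULL has the same shape as a dictionary lookup with a default. A Python sketch of that mapping (the captions and ids other than Recommended are illustrative):

```python
# Mimics the [dbo].[Types] lookup table: caption -> numeric id.
types = {"Recommended": 0, "Standard": 1, "Premium": 2}

def resolve_type(caption):
    # ISNULL([Types].Id, 0): captions with no match fall back to 0.
    return types.get(caption, 0)

print(resolve_type("Recommended"))  # 0
print(resolve_type("Unknown"))      # 0 (fallback, like a LEFT JOIN miss)
```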

SQL Server check constraint logic

I've got a table with this kind of structure:
CREATE TABLE #Mine
(
ProductID INT
, CountryID INT
, ApplicationID INT
);
Let's assume it has data as follows:
ProductID CountryID ApplicationID
1 2 -1
1 3 -1
1 3 2
I'd like to enforce the rule that if a ProductID/CountryID combination exists with ApplicationID = -1, no other row with that combination may exist anywhere in the table. In my example, the 2nd and 3rd rows would violate this.
I could create a custom function to validate that and make a CHECK constraint out of it. Is there perhaps a more elegant way to do it?
I would split your task. First, add a unique constraint (this can be the table key):
CREATE UNIQUE INDEX IX_UQ ON Mine(ProductId, CountryId, ApplicationId)
This covers the trivial validation and improves the trigger query.
Second, your check involves multiple records, so a CHECK constraint is not possible. This is a task for a trigger:
CREATE TRIGGER trMine
ON Mine FOR INSERT, UPDATE
AS
IF (EXISTS(
    SELECT Mark FROM
    (
        SELECT MAX(CASE WHEN M.ApplicationId = -1 THEN 1 ELSE 0 END) * (COUNT(*) - 1) AS Mark
        FROM Mine M
        JOIN inserted I ON M.ProductId = I.ProductId AND M.CountryId = I.CountryId
        GROUP BY M.ProductId, M.CountryId
    ) Q
    WHERE Mark != 0
)) THROW 50000, 'Validation error', 1;
When there are 2 or more records (COUNT(*) - 1 > 0) and any of them has ApplicationId = -1, Mark evaluates to something != 0. That is your violation rule.
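The Mark expression can be checked offline: per (ProductID, CountryID) group, a violation is any group that both contains ApplicationID = -1 and has more than one row. A Python sketch using the question's sample data:

```python
from collections import defaultdict

# Sample rows from the question: (ProductID, CountryID, ApplicationID).
rows = [(1, 2, -1), (1, 3, -1), (1, 3, 2)]

groups = defaultdict(list)
for product_id, country_id, application_id in rows:
    groups[(product_id, country_id)].append(application_id)

# Mark = MAX(app == -1) * (COUNT(*) - 1): non-zero means a -1 row coexists with others.
violations = [
    key for key, apps in groups.items()
    if any(a == -1 for a in apps) and len(apps) > 1
]
print(violations)  # [(1, 3)]
```

The (1, 2) group has a -1 row but no other rows, so it is fine; (1, 3) mixes -1 with another ApplicationID and is flagged.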
You can use a unique filtered index, which guarantees at most one ApplicationID = -1 row per ProductID/CountryID combination (note that by itself it does not block additional rows with other ApplicationID values for that combination):
CREATE UNIQUE INDEX IX_UniqueNegativeApp ON Mine(ProductID, CountryID) WHERE ApplicationID = -1
