I have a table in my database which should contain sequence numbers.
create table SequenceNumber(
number int identity(1,1) primary key
)
Now I want to store the numbers from 1 to 1448 without setting IDENTITY_INSERT ON/OFF and without a counter variable.
I need the values 1 to 1448 in the 'number' column.
Can anyone tell me how I can do it?
Yes, you can do it as follows.
Just change the value 1448 as per your need.
Idea from here: http://www.codeproject.com/Tips/780441/Tricky-SQL-Questions
CREATE TABLE SequenceNumber(
NUMBER BIGINT IDENTITY(1,1) PRIMARY KEY
)
WHILE(1=1)
BEGIN
INSERT INTO SequenceNumber
DEFAULT VALUES
IF EXISTS(SELECT 1 FROM SequenceNumber WHERE NUMBER = 1448)
BREAK
END
SELECT NUMBER FROM SequenceNumber
Related
For example, there is a table with the columns:
int type
int number
int value
How can I make it so that, when inserting a value into the table, the numbering starts from 1 separately for each type?
type 1 => number 1,2,3...
type 2 => number 1,2,3...
That is, it will look like this:
type | number | value
-----|--------|------
1    | 1      | -
1    | 2      | -
1    | 3      | -
2    | 1      | -
1    | 4      | -
2    | 2      | -
3    | 1      | -
6    | 1      | -
1    | 5      | -
2    | 3      | -
6    | 2      | -
Special thanks to @Larnu.
As a result, in my case, the best solution would be to create a table for each type.
As I mentioned in the comments, neither IDENTITY nor SEQUENCE supports the use of another column to denote what "identity set" they should use. You can have multiple SEQUENCEs which you could use for a single table; however, this doesn't scale. If you are specifically limited to 2 or 3 types, for example, you might choose to create 3 SEQUENCE objects, and then use a stored procedure to handle your INSERT statements. Then, when a user/application wants to INSERT data, they call the procedure, and that procedure has logic to use the right SEQUENCE based on the value of the parameter for the type column.
As mentioned, however, this doesn't scale well. If you have an indeterminate number of values of type then you can't easily pick the right SEQUENCE, and handling new values for type would be difficult too. In this case, you would be better off using an IDENTITY and then a VIEW. The VIEW will use ROW_NUMBER to create your identifier, while the IDENTITY gives you your always-incrementing value.
CREATE TABLE dbo.YourTable (id int IDENTITY(1,1),
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
GO
CREATE VIEW dbo.YourTableView AS
SELECT ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value]
FROM dbo.YourTable;
Then, instead, you query the VIEW, not the TABLE.
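For example (a minimal sketch, assuming the table and view above exist; the sample values are made up), you insert into the table and read the per-type identifier back through the view:
-- hypothetical sample data
INSERT INTO dbo.YourTable ([type], number, [value])
VALUES (1, NULL, 10),
       (1, NULL, 20),
       (2, NULL, 30);

SELECT Identifier, [type], number, [value]
FROM dbo.YourTableView
ORDER BY [type], Identifier;
The Identifier restarts at 1 for each [type], while the underlying id keeps incrementing.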
If you need the Identifier column (as I named it) to stay consistent, you'll also need to ensure that row(s) can't be DELETEd from the table. Most likely by adding an IsDeleted column to the table, defined as a bit (with 0 for not deleted and 1 for deleted; an ALTER TABLE sketch is shown after the view below), and then filtering to the non-deleted rows in the VIEW:
CREATE VIEW dbo.YourTableView AS
WITH CTE AS(
SELECT id,
ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value],
IsDeleted
FROM dbo.YourTable)
SELECT id,
Identifier,
[type],
number,
[value]
FROM CTE
WHERE IsDeleted = 0;
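The view above references an IsDeleted column that the earlier CREATE TABLE didn't include; if you're following along, a sketch like this (the constraint name and default are my assumptions) would add it:
ALTER TABLE dbo.YourTable
    ADD IsDeleted bit NOT NULL
        CONSTRAINT DF_YourTable_IsDeleted DEFAULT (0);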
You could, if you wanted, even handle the DELETEs on the VIEW (the INSERT and UPDATEs would be handled implicitly, as it's an updatable VIEW):
CREATE TRIGGER trg_YourTableView_Delete ON dbo.YourTableView
INSTEAD OF DELETE AS
BEGIN
SET NOCOUNT ON;
UPDATE YT
SET IsDeleted = 1
FROM dbo.YourTable YT
JOIN deleted d ON d.id = YT.id;
END;
GO
db<>fiddle
For completeness, if you wanted to use different SEQUENCE objects, it would look like this. Notice that this does not scale easily: I have to CREATE a SEQUENCE for every value of Type. As such, for a small and known range of values this would be a solution, but if you are going to end up with more values for type, or already have a large range, this ends up not being feasible pretty quickly:
CREATE TABLE dbo.YourTable (identifier int NOT NULL,
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
CREATE SEQUENCE dbo.YourTable_Type1
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type2
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type3
START WITH 1 INCREMENT BY 1;
GO
CREATE PROC dbo.Insert_YourTable @Type int, @Number int = NULL, @Value int AS
BEGIN
DECLARE @Identifier int;
IF @Type = 1
SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type1;
IF @Type = 2
SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type2;
IF @Type = 3
SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type3;
INSERT INTO dbo.YourTable (identifier,[type],number,[value])
VALUES(@Identifier, @Type, @Number, @Value);
END;
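A usage sketch for the procedure above (the parameter values are made up for illustration):
EXEC dbo.Insert_YourTable @Type = 1, @Number = NULL, @Value = 10;
EXEC dbo.Insert_YourTable @Type = 2, @Value = 20; -- @Number defaults to NULL
Note that any @Type outside 1-3 would leave @Identifier NULL here, which is part of why this approach doesn't scale.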
I have a table which uses a sequence to auto-generate the Primary Key when inserting a record. However, the sequence is generating negative values.
How do I enforce that only positive values are generated, and is there a way to generate the ids randomly (especially as a varchar type)?
questionnaries.sql
CREATE TABLE public.questionnaries
(
id integer NOT NULL DEFAULT nextval('questionnaries_id_seq'::regclass),
personname character varying(255) NOT NULL,
question character varying(255) NOT NULL,
response character varying(255) NOT NULL,
CONSTRAINT questionnaries_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.questionnaries
OWNER TO postgres;
questionnaries_id_seq
CREATE SEQUENCE public.questionnaries_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 6
CACHE 1;
ALTER TABLE public.questionnaries_id_seq
OWNER TO postgres;
First create a sequence like below. Give whichever number you want to start with, e.g. 0 or 100 (note that starting at 0 also requires MINVALUE 0, since the default minimum is 1).
CREATE SEQUENCE questionnaries_id_seq MINVALUE 0 START 0;
You can also query it:
SELECT nextval('questionnaries_id_seq');
The sequence generates negative values in two scenarios:
1. You created the sequence and specified the INCREMENT BY value as a negative number ("-1").
2. The sequence's INCREMENT BY is positive and correct, but the sequence reached its MAX value and, because it cycles, it started generating again from the MIN value of the sequence.
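A minimal demonstration of scenario 1 (the sequence name is made up): a descending sequence starts at its default maximum of -1 and counts down.
CREATE SEQUENCE demo_negative_seq INCREMENT BY -1;
SELECT nextval('demo_negative_seq'); -- returns -1
SELECT nextval('demo_negative_seq'); -- returns -2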
There are two solutions for this (a sketch for altering an already-existing sequence follows after them).
Use the "NO MAXVALUE" and "NO CYCLE" parameters of the sequence, as specified below (MINVALUE 0 is needed when starting at 0, because the default minimum is 1).
CREATE SEQUENCE <sequence_name>
MINVALUE 0
NO MAXVALUE
START WITH 0
INCREMENT BY 1
NO CYCLE;
Use the "SERIAL" to generate the numerical values by PostgreSQL.
CREATE TABLE table_name (
column_1 integer PRIMARY KEY DEFAULT nextval('serial'),
column_2 varchar(40) NOT NULL
);
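For the sequence that already exists in the question, a sketch like this (the setval call is optional and its target value is an assumption) would remove any cycling and move the counter past the current data instead of recreating the sequence:
ALTER SEQUENCE public.questionnaries_id_seq
MINVALUE 1
NO MAXVALUE
NO CYCLE;
-- optionally move the counter past the current maximum id
SELECT setval('public.questionnaries_id_seq', (SELECT COALESCE(MAX(id), 1) FROM public.questionnaries));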
I have a table like this :
create table ReceptionR1
(
numOrdre char(20) not null,
dateDepot datetime null,
...
)
I want to increment my id field (numOrdre) like '225/2015', '226/2015', ..., '1/2016', etc. What do I have to do for that?
2015 means the current year.
Please let me know any possible way.
You really, and I mean Really, don't want to do such a thing, especially as your primary key. You'd be better off using a simple int identity column for your primary key and adding a non-nullable create-date column of type datetime2 with a default value of SYSDATETIME().
Create the incrementing number per year either as a computed column or by using an INSTEAD OF INSERT trigger (if you don't want it to be re-calculated each time). This can be done fairly easily with the ROW_NUMBER function; a sketch follows below.
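As a minimal sketch of the computed approach (it assumes ReceptionR1 has been given an int identity primary key, here called id, and a non-null dateDepot; the view name is made up), a view can derive the display value instead of storing it:
CREATE VIEW dbo.ReceptionR1Numbered AS
SELECT id,
       dateDepot,
       -- '225/2015'-style value, recalculated on every read
       CAST(ROW_NUMBER() OVER (PARTITION BY YEAR(dateDepot) ORDER BY id) AS varchar(10))
           + '/' + CAST(YEAR(dateDepot) AS char(4)) AS numOrdre
FROM dbo.ReceptionR1;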
As everyone else has said - don't use this as your primary key! But you could do the following, if you're on SQL Server 2012 or newer:
-- step 1 - create a sequence
CREATE SEQUENCE dbo.SeqOrderNo AS INT
START WITH 1001 -- start with whatever value you need
INCREMENT BY 1
NO CYCLE
NO CACHE;
-- create your table - use INT IDENTITY as your primary key
CREATE TABLE dbo.ReceptionR1
(
ID INT IDENTITY
CONSTRAINT PK_ReceptionR1 PRIMARY KEY CLUSTERED,
dateDepot DATE NOT NULL,
...
-- add a column called "SeqNumber" that gets filled from the sequence
SeqNumber INT,
-- you can add a *computed* column here
OrderNo AS CAST(YEAR(dateDepot) AS VARCHAR(4)) + '/' + CAST(SeqNumber AS VARCHAR(4))
)
So now, when you insert a row, it has a proper and well defined primary key (ID), and when you fill the SeqNumber with
INSERT INTO dbo.ReceptionR1 (dateDepot, SeqNumber)
VALUES (SYSDATETIME(), NEXT VALUE FOR dbo.SeqOrderNo)
then the SeqNumber column gets the next value for the sequence, and the OrderNo computed column gets filled with 2015/1001, 2015/1002 and so forth.
Now when 2016 comes around, you just reset the sequence back to a starting value:
ALTER SEQUENCE dbo.SeqOrderNo RESTART WITH 1000;
and you're done - the rest of your solution works as before.
If you want to make sure you never accidentally insert a duplicate value, you can even put a unique index on your OrderNo column in your table.
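For instance (a sketch; the index name is made up), since OrderNo is a deterministic, precise computed column, it can be indexed directly under the usual SET options:
CREATE UNIQUE INDEX UQ_ReceptionR1_OrderNo
    ON dbo.ReceptionR1 (OrderNo);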
Once more, you cannot use the combo field as your primary key. This solution sort of works on earlier versions of SQL Server and calculates the new annual YearlySeq counter automatically - but you had better have an index on dateDepot, and you might still have issues if there are many, many (hundreds of thousands of) rows per year.
In short: fight the requirement.
Given
create table dbo.ReceptionR1
(
ReceptionR1ID INT IDENTITY PRIMARY KEY,
YearlySeq INT ,
dateDepot datetime DEFAULT (GETDATE()) ,
somethingElse varchar(99) null,
numOrdre as LTRIM(STR(YearlySeq)) + '/' + CONVERT(CHAR(4),dateDepot,111)
)
GO
CREATE TRIGGER R1Insert on dbo.ReceptionR1 for INSERT
as
-- per-year counter: this row's identity value minus the lowest identity value of another row in the same year (or 0 if it is the first), plus 1
UPDATE tt SET YearlySeq = ISNULL(ii.ReceptionR1ID - (SELECT MIN(ReceptionR1ID) FROM dbo.ReceptionR1 xr WHERE DATEPART(year,xr.dateDepot) = DATEPART(year,ii.dateDepot) and xr.ReceptionR1ID <> ii.ReceptionR1ID ),0) + 1
FROM dbo.ReceptionR1 tt
JOIN inserted ii on ii.ReceptionR1ID = tt.ReceptionR1ID
GO
insert into ReceptionR1 (somethingElse) values ('dumb')
insert into ReceptionR1 (somethingElse) values ('requirements')
insert into ReceptionR1 (somethingElse) values ('lead')
insert into ReceptionR1 (somethingElse) values ('to')
insert into ReceptionR1 (somethingElse) values ('big')
insert into ReceptionR1 (somethingElse) values ('problems')
insert into ReceptionR1 (somethingElse) values ('later')
select * from ReceptionR1
Suppose we have a class with a limit of 100 students, and we want the StudentId column to take values between 1 and 100; beyond this limit no StudentId should be generated.
Create Table Class
(
StudentId Int Primary Key Identity(1,1),
StudentName Varchar(25)
)
insert into Class values('Jhon')
/* rows 2 through 100 are inserted the same way */
insert into Class values('Joy')
Record 101:
insert into Class values('Joy') -- when we insert the 101st row, an error will occur
CREATE TABLE RegTable
(StudentId int,
CONSTRAINT CheckRegNumber CHECK (StudentId <=100 and StudentId >0 )
);
Add a CONSTRAINT to your int column
CREATE TRIGGER LimitCount
ON Student
FOR INSERT
AS
IF (SELECT COUNT(StudentId) FROM Student) > 100
BEGIN
    -- DO SOMETHING, e.g. raise an error, OR ROLLBACK
    ROLLBACK;
END
One drawback of this is that it doesn't guarantee the rows are actually numbered 1 to 100 if there are deletions, so you will have to do more in the DO SOMETHING section.
If you want to limit your ClassTable to 100 rows, you could create an AFTER INSERT trigger. Since your column is an identity column, you cannot rely on the Id, because you could be in a situation where you can't insert rows even though you have fewer than 100 students. This usually occurs when you insert and delete rows. One way to solve this problem is by resetting the identity column with the DBCC CHECKIDENT command (a reseed sketch is shown after the trigger below), which you do not want to be doing every time.
CREATE TRIGGER LimitRows
on ClassTable
after insert
as
declare @rowsCount int
select @rowsCount = Count(*) from ClassTable
if @rowsCount > 100
begin
rollback
end
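For reference, the identity reseed mentioned above would look something like this (the seed value of 0 is just an example):
-- reseed the identity counter; the next inserted row normally gets StudentId 1
DBCC CHECKIDENT ('ClassTable', RESEED, 0);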
Do the statistics (which help decide whether an index is to be used) take into account the number of rows per actual column value, or do they just use the average number of rows per value?
Suppose I have a table with a bit column called active which has a million rows, but with 99.99% set to false. If I have an index on this column, is SQL Server smart enough to know to use the index when searching for active=1, but that there is no point when searching for active=0?
Another example: if I have a table which has, say, 1,000,000 records with an indexed column which contains about 50,000 different values, with an average of 10 rows per value, but then one special value which has 500,000 rows. The index may not be useful when searching for this special value, but would be very useful when looking for any of the other values.
But does this special case ruin the effectiveness of the index?
You can see for yourself:
CREATE TABLE IndexTest (
Id int not null primary key identity(1,1),
Active bit not null default(0),
IndexedValue nvarchar(10) not null
)
CREATE INDEX IndexTestActive ON IndexTest (Active)
CREATE INDEX IndexTestIndexedValue ON IndexTest (IndexedValue)
DECLARE @values table
(
Id int primary key IDENTITY(1, 1),
Value nvarchar(10)
)
INSERT INTO @values(Value) VALUES ('1')
INSERT INTO @values(Value) VALUES ('2')
INSERT INTO @values(Value) VALUES ('3')
INSERT INTO @values(Value) VALUES ('4')
INSERT INTO @values(Value) VALUES ('5')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
DECLARE @rowCount int
SET @rowCount = 100000
WHILE(@rowCount > 0)
BEGIN
DECLARE @valueIndex int
SET @valueIndex = CAST(RAND() * 10 + 1 as int)
DECLARE @selectedValue nvarchar(10)
SELECT @selectedValue = Value FROM @values WHERE Id = @valueIndex
DECLARE @isActive bit
SELECT @isActive = CASE
WHEN RAND() < 0.001 THEN 1
ELSE 0
END
INSERT INTO IndexTest(Active, IndexedValue) VALUES (@isActive, @selectedValue)
SET @rowCount = @rowCount - 1
END
SELECT count(*) FROM IndexTest WHERE Active = 1
SELECT count(*) FROM IndexTest WHERE Active = 0
SELECT count(*) FROM IndexTest WHERE IndexedValue = '1'
SELECT count(*) FROM IndexTest WHERE IndexedValue = 'Many'
It looks to me like it always uses the indexes in the query plans for these queries.
It creates a histogram and will thus use that.
With a bit column it will have a good idea how many rows are 0 and how many are 1.
With a string column, it will have a rough idea of "bands" (values starting with a, b, c, etc.). The same goes for numbers (it creates x bands of value ranges).
Just look up how the statistics look in your Management Studio - you can actually access the histograms.
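For example (a sketch, reusing the table and index names from the question's test script), the histogram behind each index's statistics can be inspected with:
DBCC SHOW_STATISTICS ('IndexTest', 'IndexTestIndexedValue') WITH HISTOGRAM;
DBCC SHOW_STATISTICS ('IndexTest', 'IndexTestActive') WITH HISTOGRAM;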
You can simply look at the statistics and see for yourself :) DBCC SHOW_STATISTICS. See the Remarks section; it has a nice explanation of how the histograms are actually stored and used:
To create the histogram, the query optimizer sorts the column values, computes the number of values that match each distinct column value and then aggregates the column values into a maximum of 200 contiguous histogram steps. Each step includes a range of column values followed by an upper bound column value. The range includes all possible column values between boundary values, excluding the boundary values themselves. The lowest of the sorted column values is the upper boundary value for the first histogram step.
For each histogram step:
- Bold line represents the upper boundary value (RANGE_HI_KEY) and the number of times it occurs (EQ_ROWS).
- Solid area left of RANGE_HI_KEY represents the range of column values and the average number of times each column value occurs (AVG_RANGE_ROWS). The AVG_RANGE_ROWS for the first histogram step is always 0.
- Dotted lines represent the sampled values used to estimate the total number of distinct values in the range (DISTINCT_RANGE_ROWS) and the total number of values in the range (RANGE_ROWS). The query optimizer uses RANGE_ROWS and DISTINCT_RANGE_ROWS to compute AVG_RANGE_ROWS and does not store the sampled values.
The query optimizer defines the histogram steps according to their statistical significance. It uses a maximum difference algorithm to minimize the number of steps in the histogram while maximizing the difference between the boundary values. The maximum number of steps is 200. The number of histogram steps can be fewer than the number of distinct values, even for columns with fewer than 200 boundary points. For example, a column with 100 distinct values can have a histogram with fewer than 100 boundary points.