I want to update the column ItemValue of table Items with a decimal value generated randomly between 1 and 100 (a different value for each row). Each value should have two (random) decimal digits.
CREATE TABLE Items
(
ItemID int IDENTITY(1,1) NOT NULL,
ItemValue decimal(13, 4) NULL,
CONSTRAINT PK_Items PRIMARY KEY CLUSTERED (ItemID ASC)
)
INSERT INTO Items(ItemValue) VALUES (0)
INSERT INTO Items(ItemValue) VALUES (0)
INSERT INTO Items(ItemValue) VALUES (0)
INSERT INTO Items(ItemValue) VALUES (0)
-- Now, I want to update the table
You can use RAND to generate a random number. But there is one problem: RAND is evaluated only once per query, so all your rows would contain the same random value. You can seed it with CHECKSUM(NEWID()) to make it random per row, like this:
UPDATE Items
SET ItemValue = ROUND(RAND(CHECKSUM(NEWID())) * 100, 2)
You could use this snippet to generate random decimal values:
CONVERT(DECIMAL(13, 4), 10 + (30 - 10) * RAND(CHECKSUM(NEWID())))
This will generate random decimal numbers between 10 and 30.
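Applied to the question's range of 1 to 100 with two decimal places, the same min + (max - min) * RAND pattern gives:
UPDATE Items
SET ItemValue = ROUND(1 + (100 - 1) * RAND(CHECKSUM(NEWID())), 2)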
I have a table which uses a sequence to auto-generate the Primary Key when inserting a record. However, the sequence is generating negative values.
How do I enforce that only positive values are generated, and is there a way to generate the ids randomly (especially for a varchar type)?
questionnaries.sql
CREATE TABLE public.questionnaries
(
id integer NOT NULL DEFAULT nextval('questionnaries_id_seq'::regclass),
personname character varying(255) NOT NULL,
question character varying(255) NOT NULL,
response character varying(255) NOT NULL,
CONSTRAINT questionnaries_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.questionnaries
OWNER TO postgres;
questionnaries_id_seq
CREATE SEQUENCE public.questionnaries_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 6
CACHE 1;
ALTER TABLE public.questionnaries_id_seq
OWNER TO postgres;
First, create a sequence like below, starting with whichever number you want, e.g. 0 or 100.
CREATE SEQUENCE questionnaries_id_seq MINVALUE 0 START 0;
-- MINVALUE 0 is required here: the default minimum is 1, which would reject START 0
You can also query it:
SELECT nextval('questionnaries_id_seq');
A sequence generates negative values in two scenarios:
1. You created the sequence with a negative INCREMENT BY value ("-1").
2. The INCREMENT BY is positive and correct, but the sequence reached its MAXVALUE and, because it cycles, restarted from its MINVALUE.
There are two solutions for this:
Use NO MAXVALUE together with NO CYCLE, as specified below.
CREATE SEQUENCE <sequence_name>
MINVALUE 0
NO MAXVALUE
START WITH 0
INCREMENT BY 1
NO CYCLE;
Use the "SERIAL" to generate the numerical values by PostgreSQL.
CREATE TABLE table_name (
column_1 SERIAL PRIMARY KEY,  -- SERIAL creates and attaches its own sequence
column_2 varchar(40) NOT NULL
);
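A quick sketch of use (values are illustrative): with SERIAL, PostgreSQL creates and owns the backing sequence automatically, so inserts simply omit the id column.
INSERT INTO table_name (column_2) VALUES ('first');  -- column_1 becomes 1
INSERT INTO table_name (column_2) VALUES ('second'); -- column_1 becomes 2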
I have a table like this :
create table ReceptionR1
(
numOrdre char(20) not null,
dateDepot datetime null,
...
)
I want to increment my id field (numOrdre) like '225/2015', '226/2015', ..., '1/2016', etc. What do I have to do for that?
2015 here is the current year.
Please let me know any possible way.
You really, and I mean Really, don't want to do such a thing, especially as your primary key. You'd be better off using a simple int identity column for your primary key and adding a non-nullable create-date column of type datetime2 with a default value of SYSDATETIME().
Create the incrementing number per year either as a calculated column or by using an INSTEAD OF INSERT trigger (if you don't want it to be re-calculated each time). This can be done fairly easily with the ROW_NUMBER function, as sketched below.
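A minimal sketch of the recalculated-each-time variant, assuming (per the advice above) an int identity primary key named ID and a create-date column named CreatedAt - both names are illustrative:
CREATE VIEW dbo.ReceptionR1Numbered AS
SELECT ID,
       CAST(ROW_NUMBER() OVER (PARTITION BY YEAR(CreatedAt) ORDER BY ID) AS varchar(10))
           + '/' + CAST(YEAR(CreatedAt) AS char(4)) AS numOrdre
FROM dbo.ReceptionR1;
-- the per-year counter restarts each January by itself; nothing is stored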
As everyone else has said - don't use this as your primary key! But you could do the following, if you're on SQL Server 2012 or newer:
-- step 1 - create a sequence
CREATE SEQUENCE dbo.SeqOrderNo AS INT
START WITH 1001 -- start with whatever value you need
INCREMENT BY 1
NO CYCLE
NO CACHE;
-- create your table - use INT IDENTITY as your primary key
CREATE TABLE dbo.ReceptionR1
(
ID INT IDENTITY
CONSTRAINT PK_ReceptionR1 PRIMARY KEY CLUSTERED,
dateDepot DATE NOT NULL,
...
-- add a column called "SeqNumber" that gets filled from the sequence
SeqNumber INT,
-- you can add a *computed* column here
OrderNo AS CAST(YEAR(dateDepot) AS VARCHAR(4)) + '/' + CAST(SeqNumber AS VARCHAR(4))
)
So now, when you insert a row, it has a proper and well-defined primary key (ID), and when you fill the SeqNumber column with
INSERT INTO dbo.ReceptionR1 (dateDepot, SeqNumber)
VALUES (SYSDATETIME(), NEXT VALUE FOR dbo.SeqOrderNo)
then the SeqNumber column gets the next value for the sequence, and the OrderNo computed column gets filled with 2015/1001, 2015/1002 and so forth.
Now when 2016 comes around, you just reset the sequence back to its starting value:
ALTER SEQUENCE dbo.SeqOrderNo RESTART WITH 1001;
and you're done - the rest of your solution works as before.
If you want to make sure you never accidentally insert a duplicate value, you can even put a unique index on your OrderNo column in your table.
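For instance (the index name is illustrative; the computed column is deterministic, so it is indexable):
CREATE UNIQUE INDEX UQ_ReceptionR1_OrderNo ON dbo.ReceptionR1 (OrderNo);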
Once more, you cannot use the combo field as your primary key. This solution sort of works on earlier versions of SQL Server and calculates the new annual YearlySeq counter automatically - but you had better have an index on dateDepot, and you might still have issues if there are many, many (hundreds of thousands of) rows per year.
In short: fight the requirement.
Given
create table dbo.ReceptionR1
(
ReceptionR1ID INT IDENTITY PRIMARY KEY,
YearlySeq INT ,
dateDepot datetime DEFAULT (GETDATE()) ,
somethingElse varchar(99) null,
numOrdre as LTRIM(STR(YearlySeq)) + '/' + CONVERT(CHAR(4),dateDepot,111)
)
GO
CREATE TRIGGER R1Insert on dbo.ReceptionR1 for INSERT
as
-- Derive each inserted row's per-year counter from the identity values:
-- (this row's ID) - (lowest ID of any other row in the same year) + 1;
-- ISNULL covers the first row of a year, which has no predecessor.
UPDATE tt SET YearlySeq = ISNULL(ii.ReceptionR1ID - (SELECT MIN(ReceptionR1ID) FROM dbo.ReceptionR1 xr WHERE DATEPART(year,xr.dateDepot) = DATEPART(year,ii.dateDepot) and xr.ReceptionR1ID <> ii.ReceptionR1ID ),0) + 1
FROM dbo.ReceptionR1 tt
JOIN inserted ii on ii.ReceptionR1ID = tt.ReceptionR1ID
GO
insert into ReceptionR1 (somethingElse) values ('dumb')
insert into ReceptionR1 (somethingElse) values ('requirements')
insert into ReceptionR1 (somethingElse) values ('lead')
insert into ReceptionR1 (somethingElse) values ('to')
insert into ReceptionR1 (somethingElse) values ('big')
insert into ReceptionR1 (somethingElse) values ('problems')
insert into ReceptionR1 (somethingElse) values ('later')
select * from ReceptionR1
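With all seven rows inserted in the same year, YearlySeq comes out as 1 through 7, so numOrdre reads '1/2015' through '7/2015' for a 2015 run date.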
I am trying to generate a unique varchar ID that should contain 4 alphabetic characters followed by 4 numeric digits. They can be random, but the ID should start with the letters, with the numbers following. This is in SQL Server 2008 R2.
E.g.:
ABCD1234
rtfd8798
tyry8745
Could anyone help?
If you are looking to produce exactly four random alphabetical characters followed by four random digits, the following will accomplish it.
It's not pretty or flexible, but it should be enough to lead you in the right direction.
DECLARE @Data VARCHAR(8)
SET @Data = ''
-- Build the first four characters (the source string omits the look-alike
-- letters 'l' and 'O', so it has 50 candidates)
WHILE (LEN(@Data) < 4)
BEGIN
SET @Data = @Data + SUBSTRING('abcdefghijkmnopqrstuvwxyzABCDEFGHIJKLMNPQRSTUVWXYZ', CAST(RAND() * 50 AS INT) + 1, 1)
END
-- Build the next four digits
WHILE (LEN(@Data) < 8)
BEGIN
SET @Data = @Data + SUBSTRING('0123456789', CAST(RAND() * 10 AS INT) + 1, 1)
END
PRINT @Data
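Note that random is not the same as unique: to actually guarantee uniqueness you would still need a UNIQUE constraint on the column and a retry on collision.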
In my opinion, the only real solution is to use an ID INT IDENTITY(1,1) column and build the visible ID from it:
CREATE TABLE dbo.tblProducts
(ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
ProductID AS 'PUID' + RIGHT('00000000' + CAST(ID AS VARCHAR(8)), 8) PERSISTED,
.... your other columns here....
)
Then, when you insert new data into the table, ProductID is generated automatically with values like PUID00000001, PUID00000002, and so on.
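A quick usage sketch (ProductName stands in for the elided columns and is hypothetical):
INSERT INTO dbo.tblProducts (ProductName) VALUES ('Widget');
SELECT ID, ProductID FROM dbo.tblProducts;  -- e.g. 1, PUID00000001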
I have one table in the database which should contain a sequence number.
create table SequenceNumber(
number int identity(1,1) primary key
)
Now I want to store the numbers from 1 to 1448 without setting IDENTITY_INSERT ON/OFF and without a counter variable.
I need the values 1 to 1448 in the 'number' column.
Can anyone tell me how I can do it?
Yes, you can do it as follows - just change the value 1448 as per your need.
Idea from here: http://www.codeproject.com/Tips/780441/Tricky-SQL-Questions
CREATE TABLE SequenceNumber(
NUMBER BIGINT IDENTITY(1,1) PRIMARY KEY
)
WHILE(1=1)
BEGIN
-- each INSERT consumes the next identity value; stop once 1448 has been generated
INSERT INTO SequenceNumber
DEFAULT VALUES
IF EXISTS(SELECT 1 FROM SequenceNumber WHERE NUMBER = 1448)
BREAK
END
SELECT NUMBER FROM SequenceNumber
Do the statistics (which help decide whether an index is to be used) take into account the number of rows per actual column value, or do they just use the average number of rows per value?
Suppose I have a table with a bit column called active which has a million rows, but with 99.99% set to false. If I have an index on this column, is SQL Server smart enough to know to use the index when searching for active = 1, but that there is no point when searching for active = 0?
Another example: say I have a table with 1,000,000 records and an indexed column containing about 50,000 different values, an average of 10 rows per value, but one special value with 500,000 rows. The index may not be useful when searching for that special value, but would be very useful when looking for any of the others.
But does this special case ruin the effectiveness of the index?
You can see for yourself:
CREATE TABLE IndexTest (
Id int not null primary key identity(1,1),
Active bit not null default(0),
IndexedValue nvarchar(10) not null
)
CREATE INDEX IndexTestActive ON IndexTest (Active)
CREATE INDEX IndexTestIndexedValue ON IndexTest (IndexedValue)
DECLARE @values table
(
Id int primary key IDENTITY(1, 1),
Value nvarchar(10)
)
INSERT INTO @values(Value) VALUES ('1')
INSERT INTO @values(Value) VALUES ('2')
INSERT INTO @values(Value) VALUES ('3')
INSERT INTO @values(Value) VALUES ('4')
INSERT INTO @values(Value) VALUES ('5')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
INSERT INTO @values(Value) VALUES ('Many')
DECLARE @rowCount int
SET @rowCount = 100000
WHILE(@rowCount > 0)
BEGIN
DECLARE @valueIndex int
SET @valueIndex = CAST(RAND() * 10 + 1 as int)
DECLARE @selectedValue nvarchar(10)
SELECT @selectedValue = Value FROM @values WHERE Id = @valueIndex
DECLARE @isActive bit
SELECT @isActive = CASE
WHEN RAND() < 0.001 THEN 1
ELSE 0
END
INSERT INTO IndexTest(Active, IndexedValue) VALUES (@isActive, @selectedValue)
SET @rowCount = @rowCount - 1
END
SELECT count(*) FROM IndexTest WHERE Active = 1
SELECT count(*) FROM IndexTest WHERE Active = 0
SELECT count(*) FROM IndexTest WHERE IndexedValue = '1'
SELECT count(*) FROM IndexTest WHERE IndexedValue = 'Many'
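Given the distribution above, Active = 1 should match only around 100 of the 100,000 rows, 'Many' about half the table, and each of '1' through '5' roughly a tenth.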
It looks to me like it always uses the indexes in the resulting query plan.
It creates a histogram and will thus use that.
With a bit column it will have a good idea how many rows are 0 and how many are 1.
With a string column, it will have a rough idea of "bands" (values starting with a, b, c, etc.). The same goes for numbers (it creates x bands of value ranges).
Just look at how the statistics appear in your Management Studio - you can actually access the histograms.
You can simply look at the statistics and see for yourself :) with DBCC SHOW_STATISTICS. See the Remarks section; it has a nice explanation of how the histograms are actually stored and used:
To create the histogram, the query optimizer sorts the column values, computes the number of values that match each distinct column value and then aggregates the column values into a maximum of 200 contiguous histogram steps. Each step includes a range of column values followed by an upper bound column value. The range includes all possible column values between boundary values, excluding the boundary values themselves. The lowest of the sorted column values is the upper boundary value for the first histogram step.
For each histogram step:
- The bold line represents the upper boundary value (RANGE_HI_KEY) and the number of times it occurs (EQ_ROWS).
- The solid area left of RANGE_HI_KEY represents the range of column values and the average number of times each column value occurs (AVG_RANGE_ROWS). The AVG_RANGE_ROWS for the first histogram step is always 0.
- The dotted lines represent the sampled values used to estimate the total number of distinct values in the range (DISTINCT_RANGE_ROWS) and the total number of values in the range (RANGE_ROWS). The query optimizer uses RANGE_ROWS and DISTINCT_RANGE_ROWS to compute AVG_RANGE_ROWS and does not store the sampled values.
The query optimizer defines the histogram steps according to their statistical significance. It uses a maximum difference algorithm to minimize the number of steps in the histogram while maximizing the difference between the boundary values. The maximum number of steps is 200. The number of histogram steps can be fewer than the number of distinct values, even for columns with fewer than 200 boundary points. For example, a column with 100 distinct values can have a histogram with fewer than 100 boundary points.
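For example, to inspect the histogram for the test table built earlier (table and index names come from that script):
DBCC SHOW_STATISTICS ('IndexTest', 'IndexTestActive');
-- the third result set is the histogram: RANGE_HI_KEY, RANGE_ROWS, EQ_ROWS, DISTINCT_RANGE_ROWS, AVG_RANGE_ROWS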