How to get rows containing a number in Snowflake - snowflake-cloud-data-platform

I have tried regexp, regexp_like, and like, but none of them worked:
select * from b
where regexp_like(col1, '\d')
where regexp_like(col1, '[0-9]')
...etc.
We have this table:

Col1
------------------
avr100000
adfdsgwr
20170910020359.761
Enterprise
adf56ds76gwr
0+093000
080000
adfdsgwr

The output should be these 5 rows:

Col1
------------------
avr100000
20170910020359.761
adf56ds76gwr
0+093000
080000

Thanks

You can use regexp_instr in the where clause to see if it finds a digit anywhere in the string:
create temp table b(col1 string);
insert into b (col1) values ('avr100000'), ('adfdsgwr'),
('20170910020359.761'),
('Enterprise'),
('adf56ds76gwr'),
('0+093000'),
('080000'),
('adfdsgwr')
;
select col1 from b where regexp_instr(col1, '\\d') > 0;
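As a side note on why the original attempts likely failed: Snowflake's REGEXP_LIKE implicitly anchors the pattern to the entire string (and a lone backslash needs doubling inside a string literal), so regexp_like(col1, '\\d') only matches strings that are exactly one digit. Wrapping the pattern so it can match anywhere also works:
select col1 from b where regexp_like(col1, '.*\\d.*');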
I'm updating my answer to note that regexp_instr is going to perform about 3.8 times faster than using regexp_count for this requirement.
The reason is that regexp_instr will stop and report the location of the first digit it encounters. In contrast, regexp_count will continue examining the string until it reaches its end. If we only want to know if a digit exists in a string, we can stop as soon as we encounter the first one.
If it is a small data set, this won't matter much. For large data sets, that 3.8 times faster makes a big difference. Here is a mini test harness that shows the performance difference:
create or replace transient table RANDOM_STRINGS as
select RANDSTR(50, random()) as RANDSTR from table(generator (rowcount => 10000000));
alter session set use_cached_result = false;
-- Run these statements multiple times on an X-Small warehouse to test performance.
-- Run both once to warm the cache, then note the times of the subsequent runs.
-- Average over 10 runs with a warm cache: 3.315s
select count(*) as ROWS_WITH_NUMBERS
from RANDOM_STRINGS where regexp_count(randstr, '\\d') > 0;
-- Average over 10 runs with a warm cache: 0.8686s
select count(*) as ROWS_WITH_NUMBERS
from RANDOM_STRINGS where regexp_instr(randstr, '\\d') > 0;

One method is to count how many alphabetic characters there are:
select column1 as input
,regexp_count(column1, '[A-Za-z]') as alpha_count
from values
('0-100000'),
('adfdsgwr'),
('20170910020359.761'),
('Enterprise'),
('adfdsgwr'),
('0+093000'),
('1-080000'),
('adfdsgwr')
INPUT               ALPHA_COUNT
------------------  -----------
0-100000            0
adfdsgwr            8
20170910020359.761  0
Enterprise          10
adfdsgwr            8
0+093000            0
1-080000            0
adfdsgwr            8
and thus exclude those where it is not zero:
select column1 as input
from values
('0-100000'),
('adfdsgwr'),
('20170910020359.761'),
('Enterprise'),
('adfdsgwr'),
('0+093000'),
('1-080000'),
('adfdsgwr')
where regexp_count(column1, '[A-Za-z]') = 0
gives:
INPUT
------------------
0-100000
20170910020359.761
0+093000
1-080000

All you need to do is find 1 or more instances of a numeric value:
select column1 as input
from values
('avr100000'),
('adfdsgwr'),
('20170910020359.761'),
('Enterprise'),
('adf56ds76gwr'),
('0+093000'),
('1-080000'),
('adfdsgwr')
where regexp_count(column1, '\\d') > 0;
Results:
avr100000
20170910020359.761
adf56ds76gwr
0+093000
1-080000

Related

Varbinary search with min and max pattern type

So I'm trying to write a query that searches through varbinary data. The issue is that I can't quite finish what I'm trying to achieve. What you should know: the column is varbinary(50), and the patterns it contains occur in no specific order, meaning every prefix could appear anywhere, as long as each group is 3 bytes (0x000000). The first byte is the prefix; the second and third are the value data that I want to check against a range. All the data is written like this.
What I've tried:
DECLARE @t TABLE (
val VARBINARY(MAX)
)
INSERT INTO @t SELECT 0x00000100000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00001000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00010000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00100000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00000f00000000000000000000000000000000000000000000000000
DECLARE @pattern varbinary(max)
DECLARE @pattern2 varbinary(max)
SET @pattern = 0x0001
SET @pattern2 = @pattern + 0xFF
SELECT @pattern, @pattern2
SELECT *
FROM @t
WHERE val < @pattern
OR val > @pattern2
This was a total bust: the patterns were accurate only up to 2 symbols, and if I used 4 symbols as a pattern it would work only if the pattern was in a predefined position. I've tried combinations of this and everything below.
WHERE CONVERT(varbinary(2), val) = 0xdata
also this:
select *
from table
where CONVERT(varchar(max),val,2) like '%data%'
Which works great for searching exact patterns, but not for ranges; I need some combination of both.
I'm aware I could technically add every possible outcome 1 by 1 and let it cycle through all the listed possibilities, but there has to be a smarter way.
Goals:
Locate the prefix (the first byte of each 3-byte group).
Define a max value after the prefix; everything above that threshold should be listed in the results. Let's say '26' is the prefix and the highest allowed number after it is '9600', i.e. '269600'. Basically, any data that exceeds the pattern '269600' should be detected, for example '269700', for which a query result would post this:
select * from table where CONVERT(varchar(max),attr,2) like
'%269700%'
I need something that detects this on its own while I just give it a start and an end to look between. The highest value variation would be '26ffff', but limiting it to something like 'ff00' is acceptable for what I'm looking for.
My best guess is 2 defined numbers, 1 being the start of the allowed range and the 2nd a cap, so it doesn't go through every possible outcome. But I'd be happy with whatever works.
I'm aware this explanation is pretty dire, but bear with me. Thanks.
*Update after the last suggestion:
SELECT MIN(val), MAX(val) FROM @t where CONVERT(varchar(max),val,2) like '%26%'
This is pretty close, but it's not sufficient. I need to cycle through a lot of data, and this selects only the min or max even with the prefix filter. I believe I need min and max defined as the start and end of a range that the query should search within.
**Update 2
I'm afraid you'll end up disappointed; it's nothing that interesting. The data comes from a game server that stores it like this: there are predefined prefixes, which are the stat types, and the rest of the data is the actual numeric value of the stat. The data is written in 6-character (3-byte) intervals. Here is a sample of the data stream; it's always 6-6-6-6-6 for as long as there's space left to record on, since the column is capped at 50 bytes.
0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000
**Update 3
Yes, the groups are always in a 3-byte fashion, and yes, my idea was exactly that: use the first byte to narrow down the search, then use the second 2 bytes to filter. I just don't know how to pull it off in an effective way. I'm not sure I understood what you meant by "predictively aligned"; assuming you meant whether the stat/prefix/header always ends up at the same binary location, the answer is no. If the 3-byte pattern is violated, the data becomes unreadable, meaning even if you don't need the extra byte you have to count it, otherwise the data breaks. Example of working data:
0x032900'041400'
Example of broken data:
0x0329'041400'
The only issue I can think of is when the prefix and part of the value are both true, for example:
0x262600
unless the query is specifically ordered to read the data in 3-byte sequence, meaning it knows that the first byte is always a prefix and the other 2 bytes are value.
Q: Can that be used as an alignment indicator, so that the first non-zero byte after at least 3 zero bytes indicates the start of a group?
A: Yes, but that's unlikely. Although possible, it would be written in order, like:
0x260000'270000'
It wouldn't skip forward over an entire 3-byte group filled with no data. That type of entry would only occur if someone manually inserted it into the db; as far as I'm aware, the server doesn't make records with gaps like this:
0x260000'000000'270000'
To address your last comment: that's something I don't know how to express in a working query, except for the boneheaded version, which would be me manually adding every possible number within my desired range. As you can imagine, the query would look terrible. That's why I'm looking for a smarter solution that I can't figure out how to build myself.
select * from @t
where (CONVERT(varchar(max),val,2) like '%262100%' or
CONVERT(varchar(max),val,2) like '%262200%' or
etc...)
This may be a partial answer from which you can build.
The following will split the input data up into 3-byte (6 hex character) groups. It then extracts the first byte as the key, and several representations of the remaining two bytes as values.
SELECT S.*, P.*
FROM @t T
CROSS APPLY (
SELECT
N.Offset,
SUBSTRING(T.val, N.Offset + 1, 3) AS Segment
FROM (
VALUES
(0), (3), (6), (9), (12), (15), (18), (21), (24), (27),
(30), (33), (36), (39)
) N(Offset)
WHERE N.Offset < LEN(T.val) - 3
) S
CROSS APPLY(
SELECT
CONVERT(TINYINT, SUBSTRING(S.Segment, 1, 1)) AS [Key],
CONVERT(TINYINT, SUBSTRING(S.Segment, 2, 1)) AS [Value1],
CONVERT(TINYINT, SUBSTRING(S.Segment, 3, 1)) AS [Value2],
CONVERT(SMALLINT, SUBSTRING(S.Segment, 2, 2)) AS [Value12],
CONVERT(SMALLINT, SUBSTRING(S.Segment, 3, 1) + SUBSTRING(S.Segment, 2, 1)) AS [Value21]
) P
Given the following input data
0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000
--^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----
The following results are extracted:
Offset  Segment   Key  Value1  Value2  Value12  Value21
------  --------  ---  ------  ------  -------  -------
     0  0x032900    3      41       0    10496       41
     3  0x041400    4      20       0     5120       20
     6  0x0B1400   11      20       0     5120       20
     9  0x0C1400   12      20       0     5120       20
    12  0x0D0F00   13      15       0     3840       15
    15  0x177800   23     120       0    30720      120
    18  0x224600   34      70       0    17920       70
    21  0x467800   70     120       0    30720      120
    24  0x473C00   71      60       0    15360       60
    27  0x550F00   85      15       0     3840       15
    30  0x000000    0       0       0        0        0
    33  0x000000    0       0       0        0        0
    36  0x000000    0       0       0        0        0
See this db<>fiddle.
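One caveat on the extracted values: Value12 is a signed SMALLINT, so two-byte values of 0x8000 and above come out negative (0x9600, for instance). If you plan to range-filter on the numeric form, convert through a 4-byte value, or compare the raw bytes as the answers below do. A quick illustration:
-- 0x9600 read as SMALLINT wraps negative; padding to 4 bytes keeps it unsigned.
SELECT CONVERT(SMALLINT, 0x9600) AS signed_value,   -- -27136
       CONVERT(INT, 0x00009600)  AS unsigned_value; -- 38400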
DECLARE @YourTable table
(
Id INT PRIMARY KEY,
Val VARBINARY(50)
)
INSERT @YourTable
VALUES (1, 0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000),
(2, 0x0329002637000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000);
SELECT Id, Triplet
FROM @YourTable T
CROSS APPLY GENERATE_SERIES(1, DATALENGTH(T.Val), 3) s
CROSS APPLY (VALUES (SUBSTRING(T.Val, s.value, 3))) V(Triplet)
WHERE Triplet BETWEEN 0x263700 AND 0x2637FF
This only works on SQL Server 2022 and later, because of GENERATE_SERIES.
DECLARE @YourTable table
(
Id INT PRIMARY KEY,
Val VARBINARY(50)
)
INSERT @YourTable
VALUES (1, 0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000),
(2, 0x0329002637000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000);
SELECT Id, Triplet
FROM @YourTable T
JOIN (VALUES (1),(4),(7),(10),(13),(16),(19),(22),(25),(28),(31),(34),(37),(40),(43),(46),(49)) Nums(Num) ON Num <= DATALENGTH(T.Val)
CROSS APPLY (VALUES (SUBSTRING(T.Val, Num, 3))) V(Triplet)
WHERE Triplet BETWEEN 0x263700 AND 0x2637FF
This one works on older versions without GENERATE_SERIES.
Credit to Martin Smith on Stack Exchange:
https://dba.stackexchange.com/questions/323235/varbinary-pattern-search
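To make the range configurable rather than hard-coded, the boundary literals can be built from variables with binary concatenation. A minimal sketch against the same @YourTable as above (the 0x26/0x3700 values are just the ones from this example, not fixed parts of the technique):
DECLARE @prefix BINARY(1) = 0x26;   -- stat prefix to search for
DECLARE @lo BINARY(2) = 0x3700;     -- lowest two-byte value to match
DECLARE @hi BINARY(2) = 0x37FF;     -- highest two-byte value to match

SELECT Id, Triplet
FROM @YourTable T
JOIN (VALUES (1),(4),(7),(10),(13),(16),(19),(22),(25),(28),(31),(34),(37),(40),(43),(46),(49)) Nums(Num) ON Num <= DATALENGTH(T.Val)
CROSS APPLY (VALUES (SUBSTRING(T.Val, Num, 3))) V(Triplet)
-- + concatenates binary values, so the bounds become 0x263700 and 0x2637FF:
WHERE Triplet BETWEEN @prefix + @lo AND @prefix + @hi;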

Need to generate from and to numbers based on the result set with a specified interval

I have below requirement.
The input is as below.
Create table Numbers
(
Num int
)
Insert into Numbers
values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15)
Create table FromTo
(
FromNum int
,ToNum int
)
Select * From FromTo
Output should be as below:
FromNum  ToNum
-------  -----
1        5
6        10
11       15
The actual requirement is as below.
I need to load data for a column into a table that will have thousands of records with different numbers.
Consider something like below:
1,2,5,7,9,11,15,34,56,78,98,123,453,765 etc.
I need to load these into another table that has FROM and TO columns, in intervals of 5000. For example, if within the first 5000 the numbers only go up to 3000, the first row should have FromNum 1 and ToNum 3000. Second row: if the data doesn't continue up to 10000 and the next number starts at 12312 (this is the 2nd row's FromNum), the ToNum should be +5000, i.e. 17312. Here too, if we don't have numbers all the way to 17312, the ToNum should be the largest number between 12312 and 17312.
Output should be as below:
FromNum  ToNum
-------  -------
1        3205
1095806  1100805
1100808  1105806
1105822  1110820
Can you guys please help me with a solution for the above?
Thanks in advance.
What you may try in this situation is to group data and get the expected results:
DECLARE @interval int = 5
INSERT INTO FromTo (FromNum, ToNum)
SELECT MIN(Num) AS FromNum, MAX(Num) AS ToNum
FROM Numbers
GROUP BY (Num - 1) / @interval
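The integer division does the bucketing: with @interval = 5, (Num - 1) / @interval maps 1-5 to bucket 0, 6-10 to bucket 1, and 11-15 to bucket 2, and MIN/MAX per bucket then produce the FromNum/ToNum pairs. A quick way to inspect the buckets:
-- Each Num with its computed bucket (integer division truncates):
SELECT Num, (Num - 1) / 5 AS Bucket
FROM Numbers
ORDER BY Num;
Note this simple grouping assumes a fixed interval grid; the gap-based 5000-wide ranges described later in the question would need the bucket derived from the running gaps instead.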

SQL Server - poor performance during Insert transaction

I have a stored procedure which executes a query and returns the row into variables like below:
SELECT @item_id = I.ID, @label_ID = SL.label_id
FROM tb_A I
LEFT JOIN tb_B SL ON I.ID = SL.item_id
WHERE I.NUMBER = @VAR
I have an IF to check whether @label_ID is null or not. If it is null, it goes to an INSERT statement; otherwise it goes to an UPDATE statement. Let's focus on the INSERT, where I know I'm having problems. The INSERT part is like below:
IF @label_ID IS NULL
BEGIN
INSERT INTO tb_B (item_id, label_qrcode, label_barcode, data_leitura, data_inclusao)
VALUES (@item_id, @label_qrcode, @label_barcode, @data_leitura, GETDATE())
END
So, tb_B has a PK on the ID column and an FK on the item_id column, which refers to column ID in table tb_A.
I ran SQL Server Profiler and I saw that sometimes the duration for this stored procedure takes around 2300ms and the normal average for this is 16ms.
I ran the "Execution Plan" and the biggest cost is in the "Clustered Index Insert" component. Showing below:
[Screenshots: estimated execution plan, actual execution plan, and details]
More details about the tables:
tb_A Storage:
Index space: 6.853,188 MB
Row count: 45988842
Data space: 5.444,297 MB
tb_B Storage:
Index space: 1.681,688 MB
Row count: 15552847
Data space: 1.663,281 MB
Statistics for INDEX 'PK_tb_B':

Name     Updated             Rows      Rows Sampled  Steps  Density  Average Key Length  String Index  Unfiltered Rows
-------  ------------------  --------  ------------  -----  -------  ------------------  ------------  ---------------
PK_tb_B  Sep 23 2018 2:30AM  15369616  15369616      5      1        4                   NO            15369616

All Density   Average Length  Columns
------------  --------------  -------
6.506343E-08  4               id

Histogram Steps:

RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
1             0           1        0                    1
8192841       8192198     1        8192198              1
8270245       65535       1        65535                1
15383143      7111878     1        7111878              1
15383144      0           1        0                    1
Statistics for INDEX 'IDX_tb_B_ITEM_ID':

Name              Updated             Rows      Rows Sampled  Steps  Density  Average Key Length  String Index  Unfiltered Rows
----------------  ------------------  --------  ------------  -----  -------  ------------------  ------------  ---------------
IDX_tb_B_ITEM_ID  Sep 23 2018 2:30AM  15369616  15369616      12     1        7.999424            NO            15369616

All Density   Average Length  Columns
------------  --------------  -----------
6.50728E-08   3.999424        item_id
6.506343E-08  7.999424        item_id, id

Histogram Steps:

RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
0             2214        0        1
16549857      0           1        0                    1
29907650      65734       1        65734                1
32097131      131071      1        131071               1
32296132      196607      1        196607               1
32406913      98303       1        98303                1
40163331      7700479     1        7700479              1
40237216      65535       1        65535                1
47234636      6946815     1        6946815              1
47387143      131071      1        131071               1
47439431      31776       1        31776                1
47439440      0           1        0                    1
[Screenshots: PK_tb_B index fragmentation; IDX_tb_B_Item_ID index fragmentation]
Are there any best practices I can apply to make this execution duration stable?
Hope you can help me!
Thanks in advance...
The problem is probably the data type of the clustered index. Clustered indexes store the table's data ordered by the key values. By default, your primary key is created with a clustered index. This is often the best place to have it,
but not always. If you have, for example, a clustered index over an NVARCHAR column, then every INSERT needs to find the right place to insert the new record. For example, if your table has one million rows ordered alphabetically and your new record starts with 'A', your clustered index may need to shift records from B through Z to fit the new record into the A group. If the new record starts with 'Z', fewer records move, but that still isn't free. If you don't have a column that lets you insert new records sequentially, you can create an identity column for this, or use another column that is logically sequential for any transaction regardless of the source system, for example a datetime column that records the time at which the insert occurs.
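As a sketch of that advice (table and column names here are illustrative, not the OP's actual schema), an ever-increasing identity key clustered on the table makes every insert an append at the end of the index rather than an insert into the middle of existing pages:
-- Illustrative only: a sequential surrogate key as the clustered PK.
CREATE TABLE dbo.tb_example
(
    id         INT IDENTITY(1,1) NOT NULL,
    label_code NVARCHAR(100)     NOT NULL,
    created_at DATETIME          NOT NULL CONSTRAINT DF_tb_example_created DEFAULT (GETDATE()),
    CONSTRAINT PK_tb_example PRIMARY KEY CLUSTERED (id)
);
-- Lookups on the wide column go through a nonclustered index instead:
CREATE NONCLUSTERED INDEX IX_tb_example_label_code
    ON dbo.tb_example (label_code);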
If you want more info, please check this Microsoft documentation

In SSRS, how can I add a row to aggregate all the rows that don't match a filter?

I'm working on a report that shows transactions grouped by type.
Type Total income
------- --------------
A 575
B 244
C 128
D 45
E 5
F 3
Total 1000
I only want to provide details for transaction types that represent more than 10% of the total income (i.e. A-C). I'm able to do this by applying a filter to the group:
Type Total income
------- --------------
A 575
B 244
C 128
Total 1000
What I want to display is a single row just above the total row that has a total for all the types that have been filtered out (i.e. the sum of D-F):
Type Total income
------- --------------
A 575
B 244
C 128
Other 53
Total 1000
Is this even possible? I've tried using running totals and conditionally hidden rows within the group. I've tried Iif inside Sum. Nothing quite seems to do what I need and I'm butting up against scope issues (e.g. "the value expression has a nested aggregate that specifies a dataset scope").
If anyone can give me any pointers, I'd be really grateful.
EDIT: Should have specified, but at present the dataset actually returns individual transactions:
ID Type Amount
---- ------ --------
1 A 4
2 A 2
3 B 6
4 A 5
5 B 5
The grouping is done using a row group in the tablix.
One solution is to solve that in the SQL source of your dataset instead of inside SSRS:
SELECT
CASE
WHEN CAST([Total income] AS FLOAT) / SUM([Total income]) OVER (PARTITION BY 1) >= 0.10 THEN [Type]
ELSE 'Other'
END AS [Type]
, [Total income]
FROM Source_Table
See also SQL Fiddle
Try to solve this in SQL, see SQL Fiddle.
SELECT I.*
,(
CASE
WHEN I.TotalIncome >= (SELECT Sum(I2.TotalIncome) / 10 FROM Income I2) THEN 10
ELSE 1
END
) AS TotalIncomePercent
FROM Income I
After this, create two sum groups.
SUM(TotalIncome * TotalIncomePercent) / 10
SUM(TotalIncome * TotalIncomePercent)
A second approach may be to use a calculated column in SSRS: try to create a calculated column with the above CASE expression. If it lets you create it, you can use it in the same way as in the SQL approach.
1) To show only income greater than 10%, use a row visibility condition like:
=iif(reportitems!total_income.value/10 <= I.totalincome, true, false)
Here reportitems!total_income.value is the total-of-all-income textbox value (the total of the detail group), and I.totalincome is the current field value.
2) Add one more row outside the detail group for the "Other" income and use an expression such as:
=reportitems!total_income.value - sum(iif(reportitems!total_income.value/10 <= I.totalincome, I.totalincome, nothing))

How do I generate a random number for each row in a T-SQL select?

I need a different random number for each row in my table. The following seemingly obvious code uses the same random value for each row.
SELECT table_name, RAND() magic_number
FROM information_schema.tables
I'd like to get an INT or a FLOAT out of this. The rest of the story is I'm going to use this random number to create a random date offset from a known date, e.g. 1-14 days offset from a start date.
This is for Microsoft SQL Server 2000.
Take a look at SQL Server - Set based random numbers which has a very detailed explanation.
To summarize, the following code generates a random number between 0 and 13 inclusive with a uniform distribution:
ABS(CHECKSUM(NewId())) % 14
To change your range, just change the number at the end of the expression. Be extra careful if you need a range that includes both positive and negative numbers. If you do it wrong, it's possible to double-count the number 0.
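For example, to get -10 through 10 without double-counting zero, take ABS before the modulo and shift afterwards (a small sketch following the same pattern as the answer above):
-- 21 possible values (0..20), shifted down: uniform over -10..10.
SELECT table_name,
       ABS(CHECKSUM(NEWID())) % 21 - 10 AS magic_number
FROM information_schema.tables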
A small warning for the math nuts in the room: there is a very slight bias in this code. CHECKSUM() results in numbers that are uniform across the entire range of the SQL int datatype, or at least as nearly so as my (the editor's) testing can show. However, there will be some bias when CHECKSUM() produces a number at the very top end of that range. Any time you get a number between the maximum possible integer and the last exact multiple of the size of your desired range (14 in this case) before that maximum integer, those results are favored over the remaining portion of your range that cannot be produced from that last multiple of 14.
As an example, imagine the entire range of the Int type is only 19. 19 is the largest possible integer you can hold. When CHECKSUM() results in 14-19, these correspond to results 0-5. Those numbers would be heavily favored over 6-13, because CHECKSUM() is twice as likely to generate them. It's easier to demonstrate this visually. Below is the entire possible set of results for our imaginary integer range:
Checksum Integer: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Range Result: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 0 1 2 3 4 5
You can see here that there are more chances to produce some numbers than others: bias. Thankfully, the actual range of the Int type is much larger... so much so that in most cases the bias is nearly undetectable. However, it is something to be aware of if you ever find yourself doing this for serious security code.
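One way to sanity-check the distribution (a quick probe of my own, assuming a reasonably modern SQL Server for the catalog views; it is not part of the original test) is to bucket a large sample and count occurrences; the counts should come out roughly equal:
-- Generate ~1M values via a cross join of catalog views and count each bucket.
SELECT n, COUNT(*) AS occurrences
FROM (
    SELECT TOP (1000000) ABS(CHECKSUM(NEWID())) % 14 AS n
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
) s
GROUP BY n
ORDER BY n;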
When called multiple times in a single batch, rand() returns the same number.
I'd suggest using convert(varbinary,newid()) as the seed argument:
SELECT table_name, 1.0 + floor(14 * RAND(convert(varbinary, newid()))) magic_number
FROM information_schema.tables
newid() is guaranteed to return a different value each time it's called, even within the same batch, so using it as a seed will prompt rand() to give a different value each time.
Edited to get a random whole number from 1 to 14.
RAND(CHECKSUM(NEWID()))
The above will generate a (pseudo-) random number between 0 and 1, exclusive. If used in a select, because the seed value changes for each row, it will generate a new random number for each row (it is not guaranteed to generate a unique number per row however).
Example when combined with an upper limit of 10 (produces numbers 1 - 10):
CAST(RAND(CHECKSUM(NEWID())) * 10 as INT) + 1
Transact-SQL Documentation:
CAST(): https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql
RAND(): http://msdn.microsoft.com/en-us/library/ms177610.aspx
CHECKSUM(): http://msdn.microsoft.com/en-us/library/ms189788.aspx
NEWID(): https://learn.microsoft.com/en-us/sql/t-sql/functions/newid-transact-sql
Random number generation between 1000 and 9999 inclusive:
FLOOR(RAND(CHECKSUM(NEWID()))*(9999-1000+1)+1000)
"+1" - to include upper bound values(9999 for previous example)
Answering the old question, but this answer has not been provided previously, and hopefully it will be useful for someone finding this result through a search engine.
With SQL Server 2008, a new function has been introduced, CRYPT_GEN_RANDOM(8), which uses CryptoAPI to produce a cryptographically strong random number, returned as VARBINARY(8000). Here's the documentation page: https://learn.microsoft.com/en-us/sql/t-sql/functions/crypt-gen-random-transact-sql
So to get a random number, you can simply call the function and cast it to the necessary type:
select CAST(CRYPT_GEN_RANDOM(8) AS bigint)
or to get a float between -1 and +1, you could do something like this:
select CAST(CRYPT_GEN_RANDOM(8) AS bigint) % 1000000000 / 1000000000.0
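And to map it onto a small inclusive range such as 1 to 14, the usual modulo trick applies (with the same slight modulo bias discussed elsewhere on this page):
-- 1..14 inclusive; ABS guards against the negative half of bigint.
SELECT ABS(CAST(CRYPT_GEN_RANDOM(8) AS bigint)) % 14 + 1 AS magic_number;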
The RAND() function will generate the same random number for every row if used in a table SELECT query. The same applies if you pass a seed to the RAND function. An alternative way to do it is using this:
SELECT ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS [RandomNumber]
Got the information from here, which explains the problem very well.
Do you have an integer value in each row that you could pass as a seed to the RAND function?
To get an integer between 1 and 14 I believe this would work:
FLOOR( RAND(<yourseed>) * 14) + 1
If you need to preserve your seed so that it generates the "same" random data every time, you can do the following:
1. Create a view that returns select rand()
if object_id('cr_sample_randView') is not null
begin
drop view cr_sample_randView
end
go
create view cr_sample_randView
as
select rand() as random_number
go
2. Create a UDF that selects the value from the view.
if object_id('cr_sample_fnPerRowRand') is not null
begin
drop function cr_sample_fnPerRowRand
end
go
create function cr_sample_fnPerRowRand()
returns float
as
begin
declare @returnValue float
select @returnValue = random_number from cr_sample_randView
return @returnValue
end
go
3. Before selecting your data, seed the rand() function, and then use the UDF in your select statement.
select rand(200); -- seed the rand() function
with cte(id) as
(select row_number() over(order by object_id) from sys.all_objects)
select
id,
dbo.cr_sample_fnPerRowRand()
from cte
where id <= 1000 -- limit the results to 1000 random numbers
select round(rand(checksum(newid()))*(10)+20,2)
Here the random number will come out between 20 and 30.
ROUND will give at most two decimal places.
If you want negative numbers you can do it with
select round(rand(checksum(newid()))*(10)-60,2)
Then the min value will be -60 and max will be -50.
Try using a seed value in RAND(seedInt). RAND() will only execute once per statement; that is why you see the same number each time.
If you don't need it to be an integer, but any random unique identifier, you can use newid()
SELECT table_name, newid() magic_number
FROM information_schema.tables
You would need to call RAND() for each row. Here is a good example
https://web.archive.org/web/20090216200320/http://dotnet.org.za/calmyourself/archive/2007/04/13/sql-rand-trap-same-value-per-row.aspx
The problem I sometimes have with the selected answer is that the distribution isn't always even. If you need a very even distribution of random 1 - 14 among lots of rows, you can do something like this (my database has 511 tables, so this works; if you have fewer rows than your random-number span, this does not work well):
SELECT table_name, ntile(14) over(order by newId()) randomNumber
FROM information_schema.tables
This kind of does the opposite of normal random solutions in the sense that it keeps the numbers sequenced and randomizes the other column.
Remember, I have 511 tables in my database (which is pertinent only b/c we're selecting from the information_schema). If I take the previous query and put it into a temp table #X, and then run this query on the resulting data:
select randomNumber, count(*) ct from #X
group by randomNumber
I get this result, showing me that my random number is VERY evenly distributed among the many rows. [screenshot of the counts]
It's as easy as:
DECLARE @rv FLOAT;
SELECT @rv = rand();
And this will put a random number between 0-99 into a table:
CREATE TABLE R
(
Number int
)
DECLARE @rv FLOAT;
SELECT @rv = rand();
INSERT INTO dbo.R
(Number)
values((@rv * 100));
SELECT * FROM R
select ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) as [Randomizer]
has always worked for me
Use newid()
select newid()
or possibly this
select binary_checksum(newid())
If you want to generate a random number between 1 and 14 inclusive:
SELECT CONVERT(int, RAND() * 14 + 1)
OR
SELECT ABS(CHECKSUM(NewId())) % 14 + 1
DROP VIEW IF EXISTS vwGetNewNumber;
GO
Create View vwGetNewNumber
as
Select CAST(RAND(CHECKSUM(NEWID())) * 62 as INT) + 1 as NextID,
'abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'as alpha_num;
---------------CTDE_GENERATE_PUBLIC_KEY -----------------
DROP FUNCTION IF EXISTS CTDE_GENERATE_PUBLIC_KEY;
GO
create function CTDE_GENERATE_PUBLIC_KEY()
RETURNS NVARCHAR(32)
AS
BEGIN
DECLARE @private_key NVARCHAR(32);
set @private_key = dbo.CTDE_GENERATE_32_BIT_KEY();
return @private_key;
END;
go
---------------CTDE_GENERATE_32_BIT_KEY -----------------
DROP FUNCTION IF EXISTS CTDE_GENERATE_32_BIT_KEY;
GO
CREATE function CTDE_GENERATE_32_BIT_KEY()
RETURNS NVARCHAR(32)
AS
BEGIN
DECLARE @public_key NVARCHAR(32);
DECLARE @alpha_num NVARCHAR(62);
DECLARE @start_index INT = 0;
DECLARE @i INT = 0;
select top 1 @alpha_num = alpha_num from vwGetNewNumber;
WHILE @i < 32
BEGIN
select top 1 @start_index = NextID from vwGetNewNumber;
set @public_key = concat(substring(@alpha_num, @start_index, 1), @public_key);
set @i = @i + 1;
END;
return @public_key;
END;
select dbo.CTDE_GENERATE_PUBLIC_KEY() public_key;
Update my_table set my_field = CEILING((RAND(CAST(NEWID() AS varbinary)) * 10))
Number between 1 and 10.
Try this:
SELECT RAND(convert(varbinary, newid()))*(b-a)+a magic_number
Where a is the lower number and b is the upper number
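For instance, with a = 1 and b = 14, this returns a float in the half-open range [1, 14); note the formula never reaches b itself, so widen it by one and truncate if you want whole numbers from 1 to 14:
-- Float in [1, 14):
SELECT RAND(convert(varbinary, newid())) * (14 - 1) + 1 AS magic_number
-- Whole numbers 1..14 (float-to-int conversion truncates):
SELECT CONVERT(int, RAND(convert(varbinary, newid())) * 14 + 1) AS magic_number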
If you need a specific number of random numbers, you can use a recursive CTE:
;WITH A AS (
SELECT 1 X, RAND() R
UNION ALL
SELECT X + 1, RAND(R*100000) --Change the seed
FROM A
WHERE X < 1000 --How many random numbers you need
)
SELECT
X
, RAND_BETWEEN_1_AND_14 = FLOOR(R * 14 + 1)
FROM A
OPTION (MAXRECURSION 0) --If you need more than 100 numbers
