Getting Random Number for each row - sql-server

I have a table with some names in it. For each row I want to generate a random name. I wrote the following query:
BEGIN transaction t1
Create table TestingName
(NameID int,
FirstName varchar(100),
LastName varchar(100)
)
INSERT INTO TestingName
SELECT 0,'SpongeBob','SquarePants'
UNION
SELECT 1, 'Bugs', 'Bunny'
UNION
SELECT 2, 'Homer', 'Simpson'
UNION
SELECT 3, 'Mickey', 'Mouse'
UNION
SELECT 4, 'Fred', 'Flintstone'
SELECT FirstName from TestingName
WHERE NameID = ABS(CHECKSUM(NEWID())) % 5
ROLLBACK Transaction t1
The problem is the "ABS(CHECKSUM(NEWID())) % 5" portion of this query sometime returns more than 1 row and sometimes returns 0 rows. I must be missing something but I can't see it.
If I change the query to
DECLARE @n int
set @n = ABS(CHECKSUM(NEWID())) % 5
SELECT FirstName from TestingName
WHERE NameID = @n
Then everything works and I get a random number per row.
If you paste the query above into SQL Server Management Studio and run the first query a bunch of times, you will see what I am trying to describe.
The final update query will look like this:
Update TableWithABunchOfNames
set [FName] = (SELECT FirstName from TestingName
WHERE NameID = ABS(CHECKSUM(NEWID())) % 5)
This does not work because sometimes I get more than 1 row and sometimes I get no rows.
What am I missing?

The problem is that you are generating a different random value for each row. This query is probably doing a full table scan, and the WHERE clause is evaluated for each row, so a different random number is generated each time.
So you might get a sequence of random numbers where none of the ids match, or a sequence where more than one matches. On average you'll have one match, but you don't want "on average", you want a guarantee.
This is when you want rand(), which produces only one random number per query:
SELECT FirstName
from TestingName
WHERE NameID = floor(rand() * 5);
This should get you one value.

Why not use top 1?
Select top 1 firstName
From testingName
Order by newId()
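If the goal is the per-row UPDATE from the question, note that a plain scalar subquery written this way may be evaluated only once for the whole statement. A common workaround is to correlate a CROSS APPLY to the outer row; this is only a sketch and not guaranteed on every plan, so check the execution plan to confirm the pick really happens once per row:
UPDATE t
SET FName = x.FirstName
FROM dbo.TableWithABunchOfNames AS t
CROSS APPLY (
    SELECT TOP (1) n.FirstName
    FROM dbo.TestingName AS n
    -- the reference to t.FName ties the subquery to the outer row; without it
    -- the optimizer may compute the TOP (1) only once for the whole update
    WHERE t.FName = t.FName OR t.FName IS NULL
    ORDER BY NEWID()
) AS x;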

This worked for me:
WITH
CTE
AS
(
SELECT
ID
,FName
,CAST(5 * (CAST(CRYPT_GEN_RANDOM(4) as int) / 4294967295.0 + 0.5) AS int) AS rr
FROM
dbo.TableWithABunchOfNames
)
,CTE_ForUpdate
AS
(
SELECT
CTE.ID
, CTE.FName
, dbo.TestingName.FirstName AS RandomName
FROM
CTE
LEFT JOIN dbo.TestingName ON dbo.TestingName.NameID = CTE.rr
)
UPDATE CTE_ForUpdate
SET FName = RandomName
;
This solution depends on how smart the optimizer is.
For example, if I use INNER JOIN instead of LEFT JOIN (which would otherwise be the correct choice for this query), the optimizer moves the calculation of the random numbers to after the join, and the end result is not what we expect.
I created a table TestingName with 5 rows as in the question and a table TableWithABunchOfNames with 100 rows.
With LEFT JOIN, the execution plan shows that the Compute Scalar that calculates the random numbers runs before the join loop, and all 100 rows are updated.
With INNER JOIN, the Compute Scalar that calculates the random numbers runs after the join loop, with an extra Filter. This query may not update all rows in TableWithABunchOfNames, and some rows may be updated several times. In my test the Filter left 102 rows and the Stream Aggregate left only 69 rows, which means that only 69 rows were eventually updated and some rows had multiple matches (102 - 69 = 33).
To guarantee that the result is what you expect, you should generate a random number for each row in TableWithABunchOfNames and explicitly remember the result, i.e. materialize the CTE shown above. Then use this temporary result to join with the table TestingName.
You can add a column to TableWithABunchOfNames to store the generated random numbers, or save the CTE to a temp table or table variable.
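For example, here is a minimal sketch of the table-variable route, reusing the ID and FName columns from above and the question's modulo expression (the name @RandomPick is arbitrary):
DECLARE @RandomPick TABLE (ID int, rr int);
-- materialize one random number per row so it cannot be recomputed during the join
INSERT INTO @RandomPick (ID, rr)
SELECT ID, ABS(CHECKSUM(NEWID())) % 5
FROM dbo.TableWithABunchOfNames;
UPDATE t
SET FName = n.FirstName
FROM dbo.TableWithABunchOfNames AS t
JOIN @RandomPick AS r ON r.ID = t.ID
JOIN dbo.TestingName AS n ON n.NameID = r.rr;
A temp table works the same way; the point is that the random numbers are computed and stored before the join happens.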

Related

Select one record, grab "n" records before it, and iterate through them to see if they're sequential

So, I'd like to grab a record from a table of results. Let's say that this is our "sample" record.
Once I have the sample record, I'd like to grab 10 results down the table, and check to see if the sample is sequential within this list of 10 results.
So, if our sample record was 124, I'd like to grab the 10 records before it, and check to see if they follow the sequence of 123, 122, 121, 120, etc.
Once I know that the sample result is in fact sequential down to 10 records, I would like to insert that record into a different table for keeping.
I am using SQL Server and T-SQL to do this, and pulling my hair out trying to do so. If anyone could offer any advice, I would GREATLY appreciate it. Here's what I have so far (with some data removed), with no idea if I'm on the right track.
declare @TestTable as table (a char(15), RowNumber integer)
declare @SampleNumber as char(15)
insert into @TestTable (a, RowNumber)
select top 10
[NUMBERS],
ROW_NUMBER() over (order by a) as RowNumber
from [TABLE]
where
[NUMBERS] like [CONDITIONS]
order by [NUMBERS] desc
With this, I'm trying to grab the result and also a set of row numbers, allowing me to iterate through them based on that row number. But, I'm getting an "Invalid column name 'a'" error when running. Feel free to forget about that error and write something totally new though, because I don't even know if I'm on the right track.
Again, any help would be appreciated.
I am not sure how well this would perform on a larger dataset, but as Peter Smith mentioned, this is possible by using lag to see what the value of the row a given number of rows prior was within an ordered window. Be aware that this runs over all rows in your table and returns all those that meet the criteria, rather than randomly sampling:
-- Create a not quite sequential dataset
declare @t table(n int);
with n as
(
select row_number() over (order by (select null)) as n
,abs(checksum(newid())) % 14 as r
from sys.all_objects
)
insert into @t
select n
from n
where r > 2;
-- Output the original dataset
select *
from @t;
-- Only return rows that come after a certain number of sequential numbers
declare @seq int = 10;
with l as
(
select n
,n - lag(n,@seq,null) over (order by n) as l
from @t
)
select n
from l
where l = @seq;
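To cover the last step in the question (keeping the qualifying record), the final select can be folded into an insert. KeeperTable is a hypothetical target table here, and @t and @seq are reused from above:
-- KeeperTable stands in for whatever table you keep results in
with l as
(
select n
,n - lag(n,@seq,null) over (order by n) as l
from @t
)
insert into KeeperTable (n)
select n
from l
where l = @seq;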

Get random data from SQL Server without performance impact

I need to select random rows from my SQL table. When I searched for this in Google, the suggestion was to ORDER BY NEWID(), but that reduces performance. Since my table has more than 2,000,000 rows of data, this solution does not suit me.
I tried this code to get random data:
SELECT TOP 10 *
FROM Table1
WHERE (ABS(CAST((BINARY_CHECKSUM(*) * RAND()) AS INT)) % 100) < 10
It also hurts performance sometimes.
Could you please suggest a good solution for getting random data from my table? I need a minimum number of rows from that table, around 30 rows for each request. I tried TABLESAMPLE to get the data, but it returns nothing once I add my WHERE condition, because it returns data on the basis of pages, not rows.
Try to calculate the random ids first and then use them to filter your big table.
Since your key is not an identity, you need to number the records, and this will affect performance.
Note that I have used DISTINCT to be sure to get different numbers.
EDIT: I have modified the query to use an arbitrary filter on your big table.
declare @n int = 30
;with
t as (
-- EXTRACT DATA AND NUMBER ROWS
select *, ROW_NUMBER() over (order by YourPrimaryKey) n
from YourBigTable t
-- SOME FILTER
WHERE 1=1 /* <-- PUT HERE YOUR COMPLEX FILTER LOGIC */
),
r as (
-- RANDOM NUMBERS BETWEEN 1 AND COUNT(*) OF FILTERED TABLE
select distinct top (@n) abs(CHECKSUM(NEWID()) % n)+1 rnd
from sysobjects s
cross join (SELECT MAX(n) n FROM t) t
)
select t.*
from t
join r on r.rnd = t.n
If your uniqueidentifier key is a random GUID (not generated with NEWSEQUENTIALID() or UuidCreateSequential), you can use the method below. This will use the clustered primary key index without sorting all rows.
SELECT t1.*
FROM (VALUES(
NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID())
,(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID())
,(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID()),(NEWID())) AS ThirtyKeys(ID)
CROSS APPLY(SELECT TOP (1) * FROM dbo.Table1 WHERE ID >= ThirtyKeys.ID) AS t1;

T-SQL Selecting TOP 1 In A Query With Aggregates/Groups

I'm still fairly new to SQL. This is a stripped-down version of the query I'm trying to run. The query is supposed to find customers with more than 3 cases and display either the top 1 case or all cases, but still show the overall count of cases per customer in each row in addition to all the case numbers.
The TOP 1 subquery approach didn't work, but is there another way to get the results I need? Hope that makes sense.
Here's the code:
SELECT t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName
,COUNT(t1.CaseNo) AS CasesCount
,(SELECT TOP 1 t1.CaseNo)
FROM MainDatabase t1
INNER JOIN CustomerDatabase t2
ON t1.StoreID = t2.StoreID
WHERE t1.SubmittedDate >= '01/01/2017' AND t1.SubmittedDate <= '05/31/2017'
GROUP BY t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName
HAVING COUNT (t1.CaseNo) >= 3
ORDER BY t1.StoreID, t1.PatronID
I would like it to look something like this, either one row with just the most recent case and detail or several rows showing all details of each case in addition to the store id, customer id, last name, first name, and case count.
Data Example
For these I usually like to make a temp table of aggregates:
DROP TABLE IF EXISTS #tmp;
CREATE TABLE #tmp (
CustomerID int NOT NULL DEFAULT 0,
case_count int NOT NULL DEFAULT 0,
case_max int NOT NULL DEFAULT 0
);
INSERT INTO #tmp
(CustomerID, case_count, case_max)
SELECT CustomerID, COUNT(t1.CaseNo), MAX(t1.CaseNo)
FROM MainDatabase t1
GROUP BY CustomerID;
Then you can join this #tmp table back to any other table where you want to display the number of cases or the max case number, and you can limit it to customers that have more than 3 cases with WHERE case_count > 3.
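A minimal sketch of that join, assuming the same table and column names as in the question:
SELECT t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName,
       a.case_count, a.case_max
FROM MainDatabase t1
INNER JOIN CustomerDatabase t2
    ON t1.StoreID = t2.StoreID
INNER JOIN #tmp a
    ON a.CustomerID = t1.CustomerID
WHERE a.case_count > 3;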

postgresql: Insert two values in table b if both values are not in table a

I'm doing an assignment where I am to make an SQL database of tournament results. Players can be added by name, and when the database has two or more players who have not already been assigned to a match, two players should be matched against each other.
For instance, if the tables are currently empty and I add Joe as a player, and then add James, the table has two players who are also not in the matches table, so a new row in the matches table is created with their p_ids as left_player_P_id and right_player_P_id.
I thought it would be a good idea to create a function and a trigger so that every time a row is added to the players table, the SQL code would run and create the row in matches as needed. I am open to other ways of doing this.
I've tried multiple different approaches, including "SQL - Insert if the number of rows is greater than" and "Using IF ELSE statement based on Count to execute different Insert statements", but I am now at a loss.
Problematic code:
This approach returns a syntax error.
IF ((select count(*) from players_not_in_any_matches) >= 2)
begin
insert into matches values (
(select p_id from players_not_in_any_matches limit 1),
(select p_id from players_not_in_any_matches limit 1 offset 1)
)
end;
Alternative approach (still problematic code):
This approach seems more promising (but less readable). However, it inserts even if there are no rows returned inside the where not exists.
insert into matches (left_player_p_id, right_player_p_id)
select
(select p_id from players_not_in_any_matches limit 1),
(select p_id from players_not_in_any_matches limit 1 offset 1)
where not exists (
select * from players_not_in_any_matches offset 2
);
Tables
CREATE TABLE players (
p_id serial PRIMARY KEY,
full_name text
);
CREATE TABLE matches(
left_player_P_id integer REFERENCES players,
right_player_P_id integer REFERENCES players,
winner integer REFERENCES players
);
Views
-- view for getting all players not currently assigned to a match
create view players_not_in_any_matches as
select * from players
where p_id not in (
select left_player_p_id from matches
) and
p_id not in (
select right_player_p_id from matches
);
Try:
insert into matches (left_player_p_id, right_player_p_id)
select p1.p_id, p2.p_id
from players p1
join players p2
on p1.p_id <> p2.p_id
and not exists(
select 1 from matches m
where p1.p_id in (m.left_player_p_id, m.right_player_p_id)
)
and not exists(
select 1 from matches m
where p2.p_id in (m.left_player_p_id, m.right_player_p_id)
)
limit 1
The anti joins (NOT EXISTS operators) in the query above could be simplified a bit using LEFT JOINs:
insert into matches (left_player_p_id, right_player_p_id)
select p1.p_id, p2.p_id
from players p1
join players p2
on p1.p_id <> p2.p_id
left join matches m1
on p1.p_id in (m1.left_player_p_id, m1.right_player_p_id)
left join matches m2
on p2.p_id in (m2.left_player_p_id, m2.right_player_p_id)
where m1.left_player_p_id is null
and m2.left_player_p_id is null
limit 1
but in my opinion the former query is more readable, while the latter one looks tricky.
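Since the question also mentions wiring this up with a trigger, here is a minimal sketch of how the first insert could run automatically after each new player. The function and trigger names are made up for illustration, and players_not_in_any_matches is the view defined in the question:
create or replace function pair_new_players() returns trigger as $$
begin
  -- pair up two currently unmatched players, if at least two exist
  insert into matches (left_player_p_id, right_player_p_id)
  select p1.p_id, p2.p_id
  from players_not_in_any_matches p1
  join players_not_in_any_matches p2 on p1.p_id < p2.p_id
  limit 1;
  return new;
end;
$$ language plpgsql;
create trigger trg_pair_new_players
after insert on players
for each row execute procedure pair_new_players();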

Retrieve Sorted Column Value in SQL Server

What I have:
I have a column:
ID  SerialNo
1  101
2  102
3  103
4  104
5  105
6  116
7  117
8  118
9  119
10 120
These are just 10 dummy rows; the actual table has over 100,000 rows.
What I want to get:
A method or formula, like a sorting technique, which returns the starting and ending element of the [SerialNo] column for every sub-series. For example:
Expected result: 101-105, 116-120
The comma separation in the above result is not important, only the starting and ending elements are important.
What I have tried:
I did it procedurally, by running a loop in which the starting and ending elements get stored in a table.
But due to the number of rows (over 100,000), the query execution takes around 2 minutes.
I have also searched for sorting techniques for SQL Server but found nothing, because rendering every row would take twice the time of a sorting algorithm.
Assuming every sub-series should contain 5 records, I got the expected result using the SQL below. I hope this helps.
DECLARE @subSeriesRange INT=5;
CREATE TABLE #Temp(ID INT,SerialNo INT);
INSERT INTO #Temp VALUES(1,101),
(2,102),
(3,103),
(4,104),
(5,105),
(6,116),
(7,117),
(8,118),
(9,119),
(10,120);
SELECT STUFF((SELECT CONCAT(CASE ID%@subSeriesRange WHEN 1 THEN ',' ELSE '-' END,SerialNo)
FROM #Temp
WHERE ID%@subSeriesRange = 1 OR ID%@subSeriesRange=0
ORDER BY ID
FOR XML PATH('')),1,1,''
);
DROP TABLE #Temp;
Just finding the start and end of each series is quite straightforward:
declare @t table (ID int not null, SerialNo int not null)
insert into @t(ID,SerialNo) values
(1 ,101), (2 ,102), (3 ,103),
(4 ,104), (5 ,105), (6 ,116),
(7 ,117), (8 ,118), (9 ,119),
(10,120)
;With Starts as (
select t1.SerialNo,ROW_NUMBER() OVER (ORDER BY t1.SerialNo) as rn
from
@t t1
left join
@t t1_no
on t1.SerialNo = t1_no.SerialNo + 1
where t1_no.ID is null
), Ends as (
select t1.SerialNo,ROW_NUMBER() OVER (ORDER BY t1.SerialNo) as rn
from
@t t1
left join
@t t1_no
on t1.SerialNo = t1_no.SerialNo - 1
where t1_no.ID is null
)
select
s.SerialNo as StartSerial,
e.SerialNo as EndSerial
from
Starts s
inner join
Ends e
on s.rn = e.rn
The logic being that a Start is a row where there is no row that has the SerialNo one less than the current row, and an End is a row where there is no row that has the SerialNo one more than the current row.
This may still perform poorly if there is no index on the SerialNo column.
Results:
StartSerial EndSerial
----------- -----------
101 105
116 120
Which is hopefully acceptable since you didn't seem to care what the specific results look like. It's also keeping things set-based.
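For comparison, the same start/end pairs can also be produced with the classic gaps-and-islands pattern; this is just a sketch reusing the @t table variable from above:
select min(SerialNo) as StartSerial,
       max(SerialNo) as EndSerial
from (
    -- rows in one consecutive run share the same (SerialNo - row_number) value
    select SerialNo,
           SerialNo - row_number() over (order by SerialNo) as grp
    from @t
) g
group by grp
order by StartSerial;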
