I'm sorry for the bad title, but I don't know how to explain it briefly.
Consider a table with an 'ID' column that has a clustered index on ID descending. It contains another column in which NULL values exist only in the uppermost entries. Let's call this column 'NewValue'; it does not have an index.
Now I'm pulling a specific resultset and I can do so in two different ways:
First way:
Select Top (Select Count(*) + 1 from [testtable] where [NewValue] is Null) * from [testtable]
Second way:
Select * from [testtable] where [NewValue] is Null Union Select Top 1 * from [testtable] where [NewValue] is not Null
I don't understand execution plans well enough to know which is better myself, but I can see there is quite a difference: the second way does a sort that takes 53% of the plan, which doesn't happen in the first method. I'm using Microsoft SQL Server.
So can you tell me which is better? I need all rows where 'NewValue' is NULL plus the first row where it is not NULL.
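(For reference, a third possible formulation, shown only as a sketch, computes the NULL count with a window aggregate instead of a separate subquery; whether it performs better would need to be checked against the actual plans.)

-- Sketch only: all NULL rows plus the single newest non-NULL row,
-- using a windowed count instead of a separate COUNT subquery.
-- Only the relevant columns are selected here.
SELECT [ID], [NewValue]
FROM (
    SELECT [ID], [NewValue],
           ROW_NUMBER() OVER (
               ORDER BY CASE WHEN [NewValue] IS NULL THEN 0 ELSE 1 END, [ID] DESC
           ) AS rn,
           SUM(CASE WHEN [NewValue] IS NULL THEN 1 ELSE 0 END) OVER () AS NullCount
    FROM [testtable]
) AS x
WHERE rn <= NullCount + 1;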
I have a problem with a quite big table that has some null values in 3 columns: a datetime2 column and 2 float columns.
The nice simple query from a similar question deletes only the 2 rows where the datetime2 column is null, but nothing else (the same as with a lot of other variants):
DELETE MyTable
FROM MyTable
LEFT OUTER JOIN (
    SELECT MIN(RowId) AS RowId, allRemainingCols
    FROM MyTable
    GROUP BY allRemainingCols
) AS KeepRows ON
    MyTable.RowId = KeepRows.RowId
WHERE
    KeepRows.RowId IS NULL
It seems to work when the datetime2 column has no nulls??
There is a manual workaround, but is there any way to build such a query or procedure using T-SQL only?
SELECT id, remainingColumns
FROM table
ORDER BY remainingColumns
Compare all columns in Excel (15 in my case; I placed =ROW() in the first column as a check, put the formula next to the last column, and auto-filtered for TRUEs): =AND(B1=B2;C1=C2;D1=D2;E1=E2;F1=F2;G1=G2;H1=H2;I1=I2;J1=J2;K1=K2;L1=L2;M1=M2;N1=N2;O1=O2;P1=P2)
Or compare 3 rows like this and select all non-unique rows
=OR(
AND(B1=B2;C1=C2;D1=D2;E1=E2;F1=F2;G1=G2;H1=H2;I1=I2;J1=J2;K1=K2;L1=L2;M1=M2;N1=N2;O1=O2;P1=P2);
AND(B2=B3;C2=C3;D2=D3;E2=E3;F2=F3;G2=G3;H2=H3;I2=I3;J2=J3;K2=K3;L2=L3;M2=M3;N2=N3;O2=O3;P2=P3)
)
Quite a lot of work to find my particular data/answer...
Most of the float numbers were slightly different.
Hard to find, but a simple CAST(column AS binary) can show these invisible differences...
Like 96,6666666666667 vs 0x0000000000000000000000000000000000000000000040582AAAAAAAAAAD vs 0x0000000000000000000000000000000000000000000040582AAAAAAAAAAB etc.
And a value displayed as 96.6666666666667 can come back in yet another different way:
0x0000000000000000000000000000000000000F0D0001AB6A489F2D6F0300
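As a rough illustration of that CAST-to-binary check (FloatCol is a placeholder column name here):

-- Show the raw bit pattern next to the displayed value; rows that print
-- the same float can still differ in the last bits of FloatBits.
SELECT RowId,
       FloatCol,
       CAST(FloatCol AS binary(8)) AS FloatBits
FROM MyTable
ORDER BY FloatCol, FloatBits;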
I created a table, tblNewParts with 3 columns:
NewCustPart
AddedDate
Handled
and I am trying to FULL JOIN it to an existing table, tblPartsWorkedOn.
tblNewParts is defined to have Handled defaulted to 'N'...
SELECT *
FROM dbo.tblPartsWorkedOn AS BASE
FULL JOIN dbo.tblNewParts AS ADDON ON BASE.[CustPN] = ADDON.[NewCustPart]
WHERE ADDON.[Handled] IS NULL
ORDER BY [CustPN] DESC
And I want the field [Handled] to come back as 'N' instead of NULL when I run the query. The problem is that when there aren't any records in the new table, I get NULLs instead of 'N's.
I saw a SELECT CASE WHEN col1 IS NULL THEN defaultval ELSE col1 END as a mostly suitable answer from here. I am wondering if this will work in this instance, and how would I write that in T-SQL for SQL Server 2012? I need all of the columns from both tables, rather than just the one.
I'm making this a question, rather than a comment on the cited link, so as to not obscure the original link's question.
Thank you for helping!
Name the column (alias.column_name) in the select statement and use ISNULL(alias.column, 'N').
Thanks
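For example, a sketch of that suggestion applied to the query in the question (only the relevant columns are shown):

SELECT BASE.[CustPN],
       ISNULL(ADDON.[Handled], 'N') AS [Handled]  -- NULL from the outer join becomes 'N'
FROM dbo.tblPartsWorkedOn AS BASE
FULL JOIN dbo.tblNewParts AS ADDON
       ON BASE.[CustPN] = ADDON.[NewCustPart]
WHERE ADDON.[Handled] IS NULL
ORDER BY BASE.[CustPN] DESC;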
After many iterations I found the answer; it's kind of bulky, but here it is anyway. Synopsis:
Yes, the CASE statement does work, but it gives the output as an unnamed column. Also, in this instance to get all of the original columns AND the corrected column, I had to use SELECT *, CASE...END as [ColumnName].
But, here is the better solution, as it will place the information into the correct column, rather than adding a column to the end of the table and calling that column 'Unnamed Column'.
Select [ID], [Seq], [Shipped], [InternalPN], [CustPN], [Line], [Status],
    CASE WHEN ADDON.[NewCustPart] IS NULL THEN BASE.[CustPN]
         ELSE ADDON.[NewCustPart] END as [NewCustPart],
    GetDate() as [AddedDate],
    CASE WHEN ADDON.[Handled] IS NULL THEN 'N' ELSE ADDON.[Handled] END as [Handled]
from dbo.tblPartsWorkedOn as BASE
full join dbo.tblNewParts as ADDON ON BASE.[CustPN] = ADDON.[NewCustPart]
where ADDON.[Handled] = 'N' or ADDON.[Handled] is null
order by [NewCustPart] desc
This SQL places [CustPN] into [NewCustPart] when it is null, puts an 'N' into the [Handled] field when it is null, and assigns the current date to the [AddedDate] field. It also returns only records that have not been handled, so that you get the ones that need to be looked at, and it orders the resulting output by the [NewCustPart] field value.
Resulting Output looks something like this: (I shortened the DateTime for the output here.)
[ID] [SEQ] [Shipped] [InternalPN] [CustPN] [Status] [NewCustPart] [AddedDate] [Handled]
1 12 N 10012A 10012A UP 10012A 04/02/2016 N
...
Rather than with the nulls:
[ID] [SEQ] [Shipped] [InternalPN] [CustPN] [Status] [NewCustPart] [AddedDate] [Handled]
1 12 N 10012A 10012A UP NULL NULL NULL
...
I'm leaving this up, and answering it rather than deleting it, because I am fairly sure that someone else will eventually ask this same question. I think that having lots of examples showing how and why something is done is very helpful, as not everything can be generalized. Just some thoughts, and I hope this helps someone else!
I'm trying to randomly populate a column with values from another table using this statement:
UPDATE dbo.SERVICE_TICKET
SET Vehicle_Type = (SELECT TOP 1 [text]
FROM dbo.vehicle_typ
WHERE id = abs(checksum(NewID()))%21)
It seems to work fine; however, the value NULL is inserted into the column. How can I get rid of the NULL and only insert values from the table?
This can happen when you don't have an appropriate index on the ID column of your vehicle_typ table. Here's a smaller query that exhibits the same problem:
create table T (ID int null)
insert into T(ID) values (0),(1),(2),(3)
select top 1 * from T where ID = abs(checksum(NewID()))%3
Because there's no index on T, what happens is that SQL Server performs a table scan and then, for each row, attempts to satisfy the where clause. Which means that, for each row it evaluates abs(checksum(NewID()))%3 anew. You'll only get a result if, by chance, that expression produces, say, 1 when it's evaluated for the row with ID 1.
If possible (I don't know your table structure) I would first populate a column in SERVICE_TICKET with a random number between 0 and 20 and then perform this update using the already generated number. Otherwise, with the current query structure, you're always relying on SQL Server being clever enough to evaluate abs(checksum(NewID()))%21 only once for each outer row, which it may not always do (as you've already found out).
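A rough sketch of that two-step idea, assuming vehicle_typ.id values run from 0 to 20 and using a hypothetical scratch column:

-- Step 1: generate and store one random number per ticket
-- (RandomTypeId is a made-up helper column).
ALTER TABLE dbo.SERVICE_TICKET ADD RandomTypeId int NULL;
GO
UPDATE dbo.SERVICE_TICKET
SET RandomTypeId = ABS(CHECKSUM(NEWID())) % 21;

-- Step 2: use the already generated number to pick the vehicle type.
UPDATE st
SET st.Vehicle_Type = vt.[text]
FROM dbo.SERVICE_TICKET AS st
JOIN dbo.vehicle_typ AS vt ON vt.id = st.RandomTypeId;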
#Damien_The_Unbeliever explained why your query fails.
My first variant was not correct, because I didn't understand the problem in full.
You want to set each row in SERVICE_TICKET to a different random value from vehicle_typ.
To fix it, simply order by a random number rather than comparing a random number with ID, like this (it doesn't matter how many rows are in vehicle_typ, as long as there is at least one).
WITH
CTE
AS
(
SELECT
dbo.SERVICE_TICKET.Vehicle_Type,
CA.[text]
FROM
dbo.SERVICE_TICKET
CROSS APPLY
(
SELECT TOP 1 [text]
FROM dbo.vehicle_typ
ORDER BY NewID()
) AS CA
)
UPDATE CTE
SET Vehicle_Type = [text];
First we build a Common Table Expression; you can think of it as a temporary table. For each row in SERVICE_TICKET we pick one random row from vehicle_typ using CROSS APPLY. Then we UPDATE the original table with the chosen rows.
I have a T-SQL query that takes data from one table and copies it into a new table but only rows meeting a certain condition:
SELECT VibeFGEvents.*
INTO VibeFGEventsAfterStudyStart
FROM VibeFGEvents
LEFT OUTER JOIN VibeFGEventsStudyStart
ON
CHARINDEX(REPLACE(REPLACE(REPLACE(logName, 'MyVibe ', ''), ' new laptop', ''), ' old laptop', ''), excelFilename) > 0
AND VibeFGEventsStudyStart.MIN_TitleInstID <= VibeFGEvents.TitleInstID
AND VibeFGEventsStudyStart.MIN_WinInstId <= VibeFGEvents.WndInstID
WHERE VibeFGEventsStudyStart.excelFilename IS NOT NULL
ORDER BY VibeFGEvents.id
The code using the table relies on its order, and the copy above does not preserve the order I expected. I.e. the rows in the new table VibeFGEventsAfterStudyStart are not monotonically increasing in the VibeFGEventsAfterStudyStart.id column copied from VibeFGEvents.id.
In T-SQL, how might I preserve the ordering of the rows from VibeFGEvents in VibeFGEventsAfterStudyStart?
I know this is a bit old, but I needed to do something similar. I wanted to insert the contents of one table into another, but in a random order. I found that I could do this by using select top n and order by newid(). Without the 'top n', the ORDER BY was not honored and the second table had its rows in the same order as the first. With 'top n', however, the order (random in my case) was preserved. I used a value of 'n' that was greater than the number of rows. So my query was along the lines of:
insert Table2 (T2Col1, T2Col2)
select top 10000 T1Col1, T1Col2
from Table1
order by newid()
What for?
Point is – data in a table is not ordered. In SQL Server, the intrinsic storage order of a table is that of the clustered index (if one is defined).
The order in which data is inserted is basically "irrelevant". It is forgotten the moment the data is written into the table.
As such, nothing is gained even if you achieve this. If you need an order when dealing with data, you HAVE to put an ORDER BY clause on the SELECT that gets it. Anything else is random - i.e. the order you get data in is not determined and may change.
So it makes no sense to try to achieve a specific order on the insert.
SQL 101: sets have no order.
Just add top to your sql with a number that is greater than the actual number of rows:
SELECT top 25000 *
into spx_copy
from SPX
order by date
I've found a specific scenario where we want the new table to be created with its rows already ordered by the content of specific columns:
The number of rows is very big (from 200 to 2,000 million rows), so we are using SELECT INTO instead of CREATE TABLE + INSERT because it needs to be loaded as fast as possible (minimal logging). We have tested using trace flag 610 for loading an already created empty table with a clustered index, but it still takes longer than the following approach.
We need the data to be ordered by specific columns for query performance, so we are creating a CLUSTERED INDEX just after the table is loaded. We discarded creating a non-clustered index because it would need another read for the data that's not included in the ordered columns of the index, and we discarded creating a full-covering non-clustered index because it would practically double the amount of space needed to hold the table.
It happens that if you manage to somehow create the table with columns already "ordered", creating the clustered index (with the same order) takes a lot less time than when the data isn't ordered. And sometimes (you will have to test your case), ordering the rows in the SELECT INTO is faster than loading without order and creating the clustered index later.
The problem is that SQL Server 2012+ will ignore the ORDER BY column list when doing INSERT INTO or when doing SELECT INTO. It will consider the ORDER BY columns if you specify an IDENTITY column on the SELECT INTO or if the inserted table has an IDENTITY column, but just to determine the identity values and not the actual storage order in the underlying table. In this case, it's likely that the sort will happen but not guaranteed as it's highly dependent on the execution plan.
A trick we have found is that doing a SELECT INTO with the result of a UNION ALL makes the engine perform a SORT (not always an explicit SORT operator, sometimes a MERGE JOIN CONCATENATION, etc.) if you have an ORDER BY list. This way the select into already creates the new table in the order we are going to create the clustered index later and thus the index takes less time to create.
So you can rewrite this query:
SELECT
FirstColumn = T.FirstColumn,
SecondColumn = T.SecondColumn
INTO
#NewTable
FROM
VeryBigTable AS T
ORDER BY -- ORDER BY is ignored!
FirstColumn,
SecondColumn
to
SELECT
FirstColumn = T.FirstColumn,
SecondColumn = T.SecondColumn
INTO
#NewTable
FROM
VeryBigTable AS T
UNION ALL
-- A "fake" row to be deleted
SELECT
FirstColumn = 0,
SecondColumn = 0
ORDER BY
FirstColumn,
SecondColumn
We have used this trick a few times, but I can't guarantee it will always sort. I'm just posting this as a possible workaround in case someone has a similar scenario.
You cannot do this with ORDER BY but if you create a Clustered Index on VibeFGEvents.id after your SELECT INTO the table will be sorted on disk by VibeFGEvents.id.
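For instance, something along these lines (the index name is made up):

-- Creating the clustered index after the SELECT INTO physically orders
-- the table's pages by id.
CREATE CLUSTERED INDEX IX_VibeFGEventsAfterStudyStart_id
    ON VibeFGEventsAfterStudyStart (id);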
I've made a test on MS SQL 2012, and it clearly shows that insert into ... select ... order by makes sense. Here is what I did:
create table tmp1 (id int not null identity, name sysname);
create table tmp2 (id int not null identity, name sysname);
insert into tmp1 (name) values ('Apple');
insert into tmp1 (name) values ('Carrot');
insert into tmp1 (name) values ('Pineapple');
insert into tmp1 (name) values ('Orange');
insert into tmp1 (name) values ('Kiwi');
insert into tmp1 (name) values ('Ananas');
insert into tmp1 (name) values ('Banana');
insert into tmp1 (name) values ('Blackberry');
select * from tmp1 order by id;
And I got this list:
1 Apple
2 Carrot
3 Pineapple
4 Orange
5 Kiwi
6 Ananas
7 Banana
8 Blackberry
No surprises here. Then I made a copy from tmp1 to tmp2 this way:
insert into tmp2 (name)
select name
from tmp1
order by id;
select * from tmp2 order by id;
I got exactly the same result as before: Apple to Blackberry.
Now reverse the order to test it:
delete from tmp2;
insert into tmp2 (name)
select name
from tmp1
order by id desc;
select * from tmp2 order by id;
9 Blackberry
10 Banana
11 Ananas
12 Kiwi
13 Orange
14 Pineapple
15 Carrot
16 Apple
So the order in tmp2 is reversed too, which means the ORDER BY does make sense when there is an identity column in the target table!
The reason one would desire this (a specific order) is that you cannot define the order in a subquery. The idea is that if you create a table variable and then make a query from that table variable, you would think you would retain the order (say, to concatenate rows that must be in order, for example for XML or JSON), but you can't.
So, what do you do?
The answer is to force SQL to order it by using TOP in your select (just pick a number high enough to cover all your rows).
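A minimal sketch of that trick (the table variable and names are purely illustrative):

-- TOP with a number larger than the row count lets the ORDER BY be used
-- inside the subquery that feeds the XML concatenation.
DECLARE @rows TABLE (SortOrder int, Word varchar(20));
INSERT INTO @rows VALUES (2, 'world'), (1, 'hello'), (3, '!');

SELECT (
    SELECT TOP 1000000 ' ' + Word
    FROM @rows
    ORDER BY SortOrder
    FOR XML PATH('')
) AS ConcatenatedInOrder;  -- ' hello world !'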
I have run into the same issue, and one reason I have needed to preserve the order is when I try to use ROLLUP to get a weighted average based on the raw data rather than an average of what is in that column. For instance, say I want to see the average profit based on the number of units sold by four store locations. I can do this very easily by creating the equation Profit / #Units = Avg. Now I include a ROLLUP in my GROUP BY so that I can also see the average across all locations. Now I think to myself, "This is good info, but I want to see it in order of best average to worst and keep the overall at the bottom (or top) of the list." The ROLLUP will fail you in this, so you take a different approach.
Why not create row numbers based on the sequence (order) you need to preserve?
SELECT OrderBy = ROW_NUMBER() OVER(PARTITION BY 'field you want to count' ORDER BY 'field(s) you want to use ORDER BY')
, VibeFGEvents.*
FROM VibeFGEvents
LEFT OUTER JOIN VibeFGEventsStudyStart
ON
CHARINDEX(REPLACE(REPLACE(REPLACE(logName, 'MyVibe ', ''), ' new laptop', ''), ' old laptop', ''), excelFilename) > 0
AND VibeFGEventsStudyStart.MIN_TitleInstID <= VibeFGEvents.TitleInstID
AND VibeFGEventsStudyStart.MIN_WinInstId <= VibeFGEvents.WndInstID
WHERE VibeFGEventsStudyStart.excelFilename IS NOT NULL
Now you can use the OrderBy field from your table to set the order of values. I removed the ORDER BY statement from the query above since it does not affect how the data is loaded to the table.
I found this approach helpful to solve this problem:
WITH ordered as
(
SELECT TOP 1000
[Month]
FROM SourceTable
GROUP BY [Month]
ORDER BY [Month]
)
INSERT INTO DestinationTable (MonthStart)
SELECT * FROM ordered
Try using INSERT INTO instead of SELECT INTO
INSERT INTO VibeFGEventsAfterStudyStart
SELECT VibeFGEvents.*
FROM VibeFGEvents
LEFT OUTER JOIN VibeFGEventsStudyStart
ON
CHARINDEX(REPLACE(REPLACE(REPLACE(logName, 'MyVibe ', ''), ' new laptop', ''), ' old laptop', ''), excelFilename) > 0
AND VibeFGEventsStudyStart.MIN_TitleInstID <= VibeFGEvents.TitleInstID
AND VibeFGEventsStudyStart.MIN_WinInstId <= VibeFGEvents.WndInstID
WHERE VibeFGEventsStudyStart.excelFilename IS NOT NULL
ORDER BY VibeFGEvents.id
The following T-SQL statement does not work because [key] has to be unique and the MAX call in the SELECT statement only seems to be evaluated once. In other words, it is only incrementing the key value once and trying to insert that value over and over. Does anyone have a solution?
INSERT INTO [searchOC].[dbo].[searchTable]
([key],
dataVaultType,
dataVaultKey,
searchTerm)
SELECT (SELECT MAX([key]) + 1 FROM [searchOC].[dbo].[searchTable]) AS [key]
,'PERSON' as dataVaultType
,[student_id] as dataVaultKey
,[email] as searchTerm
FROM [JACOB].[myoc4Data].[dbo].[users]
WHERE [email] != '' AND [active] = '1'
AND [student_id] IN (SELECT [userID] FROM [JACOB].[myoc4Data].[dbo].[userRoles]
WHERE ([role] = 'STUDENT' OR [role] = 'FACULTY' OR [role] = 'STAFF'))
If you can make the key column an IDENTITY column that would probably be the easiest. That allows SQL Server to generate incremental values.
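For illustration, a rough sketch of that approach, assuming the table can be rebuilt (the non-key column types are guesses, since the real definition isn't shown):

-- [key] becomes an IDENTITY column, so SQL Server generates the values.
CREATE TABLE [searchOC].[dbo].[searchTable]
(
    [key]         int IDENTITY(1,1) PRIMARY KEY,
    dataVaultType varchar(50)   NOT NULL,
    dataVaultKey  varchar(100)  NOT NULL,
    searchTerm    nvarchar(255) NOT NULL
);

-- The INSERT then no longer needs to compute MAX([key]) + 1 at all.
INSERT INTO [searchOC].[dbo].[searchTable] (dataVaultType, dataVaultKey, searchTerm)
SELECT 'PERSON', [student_id], [email]
FROM [JACOB].[myoc4Data].[dbo].[users]
WHERE [email] != '' AND [active] = '1'
  AND [student_id] IN (SELECT [userID] FROM [JACOB].[myoc4Data].[dbo].[userRoles]
                       WHERE [role] IN ('STUDENT', 'FACULTY', 'STAFF'));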
Alternatively, if you are set on finding your own way to generate the key, then a blog post I wrote last month may help. Although it uses a composite key, it shows what you need to do to avoid this issue when inserting multiple rows in a single INSERT statement, safely generating a new value for each new row, and it is also safe across many simultaneous writers (which many examples don't deal with).
http://colinmackay.co.uk/2012/12/29/composite-primary-keys-including-identity-like-column/
Incidentally, the reason you get the same value for MAX([key]) on each row in your SELECT is that it is evaluated at the time the table is read. So for all the rows that the SELECT statement returns, MAX([key]) will always be the same. Unless you add some sort of GROUP BY clause, any MAX(columnName) in a SELECT statement will return the same value for each row returned.
Also, all aggregate functions are deterministic, so for each equivalent set of input it will always have the same output. So if your set of keys was 1, 5, 9 then it will always return 9.
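A tiny demonstration of that point, using a throwaway temp table:

-- The scalar subquery is evaluated against the table as it was read,
-- so every returned row sees the same MAX value.
CREATE TABLE #keys ([key] int);
INSERT INTO #keys ([key]) VALUES (1), (5), (9);

SELECT [key],
       (SELECT MAX([key]) FROM #keys) AS MaxKey  -- 9 on every row
FROM #keys;

DROP TABLE #keys;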