Data eliminated while fetching from view in SQL Server

I am running the following query:
insert into [tbl_Readership] -- Record count 7812940
select * from [vw_PortalReadership] -- Record count 7812985
While running this query I get the following warning:
Null value is eliminated by an aggregate or other SET operation.
And I am losing some of my data. Any suggestions on how to track the records that were eliminated?

This is a standard ANSI warning, and an expected behavior.
Whichever columns are being aggregated in your view allow NULL values in the underlying table, and at least one of them contains NULLs.
Once you have the name(s) of the aggregated column(s) in the view, you can see which rows are NULL in your table with:
select *
from [tbl_Readership]
where [aggregated_column_name] is null
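For context, the warning itself is easy to reproduce with any aggregate over a column containing NULLs; a minimal sketch (the table variable is just an illustration):
declare @t table (val int);
insert into @t values (1), (NULL), (3);
select sum(val) from @t;
-- Warning: Null value is eliminated by an aggregate or other SET operation.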

Related

Snowflake INSERT inconsistent behavior using ORDER BY

I'm trying to insert records into a table in a certain (and simple) order, as the table has an IDENTITY column, e.g. MyTbl (ID INT IDENTITY(1,1), Sale_Date DATE, Product_ID INT, Sales INT).
The query is quite simple (this is just a simplified example):
INSERT INTO MyTbl (Sale_Date, Product_ID, Sales)
SELECT Sale_Date, Product_ID,COUNT(*) as sales
FROM Fact_tbl
GROUP BY Sale_Date,Product_ID
ORDER BY Sale_Date,Product_ID
The expected behavior is that when I select the highest values of the identity ID column, I should see the latest Sale_Date. However, this is not the case: the order of the ID column in the table has nothing to do with the dates. To make things even worse, if I recreate the table and run the same INSERT statement again and again, I get a different insertion order each time for the same data.
I get this behavior even if I wrap the query in a subquery, with the ORDER BY either inside or outside the wrapper.
I never saw this behavior in any other SQL platform. Is this the expected behavior in Snowflake?
It's expected. Let me explain the reason:
AUTOINCREMENT and IDENTITY are synonymous. If either is specified for a column, Snowflake utilizes a sequence to generate the values for the column.
https://docs.snowflake.com/en/sql-reference/sql/create-table.html#optional-parameters
There is no guarantee that values from a sequence are contiguous (gap-free) or that the sequence values are assigned in a particular order. There is, in fact, no way to assign values from a sequence to rows in a specified order other than to use single-row statements (this still provides no guarantee about gaps).
https://docs.snowflake.com/en/user-guide/querying-sequences.html#sequence-semantics
With Snowflake each INSERT has completely different order than the same INSERT that ran a couple of minutes ago
No, it should insert the data in the expected order because you use an ORDER BY clause. The issue is that the sequence values are not assigned in a particular order!
It's not easy to verify whether the data is sorted when you use INSERT ... SELECT ... ORDER BY, unless you have access to the underlying metadata. For testing, you may define clustering keys on a table into which you ingested "sorted" data.
Anyway, if you want to assign IDs matching the order when inserting bulk data, you need to use ROW_NUMBER instead of using an IDENTITY column or any sequence values.
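For example, a sketch using the question's MyTbl and Fact_tbl (assuming ID is a plain INT column, or that you supply explicit values to override the autoincrement default):
INSERT INTO MyTbl (ID, Sale_Date, Product_ID, Sales)
SELECT ROW_NUMBER() OVER (ORDER BY Sale_Date, Product_ID) AS ID,
       Sale_Date,
       Product_ID,
       COUNT(*) AS Sales
FROM Fact_tbl
GROUP BY Sale_Date, Product_ID;
This way the ID values follow the Sale_Date/Product_ID order by construction, regardless of how Snowflake assigns sequence values.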
This is not expected behavior in Snowflake. However, the way you insert data into your table (with the ORDER BY) doesn't affect the order in which the data is stored inside the table. You can leave the ORDER BY out of the INSERT, but you should include it in your SELECT when you query the table back.

COUNT(*) vs COUNT(column) Performance in Snowflake

Since Snowflake is a columnar database, does it impact performance when you use COUNT(*) vs COUNT(column)? And this is assuming that the column you're referencing does NOT have any NULLs.
As a_horse_with_no_name explained, these two functions are different, but you already mentioned that the column has no NULL values, so they should return the same result in your case.
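A minimal illustration of the difference (any column with NULLs will do):
select count(*), count(c)
from (select 1 as c union all select null) t;
-- returns 2, 1: COUNT(*) counts rows, COUNT(c) counts non-NULL values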
More importantly, Snowflake has a special optimization for the COUNT function. As far as I can see, it does NOT impact performance whether you use COUNT(*) or COUNT(column), even when the column contains NULL values! For both of them, Snowflake uses METADATA statistics, so it does not actually count rows.
You can test it with SNOWFLAKE_SAMPLE_DATA:
select count(*) from snowflake_sample_data.TPCH_SF1000.LINEITEM;
-- 5999989709
select count(L_ORDERKEY) from snowflake_sample_data.TPCH_SF1000.LINEITEM;
-- 5999989709
Both queries return a result immediately, although the table is about 170 GB in size and contains almost 6 billion rows.
I have to add this extra information because of the conversation between Niru and a_horse_with_no_name. a_horse_with_no_name said:
Even if all columns of a row are NULL, count(*) should include that row in the result. If it doesn't this is a clear violation of the SQL standard
I'm not sure about the SQL standard, but when you use COUNT(*), Snowflake doesn't check whether the columns are NULL or not (as you expected). I can see why Niru misunderstood the documents; the docs and the samples should be improved.
If you run my sample queries, you will see that they are completed in milliseconds. We are talking about counting almost 6 billion rows:
select count(*) from snowflake_sample_data.TPCH_SF1000.LINEITEM;
-- completes in milliseconds
select count(L_ORDERKEY) from snowflake_sample_data.TPCH_SF1000.LINEITEM;
-- completes in milliseconds
But if I make a small modification to the query, it takes about 3 minutes on the same warehouse (XSMALL):
select count(t.*) from snowflake_sample_data.TPCH_SF1000.LINEITEM t;
-- completes in 3 minutes!?
Here is the trick:
Alias.*, which indicates that the function should return the number of rows that do not contain any NULLs.
https://docs.snowflake.com/en/sql-reference/functions/count.html#arguments
Only if you use alias.* (as I used t.* in my sample) will Snowflake check whether all columns are NULL when producing the count. This is why it is much slower, and why there shouldn't be any performance difference between COUNT(XYZ) and COUNT(*) on a table.
Here is the Snowflake doc, hope it helps: https://docs.snowflake.com/en/sql-reference/functions/count.html
It does affect performance: COUNT(alias.*) checks every column in the row, whereas COUNT(column) does a NULL check only on that column.

SSRS column group

I'm trying to prepare a report like the one in the image below:
Report1
When I preview the report, I get three additional columns between the Reservations column and the first stock_description:
Report2
In the SELECT part of my T-SQL query I have:
sum(units),
sum(units_required),
sum(units_avaliable)
I know that T-SQL ignores NULL values. But when I change the query to:
sum(isnull (units,0)),
sum(isnull (units_required,0)),
sum(isnull (units_avaliable,0))
then I get a 0 value in those additional columns instead of NULL. When the query returns a value, it is where it should be: in one of the stock_description columns.
What should I do to delete those three columns between Reservations and stock_location?
It is because your data has NULL values in the Stock_Description field. You can put an additional condition in your T-SQL to exclude NULL stock descriptions.
SELECT ....
FROM ....
JOIN ....
WHERE .....
AND TableName.Stock_Description IS NOT NULL
But one thing you need to watch/test is what happens if there are units under a NULL Stock_Description.
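If those NULL-description rows do carry units you need to keep, one alternative (a sketch; the 'Unknown' label is just an illustration, and the FROM/JOIN clauses are elided as in the question) is to bucket them under a label instead of filtering them out:
SELECT ISNULL(Stock_Description, 'Unknown') AS Stock_Description,
       SUM(ISNULL(units, 0)) AS units
FROM ....
GROUP BY ISNULL(Stock_Description, 'Unknown')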
You can also handle this in SSRS by filtering either at the tablix or the dataset, but doing it in SQL itself is much better.

Inserting the values with condition

Using SQL Server 2005
When I insert a date, it should be compared against the dates already in the table.
If it equals an existing date, it should display an error message, and it should only allow the next date to be inserted.
For Example
Table1
Date
20091201
20091202
Insert into table1 values('20091202')
The above query should not be allowed, since it inserts a value that already exists.
Insert into table1 values('20091204')
The above query should also not be allowed, since it inserts a date with a long gap.
The query should only allow the next date.
It should allow neither the same date nor a date after a long gap.
How do I write an insert with this condition?
Is this possible in SQL or VB.Net?
I need SQL query or VB.Net code help.
You could use a where clause to ensure that the previous day is present in the table, and the current day is not:
insert into table1 ([dateColumn])
select '20091204'
where exists (
select * from table1 where [dateColumn] = dateadd(d,-1,'20091204')
)
and not exists (
select * from table1 where [dateColumn] = '20091204'
)
if @@rowcount <> 1
raiserror ('Oops', 16, 1)
If the insert succeeds, @@rowcount will be set to 1. Otherwise, an error is returned to VB using raiserror.
Why not just have a table of dates set up in advance, and update a row once you want to "insert" that date?
I'm not sure I understand the point of inserting a new date only once, and never allowing a gap. Could you describe your business problem in a little more detail?
Of course, you could use an IDENTITY column, and then have a computed column or a view that calculates the date from the number of days since some start date. But IDENTITY columns do not guarantee contiguity, nor do they even guarantee uniqueness on their own (unless you set up such a constraint separately).
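A minimal sketch of the table-of-dates idea above (the table name is hypothetical; datetime is used since this is SQL Server 2005):
create table DateSlots ([dateColumn] datetime not null primary key, used bit not null default 0)
-- "Inserting" a date becomes flagging the earliest unused slot:
update d
set used = 1
from DateSlots d
where d.[dateColumn] = (select min([dateColumn]) from DateSlots where used = 0)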
Preventing duplicates should be done at the table level with a unique constraint, not with a query. You can check for duplicates first so that you can handle errors in your own way (rather than let the engine raise an exception for you), but that shouldn't be your only check.
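For example (a sketch; table1 and dateColumn are the names from the question, the constraint name is arbitrary):
alter table table1
add constraint UQ_table1_date unique ([dateColumn])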
Sounds like your date field should just be unique with auto-increment.

How does sql server choose values in an update statement where there are multiple options?

I have an update statement in SQL server where there are four possible values that can be assigned based on the join. It appears that SQL has an algorithm for choosing one value over another, and I'm not sure how that algorithm works.
As an example, say there is a table called Source with two columns (Match and Data) structured as below:
(The match column contains only 1's, the Data column increments by 1 for every row)
Match   Data
------  ------
1       1
1       2
1       3
1       4
That table will update another table called Destination with the same two columns structured as below:
Match   Data
------  ------
1       NULL
If you update the Data field in Destination in the following way:
UPDATE Destination
SET Data = Source.Data
FROM Destination
INNER JOIN Source
    ON Destination.Match = Source.Match
there will be four possible values that Destination.Data could be set to after this query runs. I've found that changing the indexes on Source has an impact on which value Destination receives, and it appears that SQL Server just updates the Destination table with the first matching value it finds.
Is that accurate? Is it possible that SQL Server is updating Destination with every possible value sequentially, so I end up with the same kind of result as if it were updating with the first value it finds? It seems potentially problematic that it seemingly randomly chooses one row to update, rather than throwing an error in this situation.
Thank you.
P.S. I apologize for the poor formatting. Hopefully, the intent is clear.
It sets the Data column to all of the matching results in turn; which one you end up with after the query depends on the order in which the results are returned (you keep the one it sets last).
Since there's no ORDER BY clause, you're left with whatever order Sql Server comes up with. That will normally follow the physical order of the records on disk, and that in turn typically follows the clustered index for a table. But this order isn't set in stone, particularly when joins are involved. If a join matches on a column with an index other than the clustered index, it may well order the results based on that index instead. In the end, unless you give it an ORDER BY clause, Sql Server will return the results in whatever order it thinks it can do fastest.
You can play with this by turning your update query into a select query, so you can see the results. Notice which record comes first and which comes last in the source table for each record of the destination table. Compare that with the results of your update query. Then play with your indexes again and check the results once more to see what you get.
Of course, it can be tricky here because UPDATE statements are not allowed to use an ORDER BY clause, so regardless of what you find, you should really write the join so it matches the destination table 1:1. You may find the APPLY operator useful in achieving this goal, and you can use it to effectively JOIN to another table and guarantee the join only matches one record.
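A sketch of that APPLY approach, using the question's Source and Destination tables (the ORDER BY picks the largest Data value, so the match is deterministic):
UPDATE Destination
SET Data = s.Data
FROM Destination
CROSS APPLY
(
    SELECT TOP (1) Source.Data
    FROM Source
    WHERE Source.Match = Destination.Match
    ORDER BY Source.Data DESC
) AS s;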
The choice is not deterministic and it can be any of the source rows.
You can try
DECLARE @Source TABLE(Match INT, Data INT);
INSERT INTO @Source
VALUES
(1, 1),
(1, 2),
(1, 3),
(1, 4);
DECLARE @Destination TABLE(Match INT, Data INT);
INSERT INTO @Destination
VALUES
(1, NULL);
UPDATE Destination
SET Data = Source.Data
FROM @Destination Destination
INNER JOIN @Source Source
ON Destination.Match = Source.Match;
SELECT *
FROM @Destination;
And look at the actual execution plan. I see the following.
The output columns from @Destination are Bmk1000, Match. Bmk1000 is an internal row identifier (used here due to the lack of a clustered index in this example) and would be different for each row emitted from @Destination (if there were more than one).
The single row is then joined onto the four matching rows in @Source, and the resultant four rows are passed into a stream aggregate.
The stream aggregate groups by Bmk1000 and collapses the multiple matching rows down to one. The operation performed by this aggregate is ANY(@Source.[Data]).
The ANY aggregate is an internal aggregate function not available in TSQL itself. No guarantees are made about which of the four source rows will be chosen.
Finally the single row per group feeds into the UPDATE operator to update the row with whatever value the ANY aggregate returned.
If you want deterministic results then you can use an aggregate function yourself...
WITH GroupedSource AS
(
SELECT Match,
MAX(Data) AS Data
FROM @Source
GROUP BY Match
)
UPDATE Destination
SET Data = Source.Data
FROM @Destination Destination
INNER JOIN GroupedSource Source
ON Destination.Match = Source.Match;
Or use ROW_NUMBER...
WITH RankedSource AS
(
SELECT Match,
Data,
ROW_NUMBER() OVER (PARTITION BY Match ORDER BY Data DESC) AS RN
FROM @Source
)
UPDATE Destination
SET Data = Source.Data
FROM @Destination Destination
INNER JOIN RankedSource Source
ON Destination.Match = Source.Match
WHERE RN = 1;
The latter form is generally more useful: in the event you need to set multiple columns, it ensures that all of the values used come from the same source row. For the result to be deterministic, the combination of PARTITION BY and ORDER BY columns should be unique.
