Odd behaviour of rowversion column in SQL Server

Is the use of the datatype rowversion in a table always reliable?
I have seen a problem when comparing a value to a column of this datatype (i.e. select * from MyTable where tStamp > @value), and on investigating this I have found some odd results. I am not sure if this is my misunderstanding, or a bigger issue that means the datatype cannot be relied on.
In the database is a table (MyTable) whose tStamp column is of the rowversion datatype; the minimum value in that column is 0x00000004355B68B7. There is a stored procedure which accepts an input parameter of type rowversion, but that in turn calls a procedure that uses an input parameter of type binary(8). So the first procedure is declared like
create procedure proc1 @inputTS rowversion
and the second is declared like
create procedure proc2 @inputTS binary(8)
I assumed initially that the problem I was seeing was due to the fact that the datatypes were different (when proc1 calls proc2 it passes the value @inputTS over to it), and when I changed proc2 to use rowversion, I got the expected results. But then I tried a number of tests, and what I saw from those was odd.
If I use the value 0x0000000070000000:
declare @testRV rowversion = 0x0000000070000000
declare @testBin binary(8) = @testRV
select * from MyTable where tStamp > @testRV -- First Query
select * from MyTable where tStamp > @testBin -- Second Query
I found that the first select returned no rows and the second returned all rows. They should both have returned all rows. If I changed the value to 0x000000006FFFFFFF or 0x0000000070000001, then both queries did return all rows.
If I use the value 0x0000000080000000, both queries return no rows. With 0x0000000441CED675 both returned rows, but with 0x0000000481CED675 neither returned rows. And with 0x0000000080000001, the first query returned all rows and the second returned none.
At one point I was thinking that internally, SQL Server was treating the values as two integers, i.e. 0x0000000080000000 was 0x00000000 and 0x80000000. Since 0x80000000 is one more than the maximum signed 32-bit integer and is treated as -2147483648, it looked like the problem arose where the "lower" integer was being treated as a negative. But that doesn't explain the behaviour when using 0x0000000070000000 or 0x0000000080000001.
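One way to take the datatype question out of the picture is to compare both sides as bigint (a diagnostic sketch; a rowversion casts directly to bigint, and all of the test values above fit in a signed 64-bit value):
declare @testRV rowversion = 0x0000000070000000
select * from MyTable
where cast(tStamp as bigint) > cast(@testRV as bigint) -- compares the raw 8 bytes as signed 64-bit values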
I have tried this on SQL Server versions from 2008 R2 to 2017 and got the same results each time.
Am I missing something about how rowversion can be used?

Related

Trying to read returned value from stored procedure always shows 0

In SQL Server, I have an existing Document_Add stored procedure that works, returns a good DocID value (in Visual Studio VB code), and cannot be changed. I am calling it like this in SQL:
EXEC @DocID = PADS2.dbo.Document_Add @SystemCode...
This runs the stored procedure, but @DocID is always 0 (whether declared as INT or varchar).
Expecting @DocID to be 2594631 or similar.
Any ideas?
I always say RTFM - read the fine manual provided by Microsoft.
https://learn.microsoft.com/en-us/sql/t-sql/statements/create-procedure-transact-sql?view=sql-server-ver16
1 - Example D shows how to use input parameters.
2 - Example F shows how to use output parameters.
Just remember, input and output parameters are scalar, not tables.
If you do need to pass in a table, look at Example G.
3 - Example G shows how to pass a table-valued parameter.
4 - Example B shows how to return multiple result sets.
If you execute two SELECT statements in the procedure, the caller will receive multiple result sets.
By default, a procedure returns a value of zero. It is typically used to indicate whether the procedure succeeded; you can return a non-zero value to indicate an error.
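To make the distinction concrete, here is a minimal sketch (the procedure name is made up; the value echoes the DocID above):
-- Hypothetical procedure illustrating return value vs. result set
CREATE PROCEDURE demo_Document_Add AS
BEGIN
    SELECT 2594631 AS DocID -- a result set: EXEC @x = ... does NOT capture this
    RETURN 0                -- the return value: this is what @x receives
END
GO
DECLARE @rc INT
EXEC @rc = demo_Document_Add -- @rc is 0; the DocID comes back as a result set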
Always read the docs!
<If this is not the place to follow up, please advise.>
Credit to rjs123431:
SQL SP that ends like this:
...
SELECT @Rslt = CONVERT(VARCHAR(2000), @NewId)
select @Rslt;
END
Returns a string '2680914' in VB:
sDocid = DA.ExecScalarString("EXEC PADS2.DBO.[Document_Add] 'C', ...")
But returns 0 when called from SQL like this:
EXEC @DocID = PADS2.dbo.Document_Add @SystemCode...
This returns correct DocID (2680914) when using a Temp Table like this:
INSERT INTO #TempTable EXEC PADS2.dbo.Document_Add @SystemCode...
set @sDocID = (SELECT DocID FROM #TempTable)

SQL Server: Error converting data type varchar to numeric (Strange Behaviour)

I'm working on a legacy system using SQL Server in 2000 compatibility mode. There's a stored procedure that selects from a query into a virtual table.
When I run the query, I get the following error:
Error converting data type varchar to numeric
which initially tells me that something stringy is trying to make its way into a numeric column.
To debug, I created the virtual table as a physical table and started eliminating each column.
The culprit is a column called accnum (which stores a bank account number and has a source data type of varchar(21)) that I'm trying to insert into a numeric(16,0) column, which obviously could cause issues.
So I made the accnum column varchar(21) as well in the physical table I created, and it imports 100%. I also added an additional column called accnum2 and made it numeric(16,0).
After the data was imported, I updated accnum2 to the value of accnum. Lo and behold, it updates without an error, yet it wouldn't work with an insert into...select query.
I have to work with the data types provided. Any ideas how I can get around this?
Can you try using a conversion in your insert statement, like this:
SELECT [accnum] = CASE ISNUMERIC(accnum)
                      WHEN 0 THEN NULL
                      ELSE CAST(accnum AS NUMERIC(16, 0))
                  END
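Note that ISNUMERIC is permissive: it returns 1 for strings such as '.', '$', '1e4' or '1,000' that still fail a cast to NUMERIC(16,0). A stricter sketch, assuming account numbers should be pure digit strings:
SELECT [accnum] = CASE WHEN accnum NOT LIKE '%[^0-9]%' -- digits only
                        AND accnum <> ''
                        AND LEN(accnum) <= 16          -- avoid overflowing NUMERIC(16,0)
                       THEN CAST(accnum AS NUMERIC(16, 0))
                       ELSE NULL
                  END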

How to query for rows containing <Unable to read data> in a column?

I have a SQL table in which some columns, when viewed in SQL Server Management Studio, contain <Unable to read data>. Does anyone know how to query for <Unable to read data>? I can individually modify the data in this column with update table set column = NULL where key = 'value', but how can I find whether additional rows exist with this bad data?
I would recommend against replacing the data. There is nothing wrong with it; it is just that SSMS cannot display it properly in the Edit panel. The data in the database itself is perfectly fine, from your description.
This script shows the problem:
create table test (id int not null identity(1,1) primary key,
large_value numeric(38,0));
go
insert into test (large_value) values (1);
insert into test (large_value) values (12345678901234567890123456789012345678);
insert into test (large_value) values (1234567890123456789012345678901234567);
insert into test (large_value) values (123456789012345678901234567890123456);
insert into test (large_value) values (12345678901234567890123456789012345);
insert into test (large_value) values (1234567890123456789012345678901234);
insert into test (large_value) values (123456789012345678901234567890123);
insert into test (large_value) values (12345678901234567890123456789012);
insert into test (large_value) values (1234567890123456789012345678901);
insert into test (large_value) values (123456789012345678901234567890);
insert into test (large_value) values (12345678901234567890123456789);
insert into test (large_value) values (NULL);
go
select * from test;
go
The SELECT will work fine, but the Edit Top 200 Rows view in Object Explorer will not: the large values display as <Unable to read data>.
There is a Connect Item for this issue. SSMS 2012 still exhibits the same problem.
If we look at the numeric and decimal storage details, we'll see that the problem occurs at a weird boundary: precision 29, which is actually not a SQL Server storage boundary (precision 28 is):
Precision  Storage bytes
1 - 9      5
10 - 19    9
20 - 28    13
29 - 38    17
If we check the .NET decimal precision documentation (SSMS is a managed application), we can quickly see where the crux of the issue is: precision is 28-29 significant digits. So the .NET decimal type cannot map high-precision (beyond 28-29 digits) SQL Server numeric/decimal types.
This will affect not only SSMS display, but your applications as well. Specialized applications like SSIS will use high precisions representation like DT_NUMERIC:
DT_NUMERIC An exact numeric value with a fixed precision and scale.
This data type is a 16-byte unsigned integer with a separate sign, a
scale of 0 - 38, and a maximum precision of 38.
Now back to your problem: you can discover the invalid entries by simply looking at the values. Knowing that the C# decimal range can accommodate values between approximately (-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28) (the range depends on the scale), you can search for values outside the range on each column (the actual values to search between will depend on the column scale). But that begs the question: what do you replace the data with?
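For example, against the test table above (scale 0, so the usable bound is .NET's Decimal.MaxValue), a sketch:
-- 79228162514264337593543950335 = 2^96 - 1 = System.Decimal.MaxValue
SELECT * FROM test
WHERE large_value > 79228162514264337593543950335
   OR large_value < -79228162514264337593543950335;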
I would recommend instead using dedicated import/export tools that are capable of handling high-precision numeric values. SSIS is the obvious candidate, but even the modest bcp.exe would fit the bill.
BTW, if your values are actually incorrect (i.e. true corruption), then I would recommend running DBCC CHECKTABLE (...) WITH DATA_PURITY:
DATA_PURITY
Causes DBCC CHECKDB to check the database for column values that are not valid or out-of-range. For example, DBCC CHECKDB detects columns with date and time values that are larger than or less than the acceptable range for the datetime data type; or decimal or approximate-numeric data type columns with scale or precision values that are not valid.
For databases created in SQL Server 2005 and later, column-value integrity checks are enabled by default and do not require the DATA_PURITY option. For databases upgraded from earlier versions of SQL Server, column-value checks are not enabled by default until DBCC CHECKDB WITH DATA_PURITY has been run error free on the database. After this, DBCC CHECKDB checks column-value integrity by default.
Q: How can this issue arise for a datetime column?
use tempdb;
go
create table test(d datetime)
insert into test (d) values (getdate())
select %%physloc%%, * from test;
-- Row is on page 0x9100000001000000
dbcc traceon(3604,-1);
dbcc page(2,1,145,3);
Memory Dump @0x000000003FA1A060
0000000000000000: 10000c00 75f9ff00 6aa00000 010000 ....uùÿ.j .....
Slot 0 Column 1 Offset 0x4 Length 8 Length (physical) 8
dbcc writepage(2,1,145, 100, 8, 0xFFFFFFFFFFFFFFFF) -- overwrite the 8 datetime bytes (offset 100 = 96-byte page header + 0x4 column offset)
dbcc checktable('test') with data_purity;
Msg 2570, Level 16, State 3, Line 2
Page (1:145), slot 0 in object ID 837578022, index ID 0, partition ID 2882303763115671552, alloc unit ID 2882303763120062464 (type "In-row data"). Column "d" value is out of range for data type "datetime". Update column to a legal value.
As suggested above, these errors usually occur when precision and scale are not preserved. If you're comfortable with SSIS, you can use it to find the rows which are corrupt. Taking the values which Martin Smith created:
CREATE TABLE T (ID int, C DECIMAL(38,0));
INSERT INTO T VALUES(1,9999999999999999999999999999999999999)
The above table reproduces the error. Here the first column represents the primary key. I inserted around 1000 rows, of which a few had corrupted values. Below is the SSIS package design:
In the Data Conversion component, I took the column C which had errors and tried to cast it to DECIMAL(38,0). Since a conversion or truncation error will occur, I redirected the error rows to an OLE DB Command which updates the table and sets the column to NULL:
UPDATE T
SET C = NULL
WHERE ID = ?
The values of C and ID are directed to the OLE DB Command. If there is no error, I just insert into a table (actually there is no need to do this). This will work if you have a primary key column in your table.
If there is an error in a datetime column, a SQL query can be written to verify the datetime values. Please go through the MSDN documentation on valid datetime values:
SELECT * FROM YourTable WHERE ISDATE(Col) != 1
I think you can fetch the data with a cursor. Please try a cursor query such as the one below (the cursor must be opened before fetching, and the variables declared to match your columns):
DECLARE @Column1 int, @Column2 varchar(100) -- adjust types to match MyTable
DECLARE VerifyCursor CURSOR FOR
SELECT Column1, Column2 FROM MyTable
OPEN VerifyCursor
WHILE 1=1 BEGIN
BEGIN TRY
FETCH NEXT FROM VerifyCursor INTO @Column1, @Column2
IF @@FETCH_STATUS <> 0 BREAK
INSERT INTO #MyTable2 (Column1, Column2) -- #MyTable2 must already exist
VALUES (@Column1, @Column2)
END TRY
BEGIN CATCH
-- a row that cannot be read is simply skipped
END CATCH
IF (@@FETCH_STATUS <> 0) BREAK
END
CLOSE VerifyCursor
DEALLOCATE VerifyCursor
Replacing the bad data is simple with an update:
UPDATE table SET column = NULL WHERE key_column = 'Some value'

How to INSERT EXEC when the result data from a CLR SP should be truncated?

I'm inserting the result set returned from a CLR stored procedure into a table variable. I get the error: "System.Data.SqlClient.SqlException: String or binary data would be truncated", because some strings' lengths in the result set exceed the varchar limit defined in the temporary table. The annoying thing is, truncation is exactly what I want!
So, how do I truncate (the strings in) the result set from the stored procedure, upon inserting it?
I'd rather not change the code of the CLR SP. The strings in the data that's being inserted are of arbitrary length.
I think I'd make the columns of the temp table large enough to accept the data, then truncate after the insert, e.g.,
UPDATE #YourTempTable
SET ColumnA = LEFT(ColumnA, 20),
ColumnB = LEFT(ColumnB, 50)
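Putting it together, a minimal end-to-end sketch (the procedure name, column names and target widths are placeholders):
-- Stage with plenty of room so INSERT ... EXEC cannot fail on truncation
CREATE TABLE #YourTempTable (ColumnA varchar(max), ColumnB varchar(max));
INSERT INTO #YourTempTable (ColumnA, ColumnB)
EXEC dbo.YourClrProc; -- hypothetical CLR stored procedure
-- Truncate in place, then move into the narrow table variable
UPDATE #YourTempTable
SET ColumnA = LEFT(ColumnA, 20),
    ColumnB = LEFT(ColumnB, 50);
DECLARE @Result TABLE (ColumnA varchar(20), ColumnB varchar(50));
INSERT INTO @Result (ColumnA, ColumnB)
SELECT ColumnA, ColumnB FROM #YourTempTable;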

SQL Server Query: Fast with Literal but Slow with Variable

I have a view that returns 2 ints from a table using a CTE. If I query the view like this it runs in less than a second
SELECT * FROM view1 WHERE ID = 1
However if I query the view like this it takes 4 seconds.
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id
I've checked the two query plans: the first query performs a clustered index seek on the main table, returning 1 record, then applies the rest of the view query to that result set, whereas the second query performs an index scan which returns about 3000 records rather than just the one I'm interested in, and then filters the result set later.
Is there anything obvious that I'm missing to get the second query to use an index seek rather than an index scan? I'm using SQL 2008 but anything I do needs to also run on SQL 2005. At first I thought it was some sort of parameter-sniffing problem, but I get the same results even if I clear the cache.
Probably it is because, in the parameter case, the optimizer cannot know that the value is not null, so it needs to create a plan that returns correct results even when it is. If you have SQL Server 2008 SP1 you can try adding OPTION (RECOMPILE) to the query.
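A sketch of that suggestion:
DECLARE @id INT = 1
-- RECOMPILE lets the optimizer see the current value of @id
SELECT * FROM View1 WHERE ID = @id OPTION (RECOMPILE)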
You could add an OPTIMIZE FOR hint to your query, e.g.
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id OPTION (OPTIMIZE FOR (@id = 1))
In my case, the DB table column type was defined as varchar while in the parameterized query the parameter type was defined as nvarchar; this introduced CONVERT_IMPLICIT in the actual execution plan to match the datatypes before comparing, and that was the culprit for the slow performance: 2 sec vs 11 sec. Just correcting the parameter type made the parameterized query as fast as the non-parameterized version.
One possible way to do that is to CAST the parameters, as such:
SELECT ...
FROM ...
WHERE name = CAST(:name AS varchar)
Hope this may help someone with similar issue.
I ran into this problem myself with a view that ran in under 10 ms with a direct assignment (WHERE UtilAcctId = 12345), but took over 100 times as long with a variable assignment (WHERE UtilAcctId = @UtilAcctId).
The execution-plan for the latter was no different than if I had run the view on the entire table.
My solution didn't require tons of indexes, optimizer hints, or a long statistics update.
Instead I converted the view into a user table function where the parameter was the value needed in the WHERE clause. In fact this WHERE clause was nested three queries deep, and it still worked, back at the under-10 ms speed.
Eventually I changed the parameter to be a TYPE that is a table of UtilAcctIds (int). Then I can limit the WHERE clause to a list from that table:
WHERE UtilAcctId IN (SELECT UtilAcctId FROM @parameterList)
This works even better. I think the user-table-functions are pre-compiled.
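A minimal sketch of the function-with-parameter approach described above (names are hypothetical):
-- An inline table-valued function; the predicate is expanded into the calling query
CREATE FUNCTION dbo.fn_View1 (@UtilAcctId int)
RETURNS TABLE
AS
RETURN (SELECT * FROM View1 WHERE UtilAcctId = @UtilAcctId);
GO
SELECT * FROM dbo.fn_View1(12345);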
When SQL Server optimizes the query plan for the query with the variable, it matches the available index against the column. In this case there was an index, so SQL Server figured it would just scan the index looking for the value. When it made the plan for the query with the column and a literal value, it could look at the statistics and the value to decide whether it should scan the index or whether a seek would be correct.
Using the optimize hint and a value tells SQL that “this is the value which will be used most of the time so optimize for this value” and a plan is stored as if this literal value was used. Using the optimize hint and the sub-hint of UNKNOWN tells SQL you do not know what the value will be, so SQL looks at the statistics for the column and decides what, seek or scan, will be best and makes the plan accordingly.
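The UNKNOWN variant looks like this (same query as above):
DECLARE @id INT = 1
-- Plan is built from the column's statistics rather than any specific value
SELECT * FROM View1 WHERE ID = @id OPTION (OPTIMIZE FOR (@id UNKNOWN))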
I know this is long since answered, but I came across this same issue and have a fairly simple solution that doesn't require hints, statistics-updates, additional indexes, forcing plans etc.
Based on the comment above that "the optimizer cannot know that the value is not null", I decided to move the values from a variable into a table:
Original Code:
declare @StartTime datetime2(0) = '10/23/2020 00:00:00'
declare @EndTime datetime2(0) = '10/23/2020 01:00:00'
SELECT * FROM ...
WHERE
C.CreateDtTm >= @StartTime
AND C.CreateDtTm < @EndTime
New Code:
declare @StartTime datetime2(0) = '10/23/2020 00:00:00'
declare @EndTime datetime2(0) = '10/23/2020 01:00:00'
CREATE TABLE #Times (StartTime datetime2(0) NOT NULL, EndTime datetime2(0) NOT NULL)
INSERT INTO #Times(StartTime, EndTime) VALUES(@StartTime, @EndTime)
SELECT * FROM ...
WHERE
C.CreateDtTm >= (SELECT MAX(StartTime) FROM #Times)
AND C.CreateDtTm < (SELECT MAX(EndTime) FROM #Times)
This performed instantly, as opposed to several minutes for the original code (obviously your results may vary).
I assume if I changed my data type in my main table to be NOT NULL, it would work as well, but I was not able to test this at this time due to system constraints.
Came across this same issue myself and it turned out to be a missing index involving a (left) join on the result of a subquery.
select *
from foo A
left outer join (
    select x, count(*) as cnt -- the derived table column needs a name
    from bar
    group by x
) B on A.x = B.x
Added an index named bar_x on bar.x.
Instead of this:
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id
Do this
DECLARE @sql varchar(max)
SET @sql = 'SELECT * FROM View1 WHERE ID = ' + CAST(@id as varchar)
EXEC (@sql)
Solves your problem: the literal value ends up in the statement text, so the optimizer compiles a plan for that specific value (at the cost of a new plan per value).
