I have a table with a computed column that uses a scalar function:
CREATE FUNCTION [dbo].[ConvertToMillimetres]
(
    @fromUnitOfMeasure int,
    @value decimal(18,4)
)
RETURNS decimal(18,2)
WITH SCHEMABINDING
AS
BEGIN
    RETURN
        CASE
            WHEN @fromUnitOfMeasure = 1 THEN @value
            WHEN @fromUnitOfMeasure = 2 THEN @value * 100
            WHEN @fromUnitOfMeasure = 3 THEN @value * 1000
            WHEN @fromUnitOfMeasure = 4 THEN @value * 25.4
            ELSE @value
        END
END
GO
The table has this column
[LengthInMm] AS (CONVERT([decimal](18,2),[dbo].[ConvertToMillimetres]([LengthUnitOfMeasure],[Length]))) PERSISTED,
Assuming that [Length] on the table is 62.01249 and [LengthUnitOfMeasure] is 4, the computed LengthInMm value comes out as 1575.11, but when I run the function directly like
SELECT [dbo].[ConvertToMillimetres](4, 62.01249)
GO
it comes out as 1575.12.
The [Length] column is decimal(18,4), NULL.
Can anyone tell me why this happens?
I'm not sure this really counts as an answer, but you've perhaps got a problem with a specific version of SQL?
I just had a go at replicating it (on a local SQL 2014 install) and got the following:
create table dbo.Widgets (
Name varchar(20),
Length decimal(18,4),
LengthInMm AS [dbo].[ConvertToMillimetres] (4, Length) PERSISTED
)
insert into dbo.Widgets (Name, Length) values ('Thingy', 62.01249)
select * from dbo.Widgets
Which gives the result:
Name Length LengthInMm
-------------------- --------------------------------------- ---------------------------------------
Thingy 62.0125 1575.12
(1 row(s) affected)
Note that your definition uses [LengthInMm] AS (CONVERT([decimal](18,2),[dbo].[ConvertToMillimetres]([LengthUnitOfMeasure],[Length]))) PERSISTED, but that doesn't seem to make a difference to the result.
I also tried on my PC (Microsoft SQL Server 2012 - 11.0.2100.60 (X64)). Works fine:
CREATE TABLE dbo.data(
LengthUnitOfMeasure INT,
[Length] decimal(18,4),
[LengthInMm] AS (CONVERT([decimal](18,2),[dbo].[ConvertToMillimetres]([LengthUnitOfMeasure],[Length]))) PERSISTED
)
INSERT INTO dbo.data (LengthUnitOfMeasure, [Length])
SELECT 4, 62.01249
SELECT *
FROM dbo.data
/*
RESULT
LengthUnitOfMeasure Length LengthInMm
4 62.0125 1575.12
*/
I think I found the answer.
Let's see what you are saying:
There is a column with the decimal(18,4) data type.
There is a calculated column which depends on this column.
The result differs when you select the calculated field and when you call the function manually with the same value. (Right?)
Sorry, but the input parameters are not the same:
The column in the table is decimal(18,4). The value you provided manually is decimal(7,5) (62.01249).
Since the column in the table cannot store any value with a scale of 5, the provided values will not be equal. (Furthermore, there is no record in the table with the value 62.01249 in the Length column.)
What is the output when you query the [Length] column from the table? Is it 62.0124? If yes, then this is the answer. The results cannot be equal since the input values are not equal.
To be a bit more specific: 62.01249 will be cast (implicitly) to 62.0125.
ROUND(25.4 * 62.0124, 2) = 1575.11
ROUND(25.4 * 62.0125, 2) = 1575.12
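To see the implicit cast and the two roundings side by side (a minimal sketch, assuming the function defined above exists):
-- 62.01249 cannot be stored in decimal(18,4); it is rounded to 62.0125
SELECT CAST(62.01249 AS decimal(18,4))              -- 62.0125
-- the two candidate stored values give different results once rounded to 2 decimals
SELECT [dbo].[ConvertToMillimetres](4, 62.0124)     -- 1575.11
SELECT [dbo].[ConvertToMillimetres](4, 62.0125)     -- 1575.12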
EDIT: Everybody who tried to rebuild the schema made the same mistake (including me). When we (blindly) inserted the values from the original question into our instances, we inserted 62.01249 into the Length column -> the same implicit cast occurred, so we have the value 62.0125 in our tables.
For example, there is a table with columns:
int type
int number
int value
How can I make it so that, when inserting a value into the table, numbering starts from 1 separately for each type?
type 1 => number 1,2,3...
type 2 => number 1,2,3...
That is, it will look like this.
type  number  value
1     1       -
1     2       -
1     3       -
2     1       -
1     4       -
2     2       -
3     1       -
6     1       -
1     5       -
2     3       -
6     2       -
Special thanks to @Larnu.
As a result, in my case, the best solution would be to create a table for each type.
As I mentioned in the comments, neither IDENTITY nor SEQUENCE supports using another column to denote what "identity set" they should use. You can have multiple SEQUENCEs which you could use for a single table; however, this doesn't scale. If you are limited to 2 or 3 types, for example, you might choose to create 3 SEQUENCE objects and then use a stored procedure to handle your INSERT statements. Then, when a user/application wants to INSERT data, they call the procedure, and that procedure has logic to use the right SEQUENCE based on the value of the parameter for the type column.
As mentioned, however, this doesn't scale well. If you have an indeterminate number of values of type, then you can't easily handle getting the right SEQUENCE, and handling new values for type would be difficult too. In this case, you would be better off using an IDENTITY and then a VIEW. The VIEW will use ROW_NUMBER to create your identifier, while the IDENTITY gives you your always-incrementing value.
CREATE TABLE dbo.YourTable (id int IDENTITY(1,1),
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
GO
CREATE VIEW dbo.YourTableView AS
SELECT ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value]
FROM dbo.YourTable;
Then, instead, you query the VIEW, not the TABLE.
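For instance, a quick illustration (the sample rows here are mine, purely illustrative):
INSERT INTO dbo.YourTable ([type], number, [value])
VALUES (1, NULL, 10), (1, NULL, 20), (2, NULL, 30);

SELECT Identifier, [type], number, [value]
FROM dbo.YourTableView
ORDER BY [type], Identifier;
-- the two type 1 rows get Identifier 1 and 2; the type 2 row starts again at 1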
If you need the column (which I named Identifier) to stay consistent, you'll also need to ensure rows can't be DELETEd from the table; most likely by adding an IsDeleted column to the table, defined as a bit (0 for not deleted, 1 for deleted - see the sketch after the view below), and then filtering those rows out in the VIEW:
CREATE VIEW dbo.YourTableView AS
WITH CTE AS(
SELECT id,
ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value],
IsDeleted
FROM dbo.YourTable)
SELECT id,
Identifier,
[type],
number,
[value]
FROM CTE
WHERE IsDeleted = 0;
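This assumes the base table has been extended with that IsDeleted flag first; a minimal sketch of the change (the constraint name is mine, the default keeps existing INSERTs working):
ALTER TABLE dbo.YourTable
    ADD IsDeleted bit NOT NULL
        CONSTRAINT DF_YourTable_IsDeleted DEFAULT (0);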
You could, if you wanted, even handle the DELETEs on the VIEW (the INSERT and UPDATEs would be handled implicitly, as it's an updatable VIEW):
CREATE TRIGGER trg_YourTableView_Delete ON dbo.YourTableView
INSTEAD OF DELETE AS
BEGIN
SET NOCOUNT ON;
UPDATE YT
SET IsDeleted = 1
FROM dbo.YourTable YT
JOIN deleted d ON d.id = YT.id;
END;
GO
db<>fiddle
For completion, if you wanted to use different SEQUENCE objects, it would look like this. Notice that this does not scale easily: I have to CREATE a SEQUENCE for every value of type. As such, for a small and known range of values this would be a solution, but if you are going to end up with more values for type, or already have a large range, this ends up not being feasible pretty quickly:
CREATE TABLE dbo.YourTable (identifier int NOT NULL,
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
CREATE SEQUENCE dbo.YourTable_Type1
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type2
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type3
START WITH 1 INCREMENT BY 1;
GO
CREATE PROC dbo.Insert_YourTable @Type int, @Number int = NULL, @Value int AS
BEGIN
    DECLARE @Identifier int;
    IF @Type = 1
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type1;
    IF @Type = 2
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type2;
    IF @Type = 3
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type3;
    INSERT INTO dbo.YourTable (identifier, [type], number, [value])
    VALUES (@Identifier, @Type, @Number, @Value);
END;
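Calling the procedure then looks like this (the values are just illustrative):
EXEC dbo.Insert_YourTable @Type = 1, @Value = 10;   -- identifier 1 for type 1
EXEC dbo.Insert_YourTable @Type = 1, @Value = 20;   -- identifier 2 for type 1
EXEC dbo.Insert_YourTable @Type = 2, @Value = 30;   -- identifier 1 for type 2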
Yesterday a report suddenly came in that someone could not retrieve some data anymore because this error appeared:
Msg 2628, Level 16, State 1, Line 57
String or binary data would be truncated in table 'tempdb.dbo.#BC6D141E', column 'string_2'. Truncated value: '!012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678'.
I was unable to create a repro without our tables. This is the closest I can get:
-- Create temporary table for results
DECLARE @results TABLE (
    string_1 nvarchar(100) NOT NULL,
    string_2 nvarchar(100) NOT NULL
);
CREATE TABLE #table (
T_ID BIGINT NULL,
T_STRING NVARCHAR(1000) NOT NULL
);
INSERT INTO #table VALUES
(NULL, '0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'),
(NULL, '!0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789!');
WITH abc AS
(
SELECT
'' AS STRING_1,
t.T_STRING AS STRING_2
FROM
UT
INNER JOIN UTT ON UTT.UT_ID = UT.UT_ID
INNER JOIN MV ON MV.UTT_ID = UTT.UTT_ID
INNER JOIN OT ON OT.OT_ID = MV.OT_ID
INNER JOIN #table AS T ON T.T_ID = OT.T_ID -- this will never get hit because T_ID of #table is NULL
)
INSERT INTO @results
SELECT STRING_1, STRING_2 FROM abc
ORDER BY LEN(STRING_2) DESC
DROP TABLE #table;
As you can see, the join on #table cannot yield any results because all T_ID values are NULL; nevertheless, I am getting the error mentioned above. The result set is empty.
That would be okay if a text with more than 100 characters were in the result set, but that is not the case because it is empty. If I remove the INSERT INTO @results and display the results, they do not contain any text with more than 100 characters. The ORDER BY was only used to determine the faulty text value (with the original data).
When I use SELECT STRING_1, LEFT(STRING_2, 100) FROM abc, it does work, but it does not contain the text that is supposedly being truncated either.
Therefore: what am I missing? Is this a bug in SQL Server?
"-- this will never get hit" is a bad assumption. It is well known and documented that SQL Server may try to evaluate parts of your query before it's obvious that the result is impossible.
A much simpler repro (from this post and this db<>fiddle):
CREATE TABLE dbo.t1(id int NOT NULL, s varchar(5) NOT NULL);
CREATE TABLE dbo.t2(id int NOT NULL);
INSERT dbo.t1 (id, s) VALUES (1, 'l=3'), (2, 'len=5'), (3, 'l=3');
INSERT dbo.t2 (id) VALUES (1), (3), (4), (5);
GO
DECLARE @t table(dest varchar(3) NOT NULL);
INSERT @t(dest) SELECT t1.s
FROM dbo.t1
INNER JOIN dbo.t2 ON t1.id = t2.id;
Result:
Msg 2628, Level 16, State 1
String or binary data would be truncated in table 'tempdb.dbo.#AC65D70E', column 'dest'. Truncated value: 'len'.
While we should have only retrieved rows with values that fit in the destination column (id is 1 or 3, since those are the only two rows that match the join criteria), the error message indicates that the row where id is 2 was also returned, even though we know it couldn't possibly have been.
Here's the estimated plan:
This shows that SQL Server expected to convert all of the values in t1 before the filter eliminated the longer ones. And it's very difficult to predict or control whether SQL Server will process your query in an order you don't expect - you can try query hints that attempt to force the join order or steer away from hash joins, but those can cause other, more severe problems later.
The best fix is to size the temp table to match the source (in other words, make it large enough to fit any value from the source). The blog post and db<>fiddle explain some other ways to work around the issue, but declaring columns to be wide enough is the simplest and least intrusive.
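Applied to the simple repro above, that just means declaring dest with the same size as the source column:
DECLARE @t table(dest varchar(5) NOT NULL);   -- matches t1.s, so truncation can't happen
INSERT @t(dest) SELECT t1.s
FROM dbo.t1
INNER JOIN dbo.t2 ON t1.id = t2.id;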
I am trying to create a view that filters a table and includes a column converted to a different type in the select list. The view's filter excludes rows in which that column cannot be converted to the target type. When I then select rows from this view and filter on the converted column, I always get the error Conversion failed when converting the nvarchar value '2aaa' to data type int
SQL Fiddle
MS SQL Server 2008 Schema Setup:
create table _tmp_aaa (id int identity(1, 1), value nvarchar(max) not null)
go
insert _tmp_aaa (value) values ('1111'), ('11'), ('2aaa')
go
create view _tmp_v_aaa
as
select id, cast(value as int) as value from _tmp_aaa where value like '1%'
go
Query 1:
select * from _tmp_v_aaa where value = 11
Are there any workarounds?
Add ISNUMERIC to your view to check whether the string is a numeric value:
CREATE VIEW _tmp_v_aaa
AS
SELECT
id,
[value] = CAST((CASE WHEN ISNUMERIC([value]) = 1 THEN [value] ELSE NULL END) AS INT)
FROM _tmp_aaa
WHERE [value] LIKE '1%'
AND ISNUMERIC([value]) = 1
I tried some tricks... Obviously the optimizer pushes your WHERE criterion down to a point where the value is not yet transformed. This is one problem that can be solved with a multi-statement table-valued function. Its biggest disadvantage is an advantage in this case: the optimizer will not look into it, but just takes its result "as is":
create function fn_tmp_v_aaa()
returns @tbl table(id INT, value INT)
as
BEGIN
    INSERT INTO @tbl
    select id, cast(value as int) as value from _tmp_aaa where value like '1%'
    RETURN;
END
select * from dbo.fn_tmp_v_aaa() where value=11;
If you look at the execution plan, the predicates are pushed down to the table.
Your query then gets translated to something like:
select id, cast(value as int) as value
from _tmp_aaa
where CONVERT(INT, value,0) like '1%'
AND CONVERT(INT, value,0) = CONVERT(INT, 11,0)
Now if you run this query you will get the same error you get when you query against the view.
Conversion failed when converting the nvarchar value '2aaa' to data type int.
When the predicate CONVERT(INT, value, 0) like '1%' is evaluated, you have INT on one side of the expression and varchar on the other. INT has the higher precedence, so SQL Server tries to convert the whole expression to INT and fails, hence the error message.
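You can see the same failure in isolation; this is the conversion the pushed-down predicate attempts on the '2aaa' row before the view's filter has removed it:
SELECT CONVERT(INT, N'2aaa');
-- Conversion failed when converting the nvarchar value '2aaa' to data type int.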
I am trying to create a T-SQL function against Northwind that returns a new table containing ProductID, ProductName, UnitsInStock, and a new column indicating whether there are more UnitsInStock than the function parameter.
Example: suppose there is a table of 2 products. The first has 10 units in stock, the second has 5. So the function with parameter 6 should return:
1, Product1, 10, YES
2, Product2, 5, NO
Here's my non-working code so far :(
CREATE FUNCTION dbo.ProductsReorder
(
    @minValue int
)
RETURNS @tabvar TABLE (int _ProductID, nvarchar _ProductName, int _UnitsInStock, nvarchar _Reorder)
AS
BEGIN
    INSERT INTO @tabvar
    SELECT ProductID, ProductName, UnitsInStock, Reorder =
        CASE
            WHEN UnitsInStock > @minValue THEN "YES"
            ELSE "NO"
        END
    FROM Products
    RETURN
END
T-SQL gives me this not really helpful error: "Column, parameter, or variable #1: Cannot find data type _ProductID". I googled, but I found a gazillion different issues with that message.
I don't know whether it's good to use CASE here; I have a bit of an Oracle background and the DECODE function was great for these things.
It's an easy answer -- especially as you are coming from Oracle: the table definition in your function is the wrong way round.
Replace:
@tabvar TABLE (int _ProductID, nvarchar _ProductName, int _UnitsInStock, nvarchar _Reorder)
with something like
@tabvar TABLE ([_ProductID] INT, [_ProductName] NVARCHAR(50), [_UnitsInStock] INT, [_Reorder] NVARCHAR(50))
In SQL Server, the types come after the column names.
In your table definition, you should put the column name first, then the data type. For example
_UnitsInStock int, ...
Also, the NVARCHAR data type needs a length value.
_ProductName nvarchar(20)
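Putting both answers together, a corrected version of the function might look something like this (the column lengths are my guesses; note also that the string literals need single quotes, since "YES" would otherwise be treated as an identifier):
CREATE FUNCTION dbo.ProductsReorder
(
    @minValue int
)
RETURNS @tabvar TABLE (_ProductID int, _ProductName nvarchar(40), _UnitsInStock int, _Reorder nvarchar(3))
AS
BEGIN
    INSERT INTO @tabvar
    SELECT ProductID, ProductName, UnitsInStock,
        CASE
            WHEN UnitsInStock > @minValue THEN 'YES'
            ELSE 'NO'
        END
    FROM Products
    RETURN
END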
I believe I'm having a problem with the way SQL Server 2000 handles hexadecimal numbers.
If I do a
select * from table where [timestamp] = 44731446
the row that it returns shows the timestamp as 0x0000000202AA8C36
Equally in another table if I
select * from table2 where [timestamp] = 44731446
the row that it returns shows the timestamp as 0x0000000002AA8C36 (notice the missing 2)
MS Calc tells me that the first timestamp = 8634666038 in decimal and the second timestamp = 44731446 in decimal, which matches my original query on both tables.
So why is SQL Server returning a different number, yet successfully querying it? I believe this is the root of an update problem I'm having where the row won't update.
Long story short, the binary to integer conversion is truncating data:
select cast(0x0000000202AA8C36 as int)
A TIMESTAMP column is really BINARY(8), so your query is comparing a BINARY(8) value to an INT value; because INT has the higher precedence, MSSQL converts the BINARY(8) value to INT before comparing them.
But, 0x0000000202AA8C36 (or 8634666038) is too big to be represented as INT, so MSSQL has to truncate it first, and it truncates to the same value as 0x0000000002AA8C36. This might be a little clearer:
create table dbo.t (tstamp binary(8) not null)
go
insert into dbo.t values (0x0000000202AA8C36)
insert into dbo.t values (0x0000000002AA8C36)
go
-- returns 2 rows
select * from dbo.t where tstamp = 44731446
-- returns 1 row
select * from dbo.t where tstamp = cast(44731446 as bigint)
go
drop table dbo.t
go
According to Books Online (for 2008, I don't have 2000):
When [non-string data types] are converted to binary or varbinary, the data is padded or truncated on the left. Padding is achieved by using hexadecimal zeros.
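That left truncation is why both hex values compare equal to the INT value; only the rightmost 4 bytes survive the conversion, which a quick check confirms:
-- both keep only the low 4 bytes, 0x02AA8C36 = 44731446
select cast(0x0000000202AA8C36 as int)   -- 44731446
select cast(0x0000000002AA8C36 as int)   -- 44731446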