MS SQL Server executes 'THEN' before 'WHEN' in CASE

When I try to select:
select
case when (isnumeric(SUBSTRING([VZWECK2],1,9)) = 1)
then CONVERT(decimal,SUBSTRING([VZWECK2],1,9))
else null
end as [NUM]
from table
SQL Server gives me:
Msg 8114, Level 16, State 5, Line 2
Error converting data type varchar to numeric.
[VZWECK2] is a char(27). Is this a known bug? It seems to me that SQL Server executes the CONVERT before it evaluates the CASE, which defeats the purpose of my SELECT. I know there are values that are not numeric, obviously, which is why I need the CASE expression to weed them out.
For some reason, selecting
select
case when (isnumeric(SUBSTRING([VZWECK2],1,9)) = 1)
then 99
else null
end as [NUM]
from table
yields no errors and behaves as expected.

The problem is that ISNUMERIC is very forgiving: ISNUMERIC returning 1 is unfortunately no guarantee that CONVERT will succeed. This is why SQL Server 2012 and later introduced TRY_CAST and TRY_CONVERT.
If you are converting whole numbers, a more reliable check is to make sure the string consists of only digits with NOT LIKE '%[^0-9]%' (that is, it must not contain a non-digit anywhere). This is too restrictive for some formats (like floating point) but for integers it works nicely.
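Both approaches can be sketched against the original query (a sketch, not a drop-in fix; the table and column names come from the question, and decimal(18, 0) matches the question's bare CONVERT(decimal, ...)):

```sql
-- SQL Server 2012+: TRY_CONVERT returns NULL instead of raising an error
SELECT TRY_CONVERT(decimal(18, 0), SUBSTRING([VZWECK2], 1, 9)) AS [NUM]
FROM [table];

-- Pre-2012: accept only strings that are all digits.
-- The <> '' guard matters: '' passes the NOT LIKE test,
-- but converting '' to decimal raises an error.
SELECT CASE
         WHEN LTRIM(RTRIM(SUBSTRING([VZWECK2], 1, 9))) NOT LIKE '%[^0-9]%'
          AND LTRIM(RTRIM(SUBSTRING([VZWECK2], 1, 9))) <> ''
         THEN CONVERT(decimal(18, 0), SUBSTRING([VZWECK2], 1, 9))
         ELSE NULL
       END AS [NUM]
FROM [table];
```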

Do you know the value which throws the error? ISNUMERIC is not exactly fool-proof; for example:
select ISNUMERIC('$')
select ISNUMERIC('+')
select ISNUMERIC('-')
all yield 1
Alternatively, you could go with TRY_PARSE instead.
Edit: TRY_PARSE was introduced in SQL Server 2012, so it may not be available to you.
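If you are on 2012 or later, a minimal sketch (column and table names from the question):

```sql
-- TRY_PARSE returns NULL for values it cannot parse
SELECT TRY_PARSE(SUBSTRING([VZWECK2], 1, 9) AS decimal(18, 0)) AS [NUM]
FROM [table];
```

Note that TRY_PARSE is culture-aware and noticeably slower than TRY_CONVERT, so for plain digit strings TRY_CONVERT is usually the better choice.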

Related

Error converting NVARCHAR to Float using a CAST in T-SQL

I have a table which has values in the r_version_label column like:
*CURRENT*, *LATEST*, 0.1, 0.2, 0.3, *0.8.5*, 1.0, 1.1
I can ignore *CURRENT*, *LATEST*, and legacy version numbers such as 0.8.5.
I am writing SQL as below:
WITH cte_version_label AS
(
SELECT DISTINCT r_version_label
FROM pharma_document_rp
WHERE r_version_label LIKE '%.%'
AND r_version_label NOT LIKE '%.%.%'
)
SELECT *
FROM cte_version_label
WHERE CAST(r_version_label AS float) = 0.1
But I am getting:
Msg 8114, Level 16, State 5, Line 1
Error converting data type nvarchar to float.
I can however do this:
WITH cte_version_label AS
(
SELECT DISTINCT r_version_label
FROM pharma_document_rp
WHERE r_version_label LIKE '%.%'
AND r_version_label NOT LIKE '%.%.%'
)
SELECT CAST(r_version_label AS float)
FROM cte_version_label
Which returns all the right values without error.
So why can't I cast in the WHERE clause, but can in the SELECT clause? There is not really a CAST issue, since I am removing the offending rows; otherwise the SELECT CAST would not work either.
The issue is, I need to run a python script reading in version numbers from Excel and then look these up in the table. Excel converts 1.0 into 1. So I need the whole query to operate using "floats" not the string type version stored in the database.
John's suggestion to use try_convert is definitely a better option.
But in response to WHY the second query works and the first doesn't, have a look at the execution plans.
On my instance (SQL 2017 Enterprise) this is the Estimated execution plan of the first query (I can't capture the Actual plan because the query errors out).
Have a look at the predicate used in the first node. It's trying to do the CAST (internally using CONVERT) in the first operation on your whole table. When that hits something like 0.8.5 it bails.
Now let's look at the execution plan for your second query that works (this one is the Actual execution plan).
Notice the predicate in the first node - it's just your string filter. The CAST does not happen until later down the execution chain, in the Compute Scalar node, AFTER values that offend the CAST have already been filtered out.
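One way to make the filter safe no matter where the optimizer places the cast is the TRY_CONVERT John suggested; a sketch assuming SQL Server 2012+, using the question's table and column:

```sql
WITH cte_version_label AS
(
    SELECT DISTINCT r_version_label
    FROM pharma_document_rp
    WHERE r_version_label LIKE '%.%'
      AND r_version_label NOT LIKE '%.%.%'
)
SELECT *
FROM cte_version_label
-- TRY_CONVERT yields NULL for '*CURRENT*', '0.8.5', etc., so the
-- comparison filters them out instead of raising error 8114,
-- even if the optimizer pushes the cast down onto the whole table
WHERE TRY_CONVERT(float, r_version_label) = 0.1;
```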

Error cast varchar as decimal

I have this query in SQL Server 2008:
select ID, CAST(replace(SUBSTRING(LTRIM([value]),1,8)+'.'+SUBSTRING(LTRIM([value]),9,7),',','.') AS DECIMAL(16,7))
from T1
This returns a cast error.
To find where is the problem I tried the following:
select ID,SUBSTRING(LTRIM([value]),1,8)+'.'+SUBSTRING(LTRIM([value]),9,7),
isnumeric(SUBSTRING(LTRIM([value]),1,8)+'.'+SUBSTRING(LTRIM([value]),9,7))
from T1
where isnumeric(SUBSTRING(LTRIM([value]),1,8)+'.'+SUBSTRING(LTRIM([value]),9,7))<>1
But it returns 0 rows, so I assume all the values can be cast to decimal, yet the first query still fails.
Am I misunderstanding something?
P.S.: [value] is a varchar column.
If you were using 2012, you could use TRY_CONVERT().
However there is a way to get this to work with 2008 by using a CASE statement. This works because CASE stops evaluating when it finds a match.
Here's a simple example for Varchar to Int but you can easily adapt it for Decimal.
CASE WHEN [value] LIKE '%[^0-9]%' THEN [value]
ELSE Convert(varchar(20), CAST([value] as INT))
END
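Adapting that pattern to decimal might look like this (a sketch only; the character class allows digits and a dot, which is approximate - a string with two dots would pass the check and still fail the CAST, so test against your data):

```sql
SELECT ID,
       CASE
         -- only CAST when the assembled string contains nothing but
         -- digits and '.' (the REPLACE normalizes ',' to '.')
         WHEN REPLACE(SUBSTRING(LTRIM([value]), 1, 8) + '.' +
                      SUBSTRING(LTRIM([value]), 9, 7), ',', '.')
              NOT LIKE '%[^0-9.]%'
         THEN CAST(REPLACE(SUBSTRING(LTRIM([value]), 1, 8) + '.' +
                           SUBSTRING(LTRIM([value]), 9, 7), ',', '.')
                   AS decimal(16, 7))
         ELSE NULL
       END AS parsed_value
FROM T1;
```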

Convert long decimal or float to varchar in SQL Server

One of the table I'm trying to query has a field with type decimal(38,19). I need to convert it to varchar in order for my perl DBI module to handle. How should I write the conversion in SQL to make it work? Specifically, if I run this in SQL Server Management Studio:
select convert(varchar, 19040220000.0000000000000000000)
I get:
Msg 8115, Level 16, State 5, Line 1
Arithmetic overflow error converting numeric to data type varchar.
I tried to round the number first:
select convert(varchar, round(19040220000.0000000000000000000, 0))
but that doesn't seem to work either (same error message). In fact round() doesn't seem to have an effect on that number for some reason. What should I do? Thx.
If you don't specify a length for your varchar, it defaults to 30 characters in the case of a CONVERT operation.
That's not long enough to hold your 38-digit decimal. (This is also why ROUND doesn't help: ROUND changes the value, not the data type - the result is still a decimal(38,19), which needs more than 30 characters to render.) So give your varchar an appropriate length in the CONVERT statement!
Try this:
select convert(varchar(40), 19040220000.0000000000000000000)
You need to use a varchar with a set length larger than the precision you want, e.g.
select convert(varchar(64), 19040220000.0000000000000000000)

How to fix "domain error" in SQL Server 2005 when using LOG() function to get product of set

I have an inline select statement to calculate the product of a set of values.
Since SQL Server 2005 doesn't have a built in Product aggregate function, I am using LOG/EXP to get it.
My select statement is:
(select exp(sum(log(value))) from table where value > 0)
Unfortunately I keep getting the following error:
Msg 3623, Level 16, State 1, Line 1
A domain error occurred.
I've ensured that none of the values are zero or negative so I'm not really sure why this error is occurring. Does anyone have any ideas?
One of the features of the query planner introduced in SQL 2005 is that, in some circumstances where the table statistics indicate it will be more efficient, the WHERE clause of a statement will be processed after the SELECT clause.
(I can't find the Books Online reference for this right now.)
I suspect this is what is happening here. You either need to exclude the rows where value = 0 before carrying out the calculation - the most reliable way being to store the rows you need in a temporary (#) table - or to modify your query to handle zero internally:
SELECT EXP(SUM(LOG(ISNULL(NULLIF(VALUE,0),1)))) AS result
FROM [table]
The NULLIF/ISNULL pair I have added to your query substitutes 1 for 0 - I think this will work, but you will need to test it on your data.
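The same guard can be written with CASE, which some find easier to read (same idea: rows the WHERE clause was meant to exclude contribute LOG(1) = 0 to the SUM, so they cannot break the calculation even if LOG is evaluated before the filter):

```sql
SELECT EXP(SUM(LOG(CASE WHEN value > 0 THEN value ELSE 1 END))) AS result
FROM [table]
WHERE value > 0;
```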

Inconsistency between MS SQL 2k and 2k5 with columns as function arguments

I'm having trouble getting the following to work in SQL Server 2k, but it works in 2k5:
--works in 2k5, not in 2k
create view foo as
SELECT usertable.legacyCSVVarcharCol as testvar
FROM usertable
WHERE rsrcID in
( select val
from
dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, default)
)
--error message:
Msg 170, Level 15, State 1, Procedure foo, Line 4
Line 25: Incorrect syntax near '.'.
So, legacyCSVVarcharCol is a column containing comma-separated lists of INTs. I realize that this is a huge WTF, but this is legacy code, and there's nothing that can be done about the schema right now. Passing "testvar" as the argument to the function doesn't work in 2k either. In fact, it results in a slightly different (and even weirder error):
Msg 155, Level 15, State 1, Line 8
'testvar' is not a recognized OPTIMIZER LOCK HINTS option.
Passing a hard-coded string as the argument to fnSplitStringToInt works in both 2k and 2k5.
Does anyone know why this doesn't work in 2k? Is this perhaps a known bug in the query planner? Any suggestions for how to make it work? Again, I realize that the real answer is "don't store CSV lists in your DB!", but alas, that's beyond my control.
Some sample data, if it helps:
INSERT INTO usertable (legacyCSVVarcharCol) values ('1,2,3');
INSERT INTO usertable (legacyCSVVarcharCol) values ('11,13,42');
Note that the data in the table does not seem to matter since this is a syntax error, and it occurs even if usertable is completely empty.
EDIT: Realizing that perhaps the initial example was unclear, here are two examples, one of which works and one of which does not, which should highlight the problem that's occurring:
--fails in sql2000, works in 2005
SELECT t1.*
FROM usertable t1
WHERE 1 in
(Select val
from
fnSplitStringToInt(t1.legacyCSVVarcharCol, ',')
)
--works everywhere:
SELECT t1.*
FROM usertable t1
WHERE 1 in
( Select val
from
fnSplitStringToInt('1,4,543,56578', ',')
)
Note that the only difference is the first argument to fnSplitStringToInt is a column in the case that fails in 2k and a literal string in the case that succeeds in both.
Passing column-values to a table-valued user-defined function is not supported in SQL Server 2000, you can only use constants, so the following (simpler version) would also fail:
SELECT *, (SELECT TOP 1 val FROM dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, ','))
FROM usertable
It will work on SQL Server 2005, though, as you have found out.
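On SQL Server 2000, one workaround is to avoid the function entirely and test membership with string matching (a sketch; it assumes the CSV contains no spaces around the commas):

```sql
-- Wrap the CSV in commas so every element is delimited on both sides,
-- then look for the target value as a delimited token, so '1' does
-- not falsely match inside '11' or '13'
SELECT t1.*
FROM usertable t1
WHERE ',' + t1.legacyCSVVarcharCol + ',' LIKE '%,1,%';
```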
I don't think you can use the DEFAULT keyword for function parameters in SS2K.
What happens when you run this SQL in SS2K?
select val
from dbo.fnSplitStringToInt('1,2,3', default)