I have a number, e.g., 11.61. I then make the following division: 1 / 11.61.
DECLARE @value1 numeric(28, 20)
,@value2 numeric(28, 20)
SET @value1 = 11.61
SET @value2 = 1 / @value1
The results look like
(No column name) (No column name)
11.61000000000000000000 0.08613264427217915000
I then want to reverse the division: 1 / @value2
SET @value1 = 1 / @value2
The result looks like
(No column name) (No column name)
11.61000000000000079000 0.08613264427217915000
The original and expected value is 11.61; however, as you can see, the result is 11.61000000000000079000. How can this error be avoided?
Use CAST. numeric(28,20) can only store a rounded approximation of 1/11.61, so the round trip does not land exactly back on 11.61; casting the results to fewer decimal places trims off the rounding error. For example, CAST(@value1 as decimal(10,2)) and CAST(@value2 as decimal(10,2)) will show only 2 decimal places.
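A minimal sketch of the full round trip with the cast applied (same variable names as the question):
DECLARE @value1 numeric(28, 20)
,@value2 numeric(28, 20)
SET @value1 = 11.61
SET @value2 = 1 / @value1
-- Reverse the division, then trim off the accumulated rounding error.
SELECT CAST(1 / @value2 AS decimal(10, 2)) -- 11.61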
I have a varchar(250) ParameterValue that I would like to check the number of decimal places in.
This is the line of code that I cannot get working:
where RIGHT(CAST(ParameterValue as DECIMAL(10,5)), ParameterValue), 1) != 0
The code below is where the line of code is used:
select *
INTO #ParamPrecision
from Data_table
where RIGHT(CAST(ParameterValue as DECIMAL(10,5)), ParameterValue), 1) != 0
AND ParameterBindStatus = 0
UPDATE a
SET a.ParameterBindStatus = 5
FROM Data_table a, #ParamPrecision b
WHERE a.SQLParameterId = b.SQLParameterId
INSERT Log_table
(
SQLBatchId,
SQLProcess,
Error,
SQLError_Message,
ParametersSent
)
SELECT SQLBatchId,
'sp_ReadParametersToBindData',
1,
'Invalid parameter value sent from MES',
'some parameter info'
FROM #ParamPrecision
SELECT *
INTO #UnBoundPrompt
FROM Data_table
WHERE DATEADD(DAY, 1, SQLTimeStamp) < GETDATE()
AND ParameterBindStatus = 0
UPDATE a
SET a.ParameterBindStatus = 99
FROM Data_table a, #UnBoundPrompt b
WHERE a.SQLParameterId = b.SQLParameterId
INSERT Log_table
(
SQLBatchId,
SQLProcess,
Error,
SQLError_Message,
ParametersSent
)
SELECT SQLBatchId,
'sp_ReadParametersToBindData',
1,
'Parameter download timeout',
'some parameter info'
FROM #UnBoundPrompt
If the check for decimal places is not satisfied, the next select statement checks whether the parameter timestamp has been active for more than 1 day. If it has, a log entry is made.
If the number of decimal places exceeds 4, I want to set ParameterBindStatus = 5 and update the log table.
I have changed the code as follows to let me confirm that the rest of the code works (it does), but the code does not execute when trying to detect the number of decimal places.
select *
INTO #ParamPrecision
from Data_table
where ParameterValue > '1500'
AND ParameterBindStatus = 0
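For reference, a hedged sketch of the predicate the question seems to be after, assuming ParameterValue holds a plain decimal string with at most one decimal point; it counts the characters after the point instead of casting:
select *
INTO #ParamPrecision
from Data_table
where CHARINDEX('.', ParameterValue) > 0
AND LEN(ParameterValue) - CHARINDEX('.', ParameterValue) > 4 -- more than 4 decimal places
AND ParameterBindStatus = 0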
This may help with your precision problem. I've laid it out as a table so you can see each step of the transformation, but you can easily see the pattern :) essentially you just reverse the string and truncate it. All steps are included (it can be done faster), and you may or may not need to add a bit for the case where there is no decimal point.
--setup
create table test
(stringVal varchar(250));
insert into test values
('12.3456'),
('1.2345678'),
('12'),
('0.123')
--query
SELECT stringVal,
Reverse(CONVERT(VARCHAR(50), stringVal, 128)) as reversedText
, Cast(Reverse(CONVERT(VARCHAR(50), stringVal, 128)) as float) as float
, Cast(Cast(Reverse(CONVERT(VARCHAR(50), stringVal, 128)) as float) as bigint) as bigint
, len(Cast(Cast(Reverse(CONVERT(VARCHAR(50), stringVal, 128)) as float) as bigint)) as decimalPrecision
FROM test
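To cover the no-decimal-point case mentioned above ('12' would otherwise report 2), a hedged variant wraps the final expression in a CASE; note that because of the float cast, trailing zeros in the fraction are not counted:
SELECT stringVal,
CASE WHEN CHARINDEX('.', stringVal) = 0 THEN 0
ELSE LEN(CAST(CAST(REVERSE(stringVal) AS float) AS bigint))
END as decimalPrecision
FROM test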
I'm trying to load a huge number into Field1 INT, which can hold only a max of 2,147,483,647. I can't change the DDL, so I tried to find an ad hoc solution: cut a single digit out of the middle of the number and then add a check for uniqueness.
These numbers are in a format like 29000001234, and I mean to keep this format, with the zeros in the middle, for easy recognition. I don't want to introduce any new columns/tables for this task, as I have limited freedom there; this is a 3rd-party schema.
Can anybody suggest a better solution for how to remap/keep all numbers under that limit? This is my draft:
DECLARE @fl FLOAT = 29000001234
DECLARE @i INT
SELECT @i = (SUBSTRING(CAST(CAST(@fl AS BIGINT) AS VARCHAR(18)),1,4) +
SUBSTRING(CAST(CAST(@fl AS BIGINT) AS VARCHAR(18)),7,LEN(CAST(CAST(@fl AS BIGINT) AS VARCHAR(18)))) )
select @i;
But if you really want to remove the middle digits, here's another approach:
DECLARE @fl FLOAT = 29000001234
DECLARE @i INT
DECLARE @StringFloat as varchar(80)
SET @StringFloat = CONVERT(varchar(80), CAST(@fl AS bigint))
SET @i = CAST( CONCAT(LEFT( @StringFloat, 4 ), RIGHT( @StringFloat, 5 )) as int )
SELECT @i;
I think arithmetic operations should be less expensive than string operations, so you should use them instead:
DECLARE @fl FLOAT = 29000001234
DECLARE @flBig BIGINT = @fl
DECLARE @i INT
SET @i = (@flBig / 1000000000) * 10000000 + (@flBig % 100000000)
select @i; --> 290001234
The provided example assumes the first part of the number will have a maximum of two digits (i.e. 29 in your case) and that you want to allow a larger number in the left part (up to 999999).
NOTE: the parentheses are redundant, as division and multiplication have the same precedence and the modulo operator has higher precedence than addition. I have used them just to highlight the parts of the computation.
You can't do that without an arithmetic overflow or without losing your original data.
If you have a limitation in the columns of your destination table or query, use multiple rows:
declare @c bigint = 29000001234;
declare @s bigint = 1000000000; -- Separator value
;with cte(partNo, partValue) as (
select 1, @c % @s
union all
select partNo + 1, (@c / power(@s, partNo)) % @s
from cte
where (@c / power(@s, partNo)) > 0
)
select partValue
from cte;
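As a quick round-trip check (my own addition, reusing @c and @s from above), the parts can be reassembled to confirm nothing is lost:
;with cte(partNo, partValue) as (
select 1, @c % @s
union all
select partNo + 1, (@c / power(@s, partNo)) % @s
from cte
where (@c / power(@s, partNo)) > 0
)
select sum(partValue * power(@s, partNo - 1)) as original -- 29000001234
from cte;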
Seems like a strange situation; I'm not sure why you need to go to all the trouble of converting a big number to a string and then randomly removing a digit. Some more information about why, or what the real goal is, would be helpful.
That said, maybe it would be easier to just subtract a constant amount from these values? e.g.:
DECLARE @fl FLOAT = 29000001234
DECLARE @i INT
DECLARE @OFFSET BIGINT = 29000000000
SET @i = CAST(@fl AS BIGINT) - @OFFSET
SELECT @i
Which gives you an INT of 1234 as the result using your example.
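The offset mapping is also trivially reversible, as long as every stored value used the same constant; a one-line sketch using the names above:
SELECT CAST(@i AS BIGINT) + @OFFSET -- 29000001234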
The following query drops increasingly wide blocks of digits from the original third-party value and returns the results that fit in an INT. The results could be outer joined with the existing data to find a suitable new value.
declare @ThirdPartyValue as BigInt = 29000001234;
declare @MaxInt as BigInt = 2147483647;
declare @TPV as VarChar(19) = Cast( @ThirdPartyValue as VarChar(19) );
declare @TPVLen as Int = Len( @TPV );
with
-- 0 through 9.
Digits as (
select Digit from ( values (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) as Digits( Digit ) ),
-- 0 through @TPVLen .
Positions as (
select Ten_1.Digit * 10 + Ten_0.Digit as Number
from Digits as Ten_0 cross join Digits as Ten_1
where Ten_1.Digit * 10 + Ten_0.Digit <= @TPVLen ),
-- 1 through @TPVLen - 1 .
Widths as (
select Number
from Positions
where 0 < Number and Number < @TPVLen ),
-- Try dropping Width digits at Position from @TPV .
AlteredTPVs as (
select P.Number as Position, W.Number as Width,
Stuff( @TPV, P.Number, W.Number, '' ) as AlteredTPV
from Positions as P cross join Widths as W
where P.Number + W.Number <= @TPVLen )
-- See which results fit in an Int .
select Position, Width, AlteredTPV, Cast( AlteredTPV as BigInt ) as AlteredTPVBigInt
from AlteredTPVs
where Cast( AlteredTPV as BigInt ) <= @MaxInt -- Comment out this line to see all results.
order by Width, Position
It could be more clever about returning only distinct new values.
This general idea could be used to hunt down blocks of zeroes or other suitable patterns to arrive at a set of values to be tested against the existing data.
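For the distinct-values point above, a minimal tweak (my own sketch) is to swap in a final select that returns only unique candidates; it has to replace the query's last select, since the CTEs are scoped to a single statement:
select distinct Cast( AlteredTPV as BigInt ) as CandidateValue
from AlteredTPVs
where Cast( AlteredTPV as BigInt ) <= @MaxInt;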
The select returns right at 23,000 rows.
The except will return between 60 and 200 rows (and not the same rows each time).
The except should return 0 rows, as it is select a except select a.
PK: [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
[tf] is a float, and I get that float is not exact.
But I naively thought avg(float) would be repeatable.
Avg(float) does not appear to be repeatable.
What is the solution?
TF is between 0 and 1, and I only need about 5 significant digits.
I just need avg(TF) to be the same number from run to run.
Decimal(9,8) gives me enough precision, and if I cast to decimal(9,8) the except properly returns 0 rows.
I can change [TF] to decimal(9,8), but it will be a bit of work and a lot of regression testing, as some of the tests that use [tf] take over a day to run.
Is changing [TF] to decimal(9,8) the best solution?
SELECT [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
, avg([FTSindexWordOnce].[tf]) AS [avgTFraw]
FROM [docSVenum1]
JOIN [docFieldLock]
ON [docFieldLock].[sID] = [docSVenum1].[sID]
AND [docFieldLock].[fieldID] = [docSVenum1].[enumID]
AND [docFieldLock].[lockID] IN (4, 5) /* secLvl docAdm */
JOIN [FTSindexWordOnce]
ON [FTSindexWordOnce].[sID] = [docSVenum1].[sID]
GROUP BY [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
except
SELECT [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
, avg([FTSindexWordOnce].[tf]) AS [avgTFraw]
FROM [docSVenum1]
JOIN [docFieldLock]
ON [docFieldLock].[sID] = [docSVenum1].[sID]
AND [docFieldLock].[fieldID] = [docSVenum1].[enumID]
AND [docFieldLock].[lockID] IN (4, 5) /* secLvl docAdm */
JOIN [FTSindexWordOnce]
ON [FTSindexWordOnce].[sID] = [docSVenum1].[sID]
GROUP BY [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
order by [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
In this case tf is the term frequency of tf-idf.
tf normalization is subjective and does not require much precision.
Avg(tf) needs to be consistent from select to select, or the results are not consistent.
In a single select with joins I need a consistent avg(tf).
Going with decimal and a low precision for tf got consistent results.
This is very similar to: SELECT SUM(...) is non-deterministic when adding the column-values of datatype float.
The problem is that with an inexact datatype (FLOAT/REAL), the order of arithmetic operations on floating point matters. Demo from Connect:
DECLARE @fl FLOAT = 100000000000000000000
DECLARE @i SMALLINT = 0
WHILE (@i < 100)
BEGIN
-- Each 5000 is far below the rounding step of a float near 1e20, so it is lost.
SET @fl = @fl + CONVERT(float, 5000)
SET @i = @i + 1
END
SET @fl = @fl - 100000000000000000000
SELECT CONVERT(NVARCHAR(40), @fl, 2)
-- 0.000000000000000e+000
DECLARE @fl FLOAT = 0
DECLARE @i SMALLINT = 0
WHILE (@i < 100)
BEGIN
SET @fl = @fl + CONVERT(float, 5000)
SET @i = @i + 1
END
-- The 500000 accumulated first now survives, rounded to the float grid near 1e20.
SET @fl = @fl + 100000000000000000000
SET @fl = @fl - 100000000000000000000
SELECT @fl
-- 507904
Possible solutions:
CAST all arguments to an exact datatype like DECIMAL/NUMERIC
alter the table and change FLOAT to DECIMAL
try to force the query optimizer to calculate the sum in the same order
"The good news is that when a stable query result matters to your application, you can force the order to be the same by preventing parallelism with OPTION (MAXDOP 1)."
It looks like the initial link is dead: WebArchive.
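A minimal sketch of the first option applied to the question's query (joins trimmed for brevity; decimal(9,8) because tf is between 0 and 1 as stated, and MAXDOP 1 per the quoted advice):
SELECT [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
, avg(CAST([FTSindexWordOnce].[tf] AS decimal(9,8))) AS [avgTF]
FROM [docSVenum1]
JOIN [FTSindexWordOnce]
ON [FTSindexWordOnce].[sID] = [docSVenum1].[sID]
GROUP BY [docSVenum1].[enumID], [docSVenum1].[valueID], [FTSindexWordOnce].[wordID]
OPTION (MAXDOP 1);
Decimal addition is exact, so the aggregation order no longer affects the result.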
I have a situation like this:
I have a column of type money with 2 decimals. Example data: 65.00.
I need to pad it with zeros to 12 digits, so that the output would be like this:
(65.00 converted to 6500), zero-padded to 12 digits = 000000006500
Output: 000000006500
How can I achieve this? Thank you for your help and suggestions.
You can do this with a couple of casts, multiplying by 100, and using REPLICATE('0', ...) to pad with the requisite number of zeroes.
I'm assuming you do want up to 2 trailing decimal digits kept, but no more.
DECLARE @value MONEY;
SET @value = 65.123;
DECLARE @intValue BIGINT;
SET @intValue = CAST(@value * 100.0 AS BIGINT);
SELECT REPLICATE('0', 12 - LEN(@intValue)) + CAST(@intValue AS NVARCHAR(20));
Returns 000000006512
If you need to do this on a set, a CTE can be used for the intermediate step, e.g.
WITH cte AS
(
SELECT CAST(MoneyField * 100.0 AS BIGINT) AS intValue
FROM SomeTable
)
SELECT
REPLICATE('0',12-LEN(cte.intValue)) + CAST(cte.intValue AS NVARCHAR(20))
FROM cte;
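On SQL Server 2012 or later, FORMAT can do the padding in one step; a hedged alternative using the same assumed table and column names as the CTE above:
SELECT FORMAT(CAST(MoneyField * 100.0 AS BIGINT), '000000000000') AS paddedValue
FROM SomeTable;
FORMAT tends to be noticeably slower than string concatenation on large sets, so the REPLICATE approach may be preferable for bulk work.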
It is possible, but the output column should be of type varchar(15). If you want to do further operations on the output, you have to convert it to int or whatever:
SELECT CONCAT(REPLICATE('0', 12 - LEN(CAST(65.00 * 100 AS INT))), CAST(65.00 * 100 AS INT));
I am working with a table with a column value of type varchar(100).
All values in that column must be changed by multiplying them by 0.001, but my following update script fails with an "arithmetic overflow error while converting varchar to a numeric type".
update testTable
set value = cast((value * 0.001) as varchar);
I must not change the type of the column, and it holds values between 0 and 4294966796.
How do I cast correctly to get the calculation in the update working?
I tried cast(cast((value * 0.001) as float) as varchar) but it still throws the error.
Convert the string to a number before multiplying, so the implicit varchar-to-numeric conversion never has to happen inside the arithmetic:
CAST(CAST(value AS NUMERIC) * 0.001 AS VARCHAR(100))
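Used in the update, a minimal sketch:
update testTable
set value = CAST(CAST(value AS NUMERIC) * 0.001 AS VARCHAR(100));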
Here, try this:
update testTable
set value = cast((cast(value as float) * 0.001) as varchar);
If it still fails, then one of the rows has a non-numeric value.
You can:
update testTable
set value = cast(cast(value as decimal) * 0.001 as varchar(32))
One way:
update testTable
set value = convert(float, value) * 0.001
A simple example you can run:
DECLARE @z varchar(100)
SELECT @z = CONVERT(float, '123') * 0.001
SELECT @z
0.123
If your values exceed the precision of a float, you can do the arithmetic as strings. This is a special case, because multiplying by 0.001 just moves the decimal point three places to the left. The following works for values greater than 1000, with or without decimal places:
update testTable
set value = (case when charindex('.', value) = 0
then left(value, len(value) - 3) + '.' + right(value, 3)
else left(value, charindex('.', value) - 4) + '.' +
replace(right(value, len(value) - charindex('.', value) + 4), '.', '')
end)
If you have values less than 1000, you will need to prepend the values with 0s for this to work.
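A hedged sketch of the small-value integer case (my own addition, same string technique): pad integer strings of three digits or fewer before shifting, so '999' becomes '0.999' and '5' becomes '0.005'; values under 1000 that already contain a decimal point would need similar padding first:
update testTable
set value = case
when charindex('.', value) = 0 and len(value) <= 3
-- pad to three digits, then prefix '0.' (e.g. '5' -> '0.005')
then '0.' + replicate('0', 3 - len(value)) + value
when charindex('.', value) = 0
then left(value, len(value) - 3) + '.' + right(value, 3)
else left(value, charindex('.', value) - 4) + '.' +
replace(right(value, len(value) - charindex('.', value) + 4), '.', '')
end;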