DateAdd side effects? - sql-server

I am experiencing a very strange behavior here with Microsoft SQL Server 2016 (SP2-CU15):
select convert(datetime, max(TS) + 1.0/24) as A
from table;
yields 2021-01-16 11:59:00.000
while
select convert(datetime, max(TS) + 1.0/24) as A
, dateadd(hour, 1, max(TS)) as B
from table;
gives me 2021-01-16 11:58:59.943 for A (and 2021-01-16 11:59:00.000 for B). So, it seems to me that adding the second column changes the result for the first?!
I can force the two-column version to work by casting 1.0 to real, btw: convert(datetime, max(TS) + cast(1.0 as real)/24), but I can not force the one-column version to fail by writing convert(datetime, max(TS) + cast(1.0 as float)/24).
Any ideas what's happening here?
Thanks!
Hendrik.
Update: As requested, here is a minimal example:
CREATE TABLE TestTS (TS FLOAT);
INSERT INTO TestTS (TS) VALUES (44210.4993055556);
SELECT convert(datetime, max(TS) + 1.0/24) as A
, dateadd(hour, 1, max(TS)) as B
from TestTS
As described, if you comment out the B-column, the value of A changes.

There's nothing wrong with DATEADD. The problem is the rest of the question.
First, there's a critical bug. Dates are stored as floats. An appropriate type should be used instead, eg datetime2, datetime or datetimeoffset. The best options are datetime2(0) or datetimeoffset(0), assuming no millisecond precision is needed.
datetime is essentially a legacy type, whose internal storage format is ... a float in the OADate format. That doesn't mean floats should be used instead of the correct type though, any more than varbinary should be used instead of int or bigint.
Then, there's an attempt to add one hour to the OADate value, by calculating the floating point value of 1 hour in that format, 1/24. The exact value is a repeating decimal (0.0416666...), so any representation with a finite number of digits is an approximation, and the resulting rounding error makes the value inaccurate.
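You can see the effect of that truncation by comparing approximations of 1/24 at different scales (a quick sketch; column aliases are just illustrative, and the batch is assumed not to be auto-parameterized, which is typically the case for a constant-only SELECT):
-- 1/24 = 0.041666..., so every finite decimal is an approximation;
-- the scale SQL Server derives for the literal division decides how close it gets.
SELECT 1.0/24     AS SixFractionalDigits,   -- scale 6
       1.00000/24 AS EightFractionalDigits, -- scale 8
       1e0/24     AS FloatApproximation;    -- float, ~0.0416666666666667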
Solution
The real solution is to use the correct type and DATEADD, eg:
CREATE TABLE TestTS (TS datetime2(0));
INSERT INTO TestTS (TS) VALUES ('2021-01-16 10:59:00.000');
SELECT dateadd(hour, 1, max(TS)) as B
from TestTS
If you want millisecond precision, use datetime2(3).
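For instance (a minimal sketch with a made-up timestamp):
-- dateadd on datetime2(3) keeps the milliseconds intact, with no float arithmetic involved
declare @ts datetime2(3) = '2021-01-16 10:59:00.123';
select dateadd(hour, 1, @ts) as OneHourLater;  -- 2021-01-16 11:59:00.123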
Getting the hack to work
If you used datetime you wouldn't need to convert to datetime in the end, but the result would still be imprecise. This:
declare @TestTS table (TS datetime);
INSERT INTO @TestTS (TS) VALUES ('2021-01-16 10:59:00.000');
SELECT max(ts) + (1.0/24)
from @TestTS
Produces 2021-01-16 11:58:59.943. The only reason the hack appeared to work in the first place was probably rounding errors during conversion.
The only way to get a correct result by adding floating point numbers is to increase the precision to 8 fractional digits:
declare @TestTS table (TS datetime);
INSERT INTO @TestTS (TS) VALUES ('2021-01-16 10:59:00.000');
SELECT max(ts) + (1.00000/24) --, dateadd(hour, 1, max(TS)) as B
from @TestTS
That produces 2021-01-16 11:59:00.000.
1.0 is a decimal(2,1). For decimal division, T-SQL derives the result scale as max(6, s1 + p2 + 1), where s1 is the scale of the dividend and p2 the precision of the divisor. Here 1.0 / 24 is evaluated as numeric(2,1) / numeric(2,0), giving a scale of 6, which isn't enough. Writing 1.00000 makes the dividend numeric(6,5), so the result gets 8 fractional digits, which is.
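You can verify the derived scales with SQL_VARIANT_PROPERTY (a sketch; run as a standalone constant-only batch, where the literals are assumed not to be parameterized):
SELECT SQL_VARIANT_PROPERTY(1.0/24,     'BaseType') AS BaseType1,  -- numeric
       SQL_VARIANT_PROPERTY(1.0/24,     'Scale')    AS Scale1,     -- 6
       SQL_VARIANT_PROPERTY(1.00000/24, 'Scale')    AS Scale2;     -- 8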
Don't do this though.

Cause
Thanks to @MartinSmith for the clue.
The cause is query auto-parameterization and the data types being chosen to store values.
Query 1 is auto-parameterized:
StatementText="SELECT CONVERT([datetime],MAX([TS])+@1/@2)
....
<ColumnReference Column="@2" ParameterCompiledValue="(24)" ParameterRuntimeValue="(24)" />
<ColumnReference Column="@1" ParameterCompiledValue="(1.0)" ParameterRuntimeValue="(1.0)" />
Query 2 is not auto-parameterized:
StatementText="SELECT convert(datetime, max(TS) + 1.0/24) as A...."
Why the first query is auto-parameterized and the second is not is a bit of black magic.
From the SQL Server data types page:
When you use the +, -, *, /, or % arithmetic operators to perform
implicit or explicit conversion of int, smallint, tinyint, or bigint
constant values to the float, real, decimal or numeric data types, the
rules that SQL Server applies when it calculates the data type and
precision of the expression results differ depending on whether the
query is autoparameterized or not.
Therefore, similar expressions in queries can sometimes produce
different results. When a query is not autoparameterized, the constant
value is first converted to numeric, whose precision is just large
enough to hold the value of the constant, before converting to the
specified data type. For example, the constant value 1 is converted to
numeric (1, 0), and the constant value 250 is converted to numeric (3, 0).
When a query is autoparameterized, the constant value is always
converted to numeric (10, 0) before converting to the final data
type. When the / operator is involved, not only can the result type's
precision differ among similar queries, but the result value can
differ also. For example, the result value of an autoparameterized
query that includes the expression SELECT CAST (1.0 / 7 AS float)
will differ from the result value of the same query that is not
autoparameterized, because the results of the autoparameterized query
will be truncated to fit into the numeric (10, 0) data type.
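One way to see whether a statement was auto-parameterized is to look for a Prepared entry in the plan cache; a sketch using the standard DMVs (requires VIEW SERVER STATE permission):
-- run the query against TestTS first, then inspect the cache;
-- an objtype of 'Prepared' whose text contains @1/@2 placeholders means it was auto-parameterized
SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%TestTS%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';  -- exclude this probe itself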
Effect
Based on the above, the following data types are used (see Precision, scale, and length (Transact-SQL) for how result types are calculated):
Query 1 gives higher precision:
NUMERIC( 2, 1 ) / NUMERIC( 10, 0 ) = NUMERIC( 13, 12 )
Query 2:
NUMERIC( 2, 1 ) / NUMERIC( 2, 0 ) = NUMERIC( 7, 6 )
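Both derivations can be reproduced by hand with explicit casts (a sketch; SQL_VARIANT_PROPERTY just reports the scale of each result):
SELECT SQL_VARIANT_PROPERTY(CAST(1.0 AS numeric(2,1)) / CAST(24 AS numeric(2,0)),  'Scale') AS ScaleQuery2,  -- 6
       SQL_VARIANT_PROPERTY(CAST(1.0 AS numeric(2,1)) / CAST(24 AS numeric(10,0)), 'Scale') AS ScaleQuery1;  -- 12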
Solution
Cast your literals and / or intermediate results to the desired type to avoid surprises.
In your specific case, the best solution is not to use numeric arithmetic to manipulate dates, as Panagiotis Kanavos explains in his answer.
Alternatively, forcing float data types (per Dan Guzman's comment) with convert(datetime, max(TS) + 1e/24) would do the trick as well.
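Applied to the minimal example from the question (a sketch; 1e0 is a float literal, so the whole expression stays in float arithmetic):
SELECT convert(datetime, max(TS) + 1e0/24) AS A,
       dateadd(hour, 1, max(TS))           AS B
FROM TestTS;  -- A and B should now agree, per the workaround above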
This question deals with the same issue.

Related

MS SQL Obfuscation of Varchar(10) to Integer and back?

I have two fields that identify a user: one is an integer and one is a varchar(10) in the format 'AA111X', where the first two characters are alphabetic and the final X is alphanumeric. I need to convert the varchar into an integer for the integer field as a translation. The integer value used to be provided for us but no longer is. The answer may well be that this isn't possible and a lookup table will have to be used, but I'm trying to avoid the schema change if possible.
Is it necessary that you actually treat the first two characters as some value base 26, and the last character as some value base 36? Or is it only necessary that you can generate a unique integer for any possible input, in a way that can be converted back if necessary?
If the latter, and if the existing values are considered case insensitive, this solution will always result in a value that fits in a 4 byte integer:
declare @val varchar(10) = 'zz111z';
select cast(
           concat(
               ascii(substring(val, 1, 1)),
               ascii(substring(val, 2, 1)),
               substring(val, 3, 3),
               ascii(substring(val, 6, 1))
           )
           as int
       )
from (select upper(@val)) v(val);
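Since the first two ASCII codes and the last one are always two digits each, the encoded value is always 9 digits and can be unpacked again; a hypothetical reverse sketch:
declare @encoded int = 909011190;  -- what the query above returns for 'zz111z'
select concat(
           char(cast(substring(s, 1, 2) as int)),  -- first letter
           char(cast(substring(s, 3, 2) as int)),  -- second letter
           substring(s, 5, 3),                     -- the three digits
           char(cast(substring(s, 8, 2) as int))   -- final alphanumeric character
       ) as decoded                                -- 'ZZ111Z'
from (select cast(@encoded as varchar(10))) v(s);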

Way to show items where more than 5 decimal places occur?

I am trying to filter some query results so that only items with 6 decimal places are shown. I don't need it to round up or add 0's to the answer, just filter out anything that has 5 decimal places or fewer. (For example, if an item is 199.54215 I don't want to see it, but if it is 145.253146 I need it returned.) My current query looks like this:
select
TRA_CODPLANTA,
TRA_WO,
TRA_IMASTER,
tra_codtipotransaccion,
tra_Correlativo,
TRA_INGRESOFECHA,
abs(tra_cantidadparcial) as QTY
from mw_tra_transaccion
where FLOOR(Tra_cantidadparcial*100000) != tra_cantidadparcial*100000
and substring(tra_imaster,1,2) not in ('CP','SG','PI','MR')
and TRA_CODPLANTA not in ('4Q' , '5C' , '5V' , '8H' , '7W' , 'BD', 'DP')
AND tra_INGRESOFECHA > @from_date
and abs(tra_cantidadparcial) > 0.00000
Any assistance would be greatly appreciated!
Here is an example with ROUND, which seems to be the ideal function to use, since it remains in the realms of numbers. If you have at most 5 decimal places, then rounding to 5 decimal places will leave the value unchanged.
create table #test (Tra_cantidadparcial decimal(20,10));
INSERT #test (Tra_cantidadparcial) VALUES (1),(99999.999999), (1.000001), (45.000001), (45.00001);
SELECT * FROM #test WHERE ROUND(Tra_cantidadparcial,5) != Tra_cantidadparcial;
drop table #test
If your database values are VARCHAR and exist in the DB like so:
100.123456
100.1
100.100
You can achieve this using a wildcard LIKE statement example
WHERE YOUR_COLUMN_NAME LIKE '%.[0-9][0-9][0-9][0-9][0-9][0-9]%'
This will bring back anything containing a decimal point followed by AT LEAST 6 numeric digits.
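For instance, against the sample values mentioned above (a quick sketch):
declare @samples table (v varchar(20));
insert into @samples (v) values ('199.54215'), ('145.253146'), ('100.1');
select v from @samples
where v like '%.[0-9][0-9][0-9][0-9][0-9][0-9]%';  -- returns only 145.253146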
Here is an example using a conversion to varchar, taking LEN minus the CHARINDEX of the decimal point. I'm not saying this is the best way, but you did ask for an example, so here you go:
--Temp decimal value holding up to 10 decimal places and 10 whole-number places
DECLARE @temp DECIMAL(20, 10) = 123.4565432135
--LEN returns the number of characters in the converted varchar
--CHARINDEX returns the position of the decimal point within the varchar
--If the number of characters after the decimal point (length minus the position of the point)
--is greater than 5, you have at least 6 decimal places
IF LEN(CAST(@temp AS varchar(20))) - CHARINDEX('.', CAST(@temp AS varchar(20)), 0) > 5
SELECT 1
ELSE
SELECT 0
Here is a shorthand way.
WHERE (LEN(CONVERT(DOUBLE PRECISION, FieldName % 1)) - 2) >=5
One way would be to convert / cast that column to a lower precision. Doing this would cause automatic rounding, but that would show you if it is 6 decimals or not based on the last digit. If the last digit of the converted value is 0, then it's false, otherwise it's true.
declare @table table (v decimal(11,10))
insert into @table
values
(1.123456789),
(1.123456),
(1.123),
(1.123405678)
select
v
,cast(v as decimal(11,5)) --here, we are changing the value to have a precision of 5. Notice the rounding.
,right(cast(v as decimal(11,5)),1) --this is taking the last digit. If it's 0, we don't want it
from @table
Thus, your where clause would simply be.
where right(cast(tra_cantidadparcial as decimal(11,5)),1) > 0

Breaking down progress (percentage) through each quarter of the year in SQL Server

Basically I am trying to calculate the progress of the current quarter, represented as a percentage. Currently I have:
(DATEPART(dd, @AsOf) / 91) * 100
We are using 91 days as a fixed length for the quarter; 100% accuracy is not required.
@AsOf is being passed in as a DATETIME type.
I have tried multiple ways and I receive 0. I assume it is because I am using INT instead of DECIMAL but I tried that and I still get 0.
You should just be able to force it to be a decimal by adding a decimal point, like 91.0 and 100.0 to avoid integer division issues:
DECLARE @date DATETIME
set @date = getdate();
select DATEPART(dd, @date) TheDay,
(DATEPART(dd, @date) / 91.0) DivBy91,
(DATEPART(dd, @date) / 91.0) * 100.0 Result
Result:
TheDay    DivBy91     Result
19        0.208791    20.8791000
If an integer dividend is divided by an integer divisor, the result is an integer that has any fractional part of the result truncated.
With your original expression, the integer division DATEPART(dd, @AsOf) / 91 produces 0, and that is what is causing your result to be 0.
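A quick illustration of the difference:
select 19/91           as IntegerDivision,  -- 0, because both operands are int
       19/91.0         as DecimalDivision,  -- 0.208791
       19/91.0 * 100.0 as AsPercentage;     -- 20.8791000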

How do I do decimal arithmetic on two varchars and return result to an aliased column?

I have two fields of type varchar that contain numeric values or blank strings, the latter of which I have filtered out to avoid Divide by Zero errors.
I am attempting to determine the percentage value that num2 represents in relation to num1, i.e. (Num_2 * 1 / Num_1). Relatively simple math.
The problem I am having is that I cannot seem to do the math and then cast it to a decimal value. I keep receiving "Arithmetic overflow error converting int to data type numeric" errors.
Can someone help me out with the casting issue?
You didn't interpret the error correctly.
It is not about casting the result of your math to float, it is about implicit type casting before the equation is evaluated.
You have some values in your table that cannot be converted to numeric, because they are not valid numbers or they are out of range. It is enough for one row to contain invalid data to make the whole query fail.
Perhaps you're looking for something similar to this?
declare @table table (
    [numerator]   [sysname]
  , [denominator] [sysname]);
insert into @table ([numerator], [denominator])
values (N'1', N'2'),
       (N'9999999999', N'88888888888');
select case
           when isnumeric([numerator]) = 1
                and isnumeric([denominator]) = 1
           then cast([numerator] as [float]) / [denominator]
           else null
       end
from @table;
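On SQL Server 2012 and later, TRY_CAST is a tidier alternative, since it yields NULL instead of an error for values that are not valid numbers (a sketch against the same table variable; NULLIF also guards against a zero denominator):
select try_cast([numerator] as float) / nullif(try_cast([denominator] as float), 0) as ratio
from @table;  -- rows with non-numeric values or zero denominators simply return NULL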
Is this what you're looking for?
select cast('25.5' as decimal(15, 8)) / cast('100.0' as decimal(15, 8))
The example above will return this:
0.25500000000000000000000
In this case, I'm converting the operand types before they get used in the division.
Remember to replace the literals in my query with your field names.
You said the value can be a number or a blank string, so try something like this:
SELECT
(CASE WHEN NUM_2 = '' THEN 0 ELSE CAST(NUM_2 AS NUMERIC(15,4)) END)
/
(CASE WHEN NUM_1 = '' THEN 1 ELSE CAST(NUM_1 AS NUMERIC(15,4)) END)
You test whether the string is blank; if it is, you use 0 for the numerator (or 1 for the denominator, to avoid division by zero).
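An equivalent sketch using NULLIF, with hypothetical sample values standing in for the two varchar columns:
declare @NUM_1 varchar(20) = '', @NUM_2 varchar(20) = '25.5';
select cast(nullif(@NUM_2, '') as numeric(15,4))
       / nullif(cast(nullif(@NUM_1, '') as numeric(15,4)), 0) as ratio;
-- a blank or zero denominator yields NULL here instead of an error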

How to get a float result by dividing two integer values using T-SQL?

Using T-SQL and Microsoft SQL Server I would like to specify the number of decimal digits when I do a division between 2 integer numbers like:
select 1/3
That currently returns 0. I would like it to return 0,33.
Something like:
select round(1/3, -2)
But that doesn't work. How can I achieve the desired result?
The suggestions from stb and xiowl are fine if you're looking for a constant. If you need to use existing fields or parameters which are integers, you can cast them to be floats first:
SELECT CAST(1 AS float) / CAST(3 AS float)
or
SELECT CAST(MyIntField1 AS float) / CAST(MyIntField2 AS float)
Because SQL Server performs integer division. Try this:
select 1 * 1.0 / 3
This is helpful when you pass integers as params.
select x * 1.0 / y
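For example, with integer variables (a minimal sketch):
declare @x int = 1, @y int = 3;
select @x * 1.0 / @y as result;  -- a decimal result instead of 0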
It's not necessary to cast both of them. The result data type of a division is always the one with the higher data type precedence, so it is enough to cast just one operand:
SELECT CAST(1 AS float) / 3
or
SELECT 1 / CAST(3 AS float)
use
select 1/3.0
This will do the job.
I understand that CASTing to FLOAT is not allowed in MySQL and will raise an error when you attempt to CAST(1 AS float) as stated at MySQL dev.
The workaround to this is a simple one. Just do
(1 + 0.0)
Then use ROUND to achieve a specific number of decimal places like
ROUND((1+0.0)/(2+0.0), 3)
The above SQL divides 1 by 2 and returns a float to 3 decimal places, i.e. 0.500.
One can CAST to the following types: binary, char, date, datetime, decimal, json, nchar, signed, time, and unsigned.
Looks like this trick works in SQL Server and is shorter (based on previous answers):
SELECT 1.0*MyInt1/MyInt2
Or:
SELECT (1.0*MyInt1)/MyInt2
Use this
select cast((1*1.00)/3 AS DECIMAL(16,2)) as Result
In this SQL, first convert to float or multiply by 1.00 so the output is no longer an integer, then cast to the precision you need. Here I used 2 decimal places; you can choose what you need.
If you came here (just like me) to find the solution for an integer result, here is the answer:
CAST(9/2 AS UNSIGNED)
returns 5
I was surprised to see select 0.7/0.9 returning 0.8 in Teradata, given they're already decimal numbers! I had to do cast(0.7 as float) to get the output I was after.
When using literals, the best way is to "tell" SQL
which type you mean.
If you want a decimal result, add a decimal point ".0" to your numbers:
SELECT 1.0 / 3.0
Result
0.333333
If you want a float (real) result, add "e0" to your numbers:
SELECT 1e0 / 3e0
Result
0.333333333333333
