CURRENT_TIMESTAMP() precision? - snowflake-cloud-data-platform

The documentation says we can specify a precision from 0 (seconds) to 9 (nanoseconds).
However, it seems to default to milliseconds whatever I do...
SELECT
CURRENT_TIMESTAMP() AS CT0a
,to_varchar(CT0a, 'yyyy-mm-dd hh24:mi:ss.FF9') AS CT0b
,CURRENT_TIMESTAMP(1) AS CT1a
,to_varchar(CT1a, 'yyyy-mm-dd hh24:mi:ss.FF9') AS CT1b
,CURRENT_TIMESTAMP(9) AS CT9a
,to_varchar(CT9a, 'yyyy-mm-dd hh24:mi:ss.FF9') AS CT9b
;
It seems to return the same thing whatever precision I specify; if I display it with enough decimals, the resolution is invariably milliseconds...
CT0A 2020-04-08 23:37:56.667 +0000
CT0B 2020-04-08 23:37:56.667000000
CT1A 2020-04-08 23:37:56.600 +0000
CT1B 2020-04-08 23:37:56.667000000
CT9A 2020-04-08 23:37:56.667 +0000
CT9B 2020-04-08 23:37:56.667000000
The CT0a, CT1a and CT9a are displayed with the appropriate number of decimals but CT0b, CT1b and CT9b all display the exact same time up to 3 decimals...
Am I missing something obvious?

While the Snowflake timestamp data type supports precision up to 9 digits, the current_timestamp function does not. Anything beyond milliseconds would be false precision for the clock, so it's not reported.
https://docs.snowflake.com/en/sql-reference/functions/current_timestamp.html
Valid values range from 0 - 9. However, most platforms do not support true nanosecond precision; the precision that you get might be less than the precision you specify. In practice, precision is usually approximately milliseconds (3 digits) at most.
If you don't specify the output format, Snowflake will display with three decimal places for seconds, even if the internal precision is higher:
select to_timestamp(15863566077839811234, 9); -- the fractional seconds have 9-digit precision, but only 3 digits are displayed
select to_varchar(to_timestamp(15863566077839811234, 9), 'yyyy-mm-dd hh24:mi:ss.FF9'); -- displays all 9 digits of precision
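If you want to check what the clock actually hands back, something along these lines can help (just a sketch; the column aliases are made up, and DATE_PART with nanosecond extracts the fractional seconds):
select current_timestamp(9) as ts9
,to_varchar(ts9, 'yyyy-mm-dd hh24:mi:ss.FF9') as ts9_str -- digits beyond milliseconds come back as zeros
,date_part(nanosecond, ts9) as ns_part -- fractional seconds in nanoseconds; typically a multiple of 1,000,000
;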

Related

Breaking down progress(Percentage) through each quarter of the year in SQL Server

Basically I am trying to calculate the progress of the current quarter, represented as a percentage. Currently I have:
(DATEPART(dd,@AsOf)/91) * 100
We are using 91 days as a fixed length for the quarter; 100% accuracy is not required.
@AsOf is being passed in as a DATETIME parameter.
I have tried multiple ways and I receive 0. I assume it is because I am using INT instead of DECIMAL, but I tried that and I still get 0.
You should just be able to force the calculation into decimal arithmetic by adding a decimal point to the literals, like 91.0 and 100.0, to avoid integer division:
DECLARE @date DATETIME;
SET @date = GETDATE();
SELECT DATEPART(dd, @date) AS TheDay,
(DATEPART(dd, @date) / 91.0) AS DivBy91,
(DATEPART(dd, @date) / 91.0) * 100.0 AS Result;
Result:
TheDay DivBy91 Result
19 0.208791 20.8791000
If an integer dividend is divided by an integer divisor, the result is an integer that has any fractional part of the result truncated.
Integer division produces 0 in the DivBy91 column, which is what causes your result to be 0.
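If you would rather keep the literals as integers, an explicit CAST works too. This is just a sketch applied to the expression from the question, with @AsOf as the incoming parameter:
DECLARE @AsOf DATETIME = GETDATE();
SELECT (CAST(DATEPART(dd, @AsOf) AS DECIMAL(10, 4)) / 91) * 100 AS Result; -- the decimal operand prevents integer division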

What do precision and scale mean in the Time datatype in Sql Server?

When creating a TIME datatype, SQL Server seems to default to time(7). MSDN states that the precision and scale can range from (8, 0) to (16, 7).
Now, hours (00-23), minutes (00-59) and seconds (00-59) can have at most two digits each, so their precision can't be more than 6 all combined. If we add milliseconds (000-999) to the equation, then precision can go up to 9. There is no decimal point, so scale should always be zero.
Then what do values like (8, 0) or (16, 7) mean for precision (unless precision counts the colon separators between hours, minutes and seconds) and scale? Shouldn't the precision be just 9 (or 12 if we count the colons) and the scale always zero? How do I pick the precision of the time datatype from the available (0-7) values?
In time (and datetime2), the precision is inferred from the scale. As the user you can only choose the scale (the number of fractional-second digits); the precision then follows from it. The scale controls the fractional part of seconds only, e.g. (0) is whole seconds, (3) gives you millisecond resolution, and (7) gives 100 ns resolution.
The precision is the (inferred) number of characters at a given scale, including colons and decimal separator for seconds. Therefore, with (0), this results in a string representation such as hh:mm:ss (8 characters), while (3) results in hh:mm:ss.fff (12 characters) and (7) is hh:mm:ss.fffffff (16 characters). This corresponds to the numbers given in the table of the MSDN page you linked to.
Run this:
DECLARE @timeval AS TIME(7) = '12:34:56.1234567';
SELECT @timeval AS Value16_7Precision,
LEFT(CAST(@timeval AS NVARCHAR(20)), 16) AS First16,
LEFT(CAST(@timeval AS NVARCHAR(20)), 9) AS First9,
RIGHT(CAST(@timeval AS NVARCHAR(20)), 7) AS Last7;
To get this:
Value16_7Precision First16 First9 Last7
12:34:56.1234567 12:34:56.1234567 12:34:56. 1234567
For TIME(0) = (8,0), only the first 8 characters are considered: 12:34:56
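To see the scale-to-length relationship directly, a sketch along these lines works (the variable names are made up; the expected string lengths are noted in the comments):
DECLARE @t0 TIME(0) = '12:34:56.1234567',
        @t3 TIME(3) = '12:34:56.1234567',
        @t7 TIME(7) = '12:34:56.1234567';
SELECT DATALENGTH(CAST(@t0 AS VARCHAR(16))) AS Len0, -- 8 characters: hh:mm:ss
       DATALENGTH(CAST(@t3 AS VARCHAR(16))) AS Len3, -- 12 characters: hh:mm:ss.fff
       DATALENGTH(CAST(@t7 AS VARCHAR(16))) AS Len7; -- 16 characters: hh:mm:ss.fffffff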

SQL query multiply decimal

I'm trying to multiply two values. One is a decimal and one is numeric.
Example - Total is what I want:
Number Decimal Total
900 1.111 999.9
800 1.25 1000
460 4.25 1955
In my Sql query, I've tried the following:
(ISNUMERIC(UpgradeEmptyNodesPercentageLimitForAllocation) * RawTotalNodes) as ExpectedEmptyNodeCountForUpgrade
However, it always returns the Number value. How do I do the above?
Thanks...
Check the scale on your data types; if it is zero, SQL Server will drop the decimal places:
DECLARE @d1 DECIMAL(18,0) = 1.111;
DECLARE @d2 DECIMAL(18,10) = 1.111;
SELECT @d1, @d2;
ISNUMERIC is really for evaluating strings, not number types -- and it only returns 0 or 1. I think what you want to do is a CAST of your numeric and/or decimal values to one with more precision, and then multiply. What is affecting the operation is the precision and scale of both factors to your operation, as-stored.
These types, by name, are actually interchangeable - but precision and scale are not.
Try casting both to a NUMERIC of some acceptable precision and scale, and then multiplying, as in the sketch below. Or, if you don't have a precision and scale that will always work, cast to REAL, if that's an option.
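A sketch of that, using the column names from the question (the chosen precision/scale and the table name are just placeholders):
SELECT CAST(UpgradeEmptyNodesPercentageLimitForAllocation AS DECIMAL(18, 4))
       * CAST(RawTotalNodes AS DECIMAL(18, 4)) AS ExpectedEmptyNodeCountForUpgrade
FROM dbo.YourTable; -- placeholder table name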
Read more on MSDN.

Why does a FLOAT give me a more accurate result than a DECIMAL?

I am looking for a division result that is extremely accurate.
This SQL returns the following results:
SELECT (CAST(297282.26 AS DECIMAL(38, 30)) / CAST(495470.44 AS DECIMAL(38, 30))) AS ResultDecimal
SELECT (CAST(297282.26 AS FLOAT) / CAST(495470.44 AS FLOAT)) AS ResultFloat
Here is the accurate result from WolframAlpha:
http://www.wolframalpha.com/input/?i=297282.26%2F495470.44
I was under the impression that DECIMAL would be more accurate than FLOAT:
"Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types."
https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx
Why does the FLOAT calculation give me a result more accurate than when using DECIMAL?
I found the best precision to be achieved when you use:
SELECT (CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2))) AS ResultDecimal
This gives a result of
0.599999991926864496699338915153
I think the actual value (to 100 digits) is:
0.5999999919268644966993389151530412187657451370862810705720405842980259326873264124495499670979362562...
Please bear in mind SQL Server defines the maximum precision and scale for division as:
max precision = (p1 - s1 + s2) + MAX(6, s1 + p2 + 1) -- up to 38
max scale = MAX(6, s1 + p2 + 1)
Where p1 & p2 are the precision of the two numbers and s1 & s2 are the scale of the numbers.
In this case the maximum precision is (15-9+2) + MAX(6, 9+24+1) = 8 + 34 = 42.
However SQL Server only allows a maximum precision of 38.
The maximum scale = MAX(6, 9+24+1) = 34
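If you want to see the precision and scale SQL Server actually settled on after capping at 38, SQL_VARIANT_PROPERTY can report the result's metadata (a sketch; the aliases are arbitrary):
SELECT SQL_VARIANT_PROPERTY(CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2)), 'Precision') AS ResultPrecision,
       SQL_VARIANT_PROPERTY(CAST(297282.26 AS DECIMAL(15, 9)) / CAST(495470.44 AS DECIMAL(24, 2)), 'Scale') AS ResultScale;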
Hopefully you already understand that just because the FLOAT version presents more numbers after the decimal point, doesn't necessarily mean that those are the true numbers. This is about precision, not accuracy.
It is the CAST function itself that causes this loss of precision, not the difference between the FLOAT and DECIMAL data types.
To demonstrate this, compare your previous results to the result of this:
SELECT 297282.26 / 495470.44 AS ResultNoCast
In my version of the query, the presence of a decimal point in the literal numbers tells SQL Server to treat the values as DECIMAL datatype, with appropriate length and precision as determined by the server. The result is more precise than when you CAST explicitly to DECIMAL.
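You can check how SQL Server types those literals; SQL_VARIANT_PROPERTY reports the inferred metadata (a sketch, with arbitrary aliases):
SELECT SQL_VARIANT_PROPERTY(297282.26, 'BaseType') AS LiteralType, -- numeric
       SQL_VARIANT_PROPERTY(297282.26, 'Precision') AS LiteralPrecision, -- 8
       SQL_VARIANT_PROPERTY(297282.26, 'Scale') AS LiteralScale; -- 2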
A clue to the reason for this can be found hidden in the official documentation of the CAST function, under Truncating and Rounding Results:
When you convert data types that differ in decimal places, sometimes the result value is truncated and at other times it is rounded. The following table shows the behavior.
From | To | Behavior
numeric | numeric | Round
So the fact that each separate literal value is treated as a NUMERIC (the same thing as DECIMAL) on the way in, and is being cast to NUMERIC, causes rounding.
Anticipating your next question a little, if you want a more precise result from the NUMERIC/DECIMAL datatype, you just need to tell SQL Server that each component of the calculation is more precise:
SELECT 297282.26000000 / 495470.44000000 AS ResultSuperPrecise
This appears (from experimentation) to be the most precise I can get: either adding or removing a 0 from either the numerator or denominator makes the result less precise. I'm at a loss to explain why that is, because the result is only 23 digits to the right of the decimal point.
It doesn't give you a more accurate result. I say that because float is an approximate type and not all values can be stored exactly in a float. On the other side of that coin, though, float can carry a lot more precision. The maximum precision of a decimal/numeric is 38. https://msdn.microsoft.com/en-us/library/ms187746.aspx
When you look at float, though, the maximum precision is 53 (that is, 53 bits of mantissa). https://msdn.microsoft.com/en-us/library/ms173773.aspx
Okay, here is what I think is going on.
@philosophicles - I think you are right that the CAST is causing the problem, but not because I am trying to "convert data types that differ in decimal places".
When I execute the following statement
SELECT CAST((297282.26 / 495470.44) AS DECIMAL(38, 30)) AS ResultDecimal
The accurate result for the calculation is the long value quoted above (0.599999991926864496699338915153...), which has way more than 30 digits after the decimal point, while my data type has its scale set to 30. So the CAST rounds the value, then just pads zeros onto the end until there are 30 digits, and that rounded-and-padded value is what we end up with.
So the interesting thing is: how does the CAST determine how many decimals to round or truncate the output to? I am not sure, but as @philosophicles pointed out, the scale of the input affects the rounding applied to the output.
SELECT CAST(((297282.26/10000) / (495470.44/10000)) AS DECIMAL(38, 30)) AS ResultDecimal
Thoughts?
Also interesting:
However, in simple terms, precision is lost when the input scales are high because the result scales need to be dropped to 38 with a matching precision drop.
https://dba.stackexchange.com/questions/41743/automatic-decimal-rounding-issue
The precision and scale of the numeric data types besides decimal are fixed.
https://dba.stackexchange.com/questions/41743/automatic-decimal-rounding-issue
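A quick way to see that scale drop in action (a sketch comparing two otherwise identical divisions; the aliases are made up):
SELECT CAST(1 AS DECIMAL(38, 30)) / CAST(3 AS DECIMAL(38, 30)) AS HighScaleInputs, -- result scale gets squeezed down, typically to 6 decimals
       CAST(1 AS DECIMAL(18, 10)) / CAST(3 AS DECIMAL(18, 10)) AS ModerateScaleInputs; -- keeps many more decimals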

Is SQL Server 'MONEY' data type a decimal floating point or binary floating point?

I couldn't find anything that rejects or confirms whether SQL Server 'MONEY' data type is a decimal floating point or binary floating point.
In the description it says that MONEY type range is from -2^63 to 2^63 - 1 so this kind of implies that it should be a binary floating point.
But on this page it lists MONEY as "exact" numeric. Which kind of suggests that MONEY might be a decimal floating point (otherwise how is it exact? or what is the definition of exact?)
Then, if MONEY is a decimal floating point, what is the difference between MONEY and DECIMAL(19,4)?
Neither. If it were an implementation of floating point it would be subject to the same inaccuracies as FLOAT and REAL types. See Floating Point on wikipedia.
MONEY is a fixed point type.
It's one byte smaller than a DECIMAL(19,4), because it has a smaller range (-922,337,203,685,477.5808 to 922,337,203,685,477.5807, as opposed to -10^15 + 0.0001 to 10^15 - 0.0001).
To see the differences we can look at the documentation:
Documentation for money:
Data type Range Storage
money -922,337,203,685,477.5808 to 922,337,203,685,477.5807 8 bytes
smallmoney -214,748.3648 to 214,748.3647 4 bytes
The money and smallmoney data types are accurate to a ten-thousandth of the monetary units that they represent.
Compare to decimal:
When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.
Precision Storage
1 - 9 5 bytes
10 - 19 9 bytes
20 - 28 13 bytes
29 - 38 17 bytes
So they're not exactly equivalent, just similar. A DECIMAL(19,4) has a slightly greater range than MONEY (it can store from -10^15 + 0.0001 to 10^15 - 0.0001), but also needs one more byte of storage.
In other words, this works:
CREATE TABLE Table1 (test DECIMAL(19,4) NOT NULL);
INSERT INTO Table1 (test) VALUES
(999999999999999.9999);
SELECT * FROM Table1
999999999999999.9999
But this doesn't:
CREATE TABLE Table1 (test MONEY NOT NULL);
INSERT INTO Table1 (test) VALUES
(999999999999999.9999);
SELECT * FROM Table1
Arithmetic overflow error converting numeric to data type money.
There's also a semantic difference. If you want to store monetary values, it makes sense to use the type money.
I think the primary difference will be the storage space required.
DECIMAL(19,4) will require 9 storage bytes
MONEY will require 8 storage bytes
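A quick check of those storage sizes (a sketch; the variable names are made up):
DECLARE @m MONEY = 0, @d DECIMAL(19,4) = 0;
SELECT DATALENGTH(@m) AS MoneyBytes, -- 8
       DATALENGTH(@d) AS DecimalBytes; -- 9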
