Some doubts related to Microsoft SQL Server bigint - sql-server

I have the following doubt related to Microsoft SQL Server: if a bigint column shows a value such as 0E-9, does it mean that the cell can contain a value with 9 decimal digits?

BIGINT: -9,223,372,036,854,775,808 TO 9,223,372,036,854,775,807
INT: -2,147,483,648 TO 2,147,483,647
SMALLINT: -32,768 TO 32,767
TINYINT: 0 TO 255
These are for storing non-decimal values. You need to use DECIMAL or NUMERIC to store values such as 112.455. When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.
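For example, a value like 112.455 needs a type with explicit precision and scale. A minimal sketch (the precision and scale chosen here are just one workable option):
DECLARE @d decimal(6, 3);  -- up to 6 digits in total, 3 of them after the decimal point
SET @d = 112.455;
SELECT @d;  -- 112.455, stored exactly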
0E-9 isn't a NUMERIC or INTEGER value. It's a VARCHAR, unless you mean something else like scientific notation (in which case 0E-9 is simply zero expressed as a float).
https://msdn.microsoft.com/en-us/library/ms187746.aspx
https://msdn.microsoft.com/en-us/library/ms187745.aspx

No, the value would be stored as an integer. Any decimal amounts will be dropped off and only values to the left of the decimal will be kept (no rounding).
More specifically, bigint stores whole numbers between -2^63 and 2^63-1.
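A quick way to see that truncation (a minimal sketch; the literal values are only for illustration):
SELECT CAST(123.999 AS bigint);   -- 123: the fractional part is dropped, not rounded
SELECT CAST(-123.999 AS bigint);  -- -123: truncation is toward zero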

Related

Is the SQL Server data type called "money" a binary-fixed-point or decimal-fixed-point type?

I'm expanding on a similar question titled Is SQL Server 'MONEY' data type a decimal floating point or binary floating point?
The accepted answer told us that the "money" datatype is fixed-point rather than a floating-point type, but didn't say whether it was binary fixed point or decimal fixed point.
I'm assuming it's decimal-fixed-point but I can't find confirmation of this anywhere.
The documentation tells us the range and size, but not the underlying implementation.
Not sure why you care about the underlying implementation, but you can CAST a money value to binary(8) to see the value's bits:
DECLARE @money money;
--same as min 64-bit signed integer (2's complement) with 4 decimal places assumed
SET @money = -922337203685477.5808;
SELECT CAST(@money AS binary(8)); --0x8000000000000000
--same as max 64-bit signed integer with 4 decimal places assumed
SET @money = 922337203685477.5807;
SELECT CAST(@money AS binary(8)); --0x7FFFFFFFFFFFFFFF
So money looks to be a 64-bit signed integer with 4 decimal places assumed. The precision/scale is not included with the value for money (and its smallmoney cousin).
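Since the assumed scale factor is a power of ten, that makes it decimal fixed point rather than binary fixed point. A small check (the value 1.0001 is arbitrary) showing the stored bits are just the decimal value multiplied by 10^4:
DECLARE @m money = 1.0001;
--1.0001 * 10^4 = 10001 = 0x2711, so the raw integer is the value scaled by 10^4
SELECT CAST(@m AS binary(8)); --0x0000000000002711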

Weird Behavior of Round function in MSSQL Database for Real column only

I found weird or strange behavior of the Round function in MSSQL for the real column type. I have tested this in Azure SQL DB and SQL Server 2012.
Why does @number = 201604.125 return 201604.1?
Why does round(1.12345, 10) return 1.1234500408?
-- For a float column it works as expected
-- Declare @number as float, @number1 as float;
Declare @number as real, @number1 as real;
set @number = 201604.125;
set @number1 = 1.12345;
select @number as Realcolumn_Original
,round(@number, 2) as Realcolumn_ROUND_2
,round(@number, 3) as Realcolumn_ROUND_3
,@number1 as Realcolumn1_Original
,round(@number1, 6) as Realcolumn1_ROUND_6
,round(@number1, 7) as Realcolumn1_ROUND_7
,round(@number1, 8) as Realcolumn1_ROUND_8
,round(@number1, 9) as Realcolumn1_ROUND_9
,round(@number1, 10) as Realcolumn1_ROUND_10
Output for real column type
I suspect what you are asking here is why:
DECLARE @n real = 201604.125;
SELECT @n;
returns 201604.1.
First port of call for things like this should be the documentation: let's start with float and real (Transact-SQL). Firstly we note that:
The ISO synonym for real is float(24).
If we then look further down:
float [ (n) ] Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.
n value   Precision   Storage size
1-24      7 digits    4 bytes
So, now we know that a real (aka a float(24)) has a precision of 7 digits. 201604.125 has 9 digits, which is 2 too many; so off come the trailing 2 and 5 in the returned value.
Now, ROUND (Transact-SQL). That states:
Returns a numeric value, rounded to the specified length or precision.
When using real/float those digits aren't actually lost, as such, due to the floating point representation. When you use ROUND, you are specifically stating "I want this many decimal places". This is why you can then see the .13 and the .125, as you have specifically asked for them. When you just returned the value of @number it had a precision of 7 digits, due to being a real, so 201604.1 was the value returned.
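You can confirm the underlying value is intact by widening the type. A minimal check (201604.125 happens to be exactly representable in binary, which keeps the example clean):
DECLARE @n real = 201604.125;
SELECT @n;                 -- 201604.1: display limited to 7 significant digits
SELECT CAST(@n AS float);  -- 201604.125: the stored value was exact all along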

T-SQL numeric variables conversion error

It is really strange how automatic conversion between numeric data types behaves in T-SQL:
Declare @fDays as float(12)
Declare @mAmount As Money
Declare @fDaysi as float(12)
Set @fDays = 3
Set @fDaysi = 1
Set @mAmount = 527228.52
Set @mAmount = @fDaysi * @mAmount / @fDays
Select @mAmount, 527228.52/3
The result of this computation is
175742.8281 175742.840000
Does this occur because money and float are not actually the same kind of numeric data? Float is an approximate numeric type, while Money is an exact numeric type.
Money and Decimal are fixed numeric datatypes while Float is an approximate numeric datatype. Results of mathematical operations on floating point numbers can seem unpredictable, especially when rounding is involved. Be sure you understand the significance of the difference before you use Float!
Also, Money doesn't provide any advantages over Decimal. If fractional units up to 5 decimal places are not valid in your currency or database schema, just use Decimal with the appropriate precision and scale.
ref link : http://www.sqlservercentral.com/Forums/Topic1408159-391-1.aspx
Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?
https://dba.stackexchange.com/questions/12916/datatypes-in-sql-server-difference-between-similar-dataypes-numeric-money
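One way to see the practical difference between money and decimal (a minimal sketch; dividing by 3 is just a convenient example): money keeps only 4 decimal places in results, while decimal division carries a much wider result scale.
SELECT CAST(1.0 AS money) / 3;          -- 0.3333: money result, 4 decimal places
SELECT CAST(1.0 AS decimal(19,4)) / 3;  -- 0.333333333333333: wider result scale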
float [ (n) ]
Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.
When n is 1-24, the precision is 7 digits.
When n is 25-53, the precision is 15 digits.
So in your example the precision is 7 digits; thus the first part, @fDaysi * @mAmount, rounds the result to 7 digits: 527228.5. The second part divides 527228.5 by 3 in real precision, whose nearest representable value is 175742.828125, and casting that to Money results in 175742.8281. So FLOAT and REAL are approximate data types and sometimes you get such surprises.
DECLARE @f AS FLOAT = '29545428.022495';
SELECT CAST(@f AS NUMERIC(28, 14)) AS value;
The result of this is 29545428.02249500200000, from nothing more than a cast.
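If exact results are required, one option (a sketch, assuming the day counts may need fractional digits, hence the scale of 2) is to keep the whole computation in exact types:
Declare @fDays decimal(9, 2) = 3;
Declare @fDaysi decimal(9, 2) = 1;
Declare @mAmount money = 527228.52;
Set @mAmount = @fDaysi * @mAmount / @fDays;  -- evaluated in decimal, then converted to money
Select @mAmount;  -- 175742.84, matching 527228.52 / 3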

What is the max value of numeric(19, 0) and what happens if it is reached?

In an SQL Server table, I have a column with the following type:
numeric(19, 0)
What is the max value for this type?
This corresponds to a Long in Java, which has a maximum value of 9,223,372,036,854,775,807 (19 digits).
Does the above type in SQL Server have the same max value?
What happens in SQL Server if the above value is reached?
The MSDN documentation rather unhelpfully says:
When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1... Maximum storage sizes vary, based on the precision.
(max precision is numeric(38)).
From experimenting in SQL Server 2012, the maximum value I could insert into numeric(19,0) was 10^19 - 1025. Attempting to insert any higher value gives:
Msg 8115, Level 16, State 6, Line xx Arithmetic overflow error converting float to data type numeric.
The maximum does vary by precision. Note that the error message mentions float: the test values were being converted from float, whose limited precision (roughly 15-17 significant digits) explains why the observed ceiling falls a little short of the type's nominal maximum. No doubt a more exact answer could be given by delving into the bit-level details of the storage method.
numeric(19,0) specifies a fixed precision number having a precision of 19 digits and 0 digits to the right of the decimal point. It should not overflow. See decimal and numeric for more information.
First, decimal is synonymous with numeric.
decimal(2, 1) means that the total number of digits before and after the decimal point can be at most 2, and the number of digits after the decimal point can be at most 1. So the range is from -9.9 to 9.9.
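A quick check of those bounds (a minimal sketch):
DECLARE @d decimal(2, 1) = 9.9;  -- fits: 2 digits in total, 1 after the decimal point
SELECT @d;                       -- 9.9
-- DECLARE @e decimal(2, 1) = 10.0;  -- fails: arithmetic overflow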
The range for numeric(19, 0) is thus
-9999999999999999999 to
9999999999999999999
But sometimes a value such as
9999999999999999999.2
will also be accepted. That is because values with extra decimal digits are rounded to scale 0 on conversion: 9999999999999999999.2 rounds down to 9999999999999999999, which still fits, whereas 9999999999999999999.5 would round up and overflow.
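A sketch of that boundary behaviour:
DECLARE @n numeric(19, 0) = 9999999999999999999;         -- the maximum: nineteen nines
SELECT @n;
SELECT CAST(9999999999999999999.2 AS numeric(19, 0));    -- rounds down, succeeds
-- SELECT CAST(9999999999999999999.5 AS numeric(19, 0)); -- rounds up to 10^19: overflow error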

Is SQL Server 'MONEY' data type a decimal floating point or binary floating point?

I couldn't find anything that rejects or confirms whether the SQL Server 'MONEY' data type is decimal floating point or binary floating point.
In the description it says that the MONEY type range is from -2^63 to 2^63 - 1, so this kind of implies that it should be binary floating point.
But on this page it lists MONEY as an "exact" numeric, which kind of suggests that MONEY might be decimal floating point (otherwise how is it exact? Or what is the definition of exact?).
Then, if MONEY is decimal floating point, what is the difference between MONEY and DECIMAL(19,4)?
Neither. If it were an implementation of floating point it would be subject to the same inaccuracies as FLOAT and REAL types. See Floating Point on wikipedia.
MONEY is a fixed point type.
It's one byte smaller than a DECIMAL(19,4), because it has a smaller range (-922,337,203,685,477.5808 to 922,337,203,685,477.5807) as opposed to (-10^15 + 0.0001 to 10^15 - 0.0001).
To see the differences we can look at the documentation:
Documentation for money:
Data type Range Storage
money -922,337,203,685,477.5808 to 922,337,203,685,477.5807 8 bytes
smallmoney -214,748.3648 to 214,748.3647 4 bytes
The money and smallmoney data types are accurate to a ten-thousandth of the monetary units that they represent.
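That ten-thousandth accuracy means conversions to money round at the fourth decimal place. A quick illustration (assuming the default round-to-nearest conversion behaviour):
SELECT CAST(1.00006 AS money);  -- 1.0001: rounded at the fourth decimal place
SELECT CAST(1.00004 AS money);  -- 1.0000: rounded down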
Compare to decimal:
When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.
Precision Storage
1 - 9 5 bytes
10 - 19 9 bytes
20 - 28 13 bytes
29 - 38 17 bytes
So they're not exactly equivalent, just similar. A DECIMAL(19,4) has a slightly greater range than MONEY (it can store from -10^15 + 0.0001 to 10^15 - 0.0001), but also needs one more byte of storage.
In other words, this works:
CREATE TABLE Table1 (test DECIMAL(19,4) NOT NULL);
INSERT INTO Table1 (test) VALUES
(999999999999999.9999);
SELECT * FROM Table1
999999999999999.9999
But this doesn't:
DROP TABLE Table1; -- drop the DECIMAL version first if running both examples
CREATE TABLE Table1 (test MONEY NOT NULL);
INSERT INTO Table1 (test) VALUES
(999999999999999.9999);
SELECT * FROM Table1
Arithmetic overflow error converting numeric to data type money.
There's also a semantic difference. If you want to store monetary values, it makes sense to use the type money.
I think the primary difference will be the storage space required.
DECIMAL(19,4) will require 9 storage bytes
MONEY will require 8 storage bytes
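Those storage sizes can be verified directly; a minimal check (DATALENGTH returns the number of bytes used by an expression):
SELECT DATALENGTH(CAST(0 AS money)) AS money_bytes,             -- 8
       DATALENGTH(CAST(0 AS decimal(19, 4))) AS decimal_bytes;  -- 9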
