Character Width of Number Values in TSQL? - sql-server

MS SQL Server 2008.
I have 3 Ints, 1 BigInt, 1 Float, and 1 DateTime value.
I am trying to concatenate them all into a single Char value without losing any precision, which should let me create a single unique long value.
What would the total character width be if I made all of the numbers Chars and then combined them? The DateTime should go to MMDDYYHHMMSS.
Thanks.

INTs can be up to 10 digits.
BIGINTs can be up to 19 digits.
Floats could be anything. A float(53) carries only about 15 significant decimal digits of precision, but the value itself could be an enormous number with limited precision (up to about 1.79E+308). You don't want that as a string. If your application has knowledge of the actual range of values the float could take, you could make an application decision for a specific number of digits.
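A minimal sketch of the concatenation under those widths, assuming the values are non-negative and leaving the float out (its width has to come from your application's knowledge of the data; all variable names here are made up for illustration):
DECLARE @i1 INT = 42, @i2 INT = 7, @i3 INT = 1001
DECLARE @big BIGINT = 9000000000
DECLARE @dt DATETIME = GETDATE()
-- Zero-pad each integer to its maximum digit width (10 for INT, 19 for BIGINT),
-- render the DATETIME as MMDDYYHHMMSS, and concatenate: 3*10 + 19 + 12 = 61 chars.
SELECT CAST(
    RIGHT(REPLICATE('0', 10) + CAST(@i1 AS VARCHAR(10)), 10)
  + RIGHT(REPLICATE('0', 10) + CAST(@i2 AS VARCHAR(10)), 10)
  + RIGHT(REPLICATE('0', 10) + CAST(@i3 AS VARCHAR(10)), 10)
  + RIGHT(REPLICATE('0', 19) + CAST(@big AS VARCHAR(19)), 19)
  + REPLACE(CONVERT(CHAR(8), @dt, 10), '-', '')   -- style 10: mm-dd-yy -> MMDDYY
  + REPLACE(CONVERT(CHAR(8), @dt, 108), ':', '')  -- style 108: hh:mi:ss -> HHMMSS
  AS CHAR(61)) AS CombinedKey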

Related

SQL query multiply decimal

I'm trying to multiply two values. One is a decimal and one is numeric.
Example - Total is what I want:
Number Decimal Total
900 1.111 999.9
800 1.25 1000
460 4.25 1955
In my Sql query, I've tried the following:
(ISNUMERIC(UpgradeEmptyNodesPercentageLimitForAllocation) * RawTotalNodes) as ExpectedEmptyNodeCountForUpgrade
However, it always returns Number. How do I do the above?
Thanks...
Check the scale on your data types; if it is zero, then SQL will remove the decimals:
DECLARE @d1 DECIMAL(18,0) = 1.111
DECLARE @d2 DECIMAL(18,10) = 1.111
SELECT @d1, @d2
ISNUMERIC is really for evaluating strings, not number types -- and it only returns 0 or 1. I think what you want to do is a CAST of your numeric and/or decimal values to one with more precision, and then multiply. What is affecting the operation is the precision and scale of both factors of your operation, as stored.
These types, by name, are actually interchangeable - but precision and scale are not.
Try casting both to a NUMERIC of some acceptable precision and scale, and then multiplying. Or, if you don't have a precision and scale that will always work, then cast to REAL, if that's an option.
Read more on MSDN.
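As a sketch of that cast-then-multiply approach, assuming NUMERIC(18, 6) is enough precision for your data and that the two columns live in a table called dbo.NodeStats (a hypothetical name):
SELECT CAST(UpgradeEmptyNodesPercentageLimitForAllocation AS NUMERIC(18, 6))
       * RawTotalNodes AS ExpectedEmptyNodeCountForUpgrade
FROM dbo.NodeStats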

Some doubts related to Microsoft SQL Server bigint

I have the following doubt related to Microsoft SQL Server. If a bigint column has a value such as 0E-9, does it mean that this cell can contain a value with 9 decimal digits, or what?
BIGINT: -9,223,372,036,854,775,808 TO 9,223,372,036,854,775,807
INT: -2,147,483,648 TO 2,147,483,647
SMALLINT: -32,768 TO 32,767
TINYINT: 0 TO 255
These are for storing non-decimal values. You need to use DECIMAL or NUMERIC to store values such as 112.455. When maximum precision is used, valid values are from -10^38 + 1 through 10^38 - 1.
0E-9 isn't a NUMERIC or INTEGER value. It's a VARCHAR, unless you mean something else like scientific notation.
https://msdn.microsoft.com/en-us/library/ms187746.aspx
https://msdn.microsoft.com/en-us/library/ms187745.aspx
No, the value would be stored as an integer. Any decimal amounts will be dropped off and only values to the left of the decimal will be kept (no rounding).
More specifically, bigint stores whole numbers between -2^63 and 2^63-1.
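A quick illustration of that truncation, as a sketch:
SELECT CAST(9.99 AS BIGINT) AS Truncated,      -- 9: the fraction is dropped, not rounded
       CAST(-9.99 AS BIGINT) AS TruncatedNeg   -- -9: truncation is toward zero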

T-Sql numeric variables error conversion

It is really strange how automatic conversion between numeric data types behaves in T-SQL:
Declare @fDays as float(12)
Declare @mAmount As Money
Declare @fDaysi as float(12)
Set @fDays = 3
Set @fDaysi = 1
Set @mAmount = 527228.52
Set @mAmount = @fDaysi * @mAmount / @fDays
Select @mAmount, 527228.52/3
The result of this computation is
175742.8281 175742.840000
Does this occur because money and float are not actually the same kind of numeric data? Float is an approximate numeric type and Money is an exact numeric type.
Money and Decimal are fixed numeric datatypes while Float is an approximate numeric datatype. Results of mathematical operations on floating point numbers can seem unpredictable, especially when rounding is involved. Be sure you understand the significance of the difference before you use Float!
Also, Money doesn't provide any advantages over Decimal. If fractional units up to 5 decimal places are not valid in your currency or database schema, just use Decimal with the appropriate precision and scale.
Ref link: http://www.sqlservercentral.com/Forums/Topic1408159-391-1.aspx
Should you choose the MONEY or DECIMAL(x,y) datatypes in SQL Server?
https://dba.stackexchange.com/questions/12916/datatypes-in-sql-server-difference-between-similar-dataypes-numeric-money
float [ (n) ]
Where n is the number of bits that are used to store the mantissa of the float number in scientific notation and, therefore, dictates the precision and storage size. If n is specified, it must be a value between 1 and 53. The default value of n is 53.
When n is in 1-24, the precision is 7 digits.
When n is in 25-53, the precision is 15 digits.
So in your example the precision is 7 digits; thus the first part, @fDaysi * @mAmount,
rounds the result to 7 digits: 527228.5. The second part returns 527228.5/3 = 175742.828, and casting 175742.828 to Money results in 175742.8281. FLOAT and REAL are approximate data types, and sometimes you get such surprises.
DECLARE @f AS FLOAT = '29545428.022495';
SELECT CAST(@f AS NUMERIC(28, 14)) AS value;
The result of this is 29545428.02249500200000, from nothing more than the cast.
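If the goal is just to avoid the surprise, one sketch is to keep the intermediate arithmetic in DECIMAL rather than FLOAT (variable names follow the question; the DECIMAL(18,6) precision is an assumption):
Declare @fDays DECIMAL(18,6) = 3
Declare @fDaysi DECIMAL(18,6) = 1
Declare @mAmount MONEY = 527228.52
Set @mAmount = @fDaysi * @mAmount / @fDays
-- 175742.84: the arithmetic stays exact in decimal, then rounds to Money's 4 places
Select @mAmount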

SQL Server 2008 - Having trouble understanding decimal

I need to insert numbers with decimals into a SQL Server 2008 database. It seems like decimal() is the correct data type to use, however, I'm having trouble understanding it exactly.
I found this script (scroll down for decimal):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=95322
Which allows me to test different decimal settings against numbers, and there are some where I don't understand why they pass or fail. The way I understand it, when using decimal(precision, scale), precision is the number of digits to the left of the decimal and scale is the number of digits to the right of the decimal. Using this function, I don't understand why some are passing and why some are failing.
SELECT dbo.udfIsValidDECIMAL('2511.1', 6, 3)
I have 4 digits on the left and 1 on the right, yet this fails.
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 17)
SELECT dbo.udfIsValidDECIMAL('10.123456789123456789', 18, 16)
The first one fails, the second one passes. There are 18 digits after the decimal point, so it seems like both should fail (or pass and SQL truncates the number).
Maybe I have a fundamental misunderstanding in how decimal() is supposed to work?
DECIMAL(6,3) means: 6 digits in all, 3 of which to the right of the decimal point.
So you have 3 digits before, 3 digits after the decimal point, and of course it cannot handle 2511.1 - that's got four digits to the left of the decimal point. You'd need DECIMAL(7,3) to handle that.
See the MSDN documentation on DECIMAL:
decimal[ (p[ ,s] )] and numeric[ (p[ ,s] )]
p (precision)
The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.
s (scale)
The maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
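As a quick check of the DECIMAL(7,3) fix suggested above, a short sketch:
SELECT CAST(2511.1 AS DECIMAL(7,3)) AS NowFits   -- 2511.100: 7 - 3 = 4 integer digits
-- CAST(2511.1 AS DECIMAL(6,3)) raises an arithmetic overflow: only 6 - 3 = 3 integer digits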
Precision is the number of digits that can be stored total.
So the number of digits to the left of the decimal will be precision - scale.
For example, your first example will fail because you are only allowing for three places to the left of the decimal:
SELECT dbo.udfIsValidDECIMAL('2511.1',6,3)
cast(10.123456789123456789 as decimal(18,17))
A precision of 18 & scale of 17 allows just 1 digit to the left of the decimal place, but there are 2 in that example.
cast(10.123456789123456789 as decimal(18,16))
has room for 2 digits, so it succeeds.
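Both casts as a runnable sketch:
SELECT CAST(10.123456789123456789 AS DECIMAL(18,16)) AS TwoIntegerDigits
-- returns 10.1234567891234568 (the 18 fractional digits are rounded to 16)
-- CAST(10.123456789123456789 AS DECIMAL(18,17)) fails with an arithmetic overflow:
-- 18 - 17 leaves room for only 1 digit left of the point, but 10 needs 2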

Why can't I enter an integer value into a decimal field?

I'm trying to write an insert statement for a SQL Server table that inserts the value 1 into a decimal field. The field is of the type decimal(10, 10) which, as far as I understand, means that it can have up to 10 digits altogether, and up to 10 of those digits can be after the decimal point. But, when I try to run the insert statement I get the following error:
Arithmetic overflow error converting int to data type numeric.
If I change the data type of the field to decimal(11, 10), it suddenly works. What am I not understanding here? What am I doing wrong?
decimal(10, 10) means all decimal places, no digits to the left of the decimal point!
See here: http://msdn.microsoft.com/en-us/library/aa258832(SQL.80).aspx
decimal[(p[, s])]
p (precision) Specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision. The maximum precision is 38. The default precision is 18.
s (scale) Specifies the maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
decimal(11,10) gives you 1 digit to the left of the decimal and 10 to the right, so the integer 1 fits now!
EDIT
When using decimal(p,s), think of p as the total number of digits you want to store (regardless of which side of the decimal point they fall on), and s as how many of those p digits should be to the right of the decimal point.
DECIMAL(10,5)  = 12345.12345
DECIMAL(10,2)  = 12345678.12
DECIMAL(10,10) = .1234567891
DECIMAL(11,10) = 1.1234567891
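A sketch tying it back to the original error:
DECLARE @a DECIMAL(10,10)
-- SET @a = 1 would fail here: arithmetic overflow, since (10,10) leaves 0 integer digits
DECLARE @b DECIMAL(11,10)
SET @b = 1    -- works: 11 - 10 = 1 integer digit available
SELECT @b     -- 1.0000000000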
