Round Integer Value in SQL Query [closed] - sql-server

I want to get the following result from a SQL query.
If the number is 121, the query should return 130.
If the number is 125, the query should return 130.
If the number is 128, the query should return 130.
If the number is 130, the query should return 130.
If the number is 137, the query should return 140.
If the number is 140, the query should return 140.

You don't even need the floor() function when you're dealing with integer math.
(num + 9) / 10 * 10
Or you can think of it as finding the tens complement and adding that to the original number.
num + (10 - num % 10) % 10
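Both expressions depend on integer division truncating. As a quick sanity check outside the database, here is a minimal C sketch of the same arithmetic (the sample values are the ones from the question, and the little main is just for illustration; C's integer division truncates like SQL Server's for positive operands):

#include <stdio.h>

int main(void) {
    int samples[] = { 121, 125, 128, 130, 137, 140 };
    for (int i = 0; i < 6; i++) {
        int num = samples[i];
        int up1 = (num + 9) / 10 * 10;        /* add 9, truncate, scale back */
        int up2 = num + (10 - num % 10) % 10; /* add the tens complement */
        printf("%d -> %d (%d)\n", num, up1, up2);
    }
    return 0;
}

Both expressions produce 130, 130, 130, 130, 140, 140 for the question's inputs.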

Use the FLOOR function with some math, something like this:
select floor((121 + 9) / 10) * 10; --130

You need ROUND(number, -1).
ROUND rounds a positive or negative value to a specified length and accepts three arguments:
Value to round: a positive or negative number. The data type can be int (tiny, small, big), decimal, numeric, money or smallmoney.
Length (precision): a positive number rounds to the right of the decimal point; a negative number rounds to the left of it.
Function (optional): when this value is not 0, the value is truncated rather than rounded; when it is 0 or omitted, the value is rounded.
SELECT
    num,
    [rounded] = ROUND(num, -1)
FROM (VALUES (121), (125), (128), (130), (137), (140)) AS tab(num);

Related

Swapping any 2 digits of a number based on its position [closed]

What would be the logic for swapping any two digits of a number based on their positions?
For example, in 57896, any two digits have to be swapped depending on their positions,
like the 1st position with the 3rd, or the 2nd with the 5th, and then the swapped number is printed.
You can do this numerically, which will be much faster than "going in and out of a string". I'll work through swapping the 2 and the 5 in the number 12345 and, hopefully, you can generalise the approach. Denote by d the amount you add to the original number to obtain the number with the digits swapped.
Compute the difference between the two digits that are to be swapped. In this case that's 5 - 2 = 3. Note the sign convention carefully.
Since the 2 is in the 1000s position and the 5 is in the units position, the difference is d = 3 × 1000 - 3 × 1 = 2997. Note that the sign of the second term is the opposite of the first.
Add d to the original number to obtain your result.
As another example, consider swapping the 2 and the 3 in 12345. The difference between the digits is 1, the 2 is in the 1000s position and the 3 is in the 100s position, so d = 1 × 1000 - 1 × 100 = 900, and you add that to the original number.
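To make the approach concrete, here is a small C sketch of the idea; the function name swap_digits, the helper pow10l and the position convention (0 = units digit) are my own choices for illustration, not anything from the question:

#include <stdio.h>

static long pow10l(int k) {            /* 10^k for small non-negative k */
    long p = 1;
    while (k-- > 0) p *= 10;
    return p;
}

/* Swap the digits of n at positions i and j (0 = units), purely arithmetically. */
long swap_digits(long n, int i, int j) {
    long di = (n / pow10l(i)) % 10;    /* digit currently at position i */
    long dj = (n / pow10l(j)) % 10;    /* digit currently at position j */
    long d  = dj - di;                 /* difference between the two digits */
    /* Adding d at position i and subtracting it at position j swaps them. */
    return n + d * pow10l(i) - d * pow10l(j);
}

int main(void) {
    printf("%ld\n", swap_digits(12345, 0, 3)); /* 5 <-> 2: prints 15342 */
    printf("%ld\n", swap_digits(12345, 2, 3)); /* 3 <-> 2: prints 13245 */
    return 0;
}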

How to calculate function points [closed]

This is a question about theoretical computing. I came across a question like the one below:
Consider a project with the following functional units :
Number of user inputs = 50
Number of user outputs = 40
Number of user enquiries = 35
Number of user files = 06
Number of external interfaces = 04
Assuming all complexity adjustment factors and weighting factors as average, the function points for the project will be:
The answer is 672. How is this calculated?
1. Typical complexity averages are as follows:
AVERAGE complexity weights = {4, 5, 4, 10, 7} for the 5 functional units respectively.
2. Typical Characteristic weights are as follows:
AVERAGE characteristic weight = 3.
3. Function points: FP = UFP × VAF, where:
UFP = Unadjusted Function Points, i.e. the weighted sum of the 5 functional units given in the question,
VAF = Value Adjustment Factor, i.e. 0.65 + (0.01 × TDI),
TDI = Total Degree of Influence of the 14 General System Characteristics.
Thus function points can be calculated as:
= (200 + 200 + 140 + 60 + 28) × (0.65 + (0.01 × (14 × 3)))
= 628 × (0.65 + 0.42)
= 628 × 1.07
= 671.96 ≈ 672
Thus the function points for the project will be 672.
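For what it's worth, the whole calculation fits in a few lines of C; a minimal sketch, assuming the average weights and ratings stated above:

#include <stdio.h>

int main(void) {
    int counts[5]  = { 50, 40, 35, 6, 4 };  /* inputs, outputs, enquiries, files, interfaces */
    int weights[5] = {  4,  5,  4, 10, 7 }; /* average complexity weights */

    int ufp = 0;                            /* Unadjusted Function Points */
    for (int i = 0; i < 5; i++)
        ufp += counts[i] * weights[i];

    int tdi = 14 * 3;                       /* 14 characteristics, all rated average (3) */
    double vaf = 0.65 + 0.01 * tdi;         /* Value Adjustment Factor */

    printf("UFP = %d, VAF = %.2f, FP = %.2f\n", ufp, vaf, ufp * vaf);
    /* Prints: UFP = 628, VAF = 1.07, FP = 671.96, which rounds to 672 */
    return 0;
}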

Problematic understanding of IEEE 754 [closed]

First of all I would like to point out that I am not a native speaker and I really need the terms explained in more common language.
The second thing I would like to mention is that I am not a math genius. I am really trying to understand everything about programming, but IEEE-754 makes me think that it'll never happen; it's full of mathematical terms I don't understand.
What is precision? What is it used for? What is the mantissa and what is it used for? How do you determine the range of float/double from their size? What is the ± (plus-minus) symbol used for? (I believe it's a positive/negative choice, but what does that have to do with everything?)
Isn't there any brief and clean explanation you guys could provide me with?
I spent 600 years trying to understand Wikipedia. I failed tremendously.
What is precision?
It refers to how closely a binary floating-point representation can represent a real value. Real values have infinite precision and infinite range; digital values have finite range and precision. In practice, a single-precision IEEE-754 value can represent real values to a precision of 6 significant decimal figures, while double precision is good for 15 significant figures.
The practical effect of this, for example, is that the single-precision value 123456000.00 cannot be distinguished from, say, 123456001.00, but equally the value 0.00123456 can be represented.
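You can observe this directly; a minimal C sketch using the two values above (near 1.2 × 10^8 adjacent single-precision floats are 8 apart, so both literals round to the same stored value):

#include <stdio.h>

int main(void) {
    float a = 123456000.00f;
    float b = 123456001.00f;
    /* Both literals round to the same 32-bit float. */
    printf("%s\n", (a == b) ? "indistinguishable" : "distinct");
    return 0;
}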
What is it used for?
Precision is not used for anything other than to define a characteristic of a particular floating point representation.
What is mantissa and what is mantissa used for?
The term is not mentioned in the English-language Wikipedia article, and it is imprecise; in mathematics in general it has a different meaning than the one used here.
The correct term is significand. For the decimal value 0.00123456, for example, the significand is 123456; 123456000.00 has exactly the same significand. Each of these values has the same significand but a different exponent. The exponent is a scaling factor that determines where the decimal point is (hence floating point).
Of course, IEEE 754 is a binary floating-point representation, not decimal, but for the sake of explaining the terms it is perhaps easier to use decimal.
How to determine the range of float/double by their size?
By the size alone you cannot; you need to know how many bits are assigned to the significand and how many to the exponent. In C, however, the range is defined by the macros FLT_MIN, FLT_MAX, DBL_MIN and DBL_MAX in the float.h header. Other characteristics of the implementation's floating-point representation are described there as well.
Note that a specific compiler may not in fact use IEEE 754; however, that is the format used by most hardware FPU implementations, and the compiler will naturally follow the hardware. For targets with no FPU (typically small embedded processors), other formats may be used.
What is ± symbol (Plus-minus) used for?
It simply means that the value given may be either positive or negative. It may refer to a specific value, or it may indicate a range. So ±n may refer to the two discrete values -n and +n, or it may mean the range -n to +n. Context is everything! In this article it refers to the discrete values +0, -0, +∞ and -∞.
There are 3 different components: sign, exponent, mantissa.
Assuming that the exponent has only 2 bits, 4 combinations are possible:
binary   decimal
  00        0
  01        1
  10        2
  11        3
The represented floating-point value is 2^exponent:
binary exponent-value
00 2^0 = 1
01 2^1 = 2
10 2^2 = 4
11 2^3 = 8
The range of the floating-point value results from the exponent: 2 bits => maximum value = 8.
The mantissa divides the range from a given exponent value to the next higher one.
For example, if the exponent value is 2 and the mantissa has one bit, two values are possible:
exponent-value mantissa-binary represented floating-point value
2 0 2
2 1 3
The represented floating-point value is 2^exponent × (1 + m1×2^-1 + m2×2^-2 + m3×2^-3 + …).
Here is an example with a 3-bit mantissa:
exponent-value mantissa-binary represented floating-point value
2 000 2 × (1) = 2
2 001 2 × (1 + 2^-3) = 2.25
2 010 2 × (1 + 2^-2) = 2.5
2 011 2 × (1 + 2^-2 + 2^-3) = 2.75
2 100 2 × (1 + 2^-1) = 3
and so on…
The sign has just one bit:
0 -> positive value
1 -> negative value
In IEEE-754 a 32-bit floating-point data type has an 8-bit exponent (with a range from 2^-127 to 2^128) and a 23-bit mantissa.
sign  exponent   mantissa
1     10000010   01101000000000000000000
-     130        1.40625
The represented floating-point value for this is:
-1 × 2^(130 - 127) × (1 + 2^-2 + 2^-3 + 2^-5) = -11.25
Try it: http://www.h-schmidt.net/FloatConverter/IEEE754.html
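If you want to poke at the fields programmatically, here is a minimal C sketch that extracts them from the example value above; it handles normalised numbers only (no zeros, subnormals, infinities or NaNs):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -11.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret the float's bits */

    uint32_t sign     = bits >> 31;            /* 1 sign bit */
    uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 exponent bits, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;      /* 23 mantissa bits, implicit leading 1 */

    double significand = 1.0 + mantissa / 8388608.0;   /* mantissa / 2^23 */
    printf("sign=%u exponent=%u (2^%d) significand=%g\n",
           (unsigned)sign, (unsigned)exponent, (int)exponent - 127, significand);
    /* Prints: sign=1 exponent=130 (2^3) significand=1.40625 */
    return 0;
}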

Finding Factorials and Ending zeros of a Factorial Number [closed]

I have a problem: I want to find the factorial of big numbers, for example:
1555! = ?
195! = ?
My main problem is that I want to know the exact number of ending 0's of these factorial numbers.
I use the following formula:
zeros((10^n)!) = 2×10^(n-1) + 2^2×10^(n-2) + … + 2^n
With this I can work out the number of ending 0's of other factorials, like this:
zeros(100!) = 2×10^1 + 2^2×10^0 = 20 + 4 = 24
100! has 24 ending 0's as per this calculation.
But then I got another problem, e.g. for 95!:
i) 95! = (100 - 5)! → 24 - 2×5^(1-1) = 24 - 2 = 22 => 95! has 22 0's.
ii) 95! = (90 + 5)! → 9×(2×10^0) + 2×5^0 = 18 + 2 = 20 => 95! has 20 0's.
This is my problem: using the above formula I got two different answers and I am confused. I don't get the right answer, so please help me find it.
Thank you...
The number of trailing zeros in n! is the number of factors of 5 in the sequence 1, 2, ..., n. This is because the number of trailing zeros is the number of factors of 10 in the result, and 10 has the prime factorisation 5 × 2. There are always more factors of 2 than of 5, so the count of 5's gives the result.
The number of factors of 5 is [n/5] + [n/25] + ... + [n/(5^k)] + ..., where [ ] means round down (floor).
What should the code look like to compute this? Something like this, perhaps:
int trailing_factorial_zeros(int n) {
    int result = 0;
    int m5 = 5;               /* current power of 5 */
    while (n >= m5) {
        result += n / m5;     /* count the multiples of 5, 25, 125, ... up to n */
        m5 *= 5;
    }
    return result;
}
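For the numbers in the question, this gives trailing_factorial_zeros(95) = 22 and trailing_factorial_zeros(100) = 24, which matches the first of the two conflicting hand calculations.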
This is a bad question; it probably belongs on the Math site anyway. But here's a thought for you:
First 100! = 100 * 99!
99! = 99 * 98! and so forth until
1! = 1, and 0! = 1.
You want to know how many trailing 0's are in N! (at least that is how I understand the question).
Think of how many are in 10!
10! = 3628800
so there are two. The reason is that, of the factors 1 through 10, only 2 × 5 produces a number with a trailing 0, along with the 10 itself, so we have a total of 2. (5 × 4 also has a trailing 0, but 4 is a multiple of 2, and besides, each individual number may only be multiplied in once.)
It is a good bet, then, that 20! has 4 (it does).
It's now your job to prove (or disprove) that this pattern will hold, and then come up with a way to code it.

SQL Server cast fails with arithmetic overflow

According to the entry for decimal and numeric data types in SQL Server 2008 Books Online, precision is:
p (precision)
The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.
However, the second select below fails with "Arithmetic overflow error converting int to data type numeric."
SELECT CAST(123456789 as decimal(9,0))
SELECT CAST(123456789 as decimal(9,1))
see here: http://msdn.microsoft.com/en-us/library/aa258832(SQL.80).aspx
decimal[(p[, s])]
p (precision): Specifies the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision. The maximum precision is 38. The default precision is 18.
s (scale): Specifies the maximum number of decimal digits that can be stored to the right of the decimal point. Scale must be a value from 0 through p. Scale can be specified only if precision is specified. The default scale is 0; therefore, 0 <= s <= p. Maximum storage sizes vary, based on the precision.
When using decimal(p,s), think of p as the total number of digits you want to store (regardless of whether they fall to the left or right of the decimal point), and s as how many of those p digits should be to the right of the decimal point.
DECIMAL(10,5)= 12345.12345
DECIMAL(10,2)= 12345678.12
DECIMAL(10,10)= .1234567891
DECIMAL(11,10)= 1.1234567891
Your sample code fails:
SELECT CAST(123456789 as decimal(9,1))
because:
9 = precision (total number of digits to the left and right of the decimal point)
1 = scale (number of digits to the right of the decimal point)
9 - 1 = 8 (digits available to the left of the decimal point)
and your value 123456789 requires 9 digits to the left of the decimal point. You will need decimal(10,1), or just decimal(9,0).
Correct. Since you're using decimal(9,1), you have 9 total digits, but the ,1 reserves one of them for the right of the decimal point, so you can have at most 8 digits to the left and 1 to the right.
Try:
SELECT CAST(123456789 as decimal(10,1))
