Ruby on Rails 3 - Show decimals - database

Probably one of the simpler questions, but I haven't found a solution for it. I want to show 2 decimal places after performing a calculation on numbers coming from a database.
This is the code I have.
(((@best.price * @amount) + @best.retailer.profile.shippingCost)/(@best.productSize.productSize * @amount))

number_with_precision is your friend:
number = (((@best.price * @amount) + @best.retailer.profile.shippingCost)/(@best.productSize.productSize * @amount))
Then in your view:
number_with_precision(number, :precision => 2)

Related

Set number limit to 2 decimals

I am having a problem creating a function for a SQL Server query in PHP, or changing how the value is output on my index page. The result is something like 25.879999999999999 when I want it as 25.87.
when ap.idproduct = 1 then cast(tr.PreviousBalance as float)/100
else cast(tr.FinalBalance as float)/100 end as balance_before,
I need the float limited to 2 decimals, or a function (please explain how it is used, as I am kind of new to PHP).
Fix:
ROUND(CAST(tr.PreviousBalance / 100 AS float), 4)
Wrap the CAST in ROUND.

How do BigIntegers work in detail at the fundamental level?

I saw Explain BigInt Like I'm Five, but I already understand what a BigInt is. I want to know how to make one though. I am trying to pick apart BigInt.js (the v8 bigint.cc is too large and I'm not familiar with C++).
For myself and perhaps others in the future, could someone explain what the data model looks like for a BigInt that supports arbitrarily sized integers? Basically, what is the object and its properties? I get that all the arithmetic functions are implemented in unique ways for the BigInt, but I don't see what the kernel is. What is the essence of the structure of a BigInt? Perhaps this one will be slightly easier to grok.
A BigInt works exactly like the integers you learned about in school, except the "digits" are not based on 10 symbols; they are based on 4294967296 (2^32), or 18446744073709551616 (2^64), or, specifically for ECMAScript, 9007199254740991 (2^53 - 1).
The kernel of the data model is simply a list of "digits" that are themselves fixed-size integers and a sign bit (or alternatively, the first "digit" is itself signed). Everything else would be a performance optimization.
In pseudo-code, it would look something like this:
record BigInt
    sign: boolean
    digits: sequence[unsigned_integer]
or this:
record BigInt
    first_digit: signed_integer
    digits: sequence[unsigned_integer]
Again, if you write down an integer in base 10, you write it as a sequence of digits and a sign, e.g. writing the current year, you would write: 2, 0, 1, 9, signifying (from right to left)
9 * 10^0 = 9
+ 1 * 10^1 = 10
+ 0 * 10^2 = 0
+ 2 * 10^3 = 2000
====
2019
Or, maybe you would write 7, E, 3, signifying (from right-to-left)
3_16 * 10_16^0
+ E_16 * 10_16^1
+ 7_16 * 10_16^2
which is the same as
3_16 * 16_10^0
+ E_16 * 16_10^1
+ 7_16 * 16_10^2
which is the same as
3_10 * 16_10^0 = 3_10
+ 14_10 * 16_10^1 = 224_10
+ 7_10 * 16_10^2 = 1792_10
=======
2019_10
And a BigInt is represented in exactly the same way, except the base is (much) larger.
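To make that data model concrete, here is a minimal sketch in C of such a structure and of school-style addition on it. The type and field names are made up for illustration; this is not how BigInt.js or V8's bigint.cc actually lay things out.
#include <stdint.h>
#include <stdlib.h>

/* A toy arbitrary-precision integer: a sign flag plus a sequence of
 * base-2^32 "digits", stored least-significant digit first.
 * Everything beyond this layout is a performance optimization. */
typedef struct {
    int       negative; /* sign: nonzero if the value is negative */
    size_t    length;   /* number of 32-bit digits in use */
    uint32_t *digits;   /* digits[0] is the least significant digit */
} BigIntToy;

/* Add two non-negative toy bigints digit by digit, carrying into the
 * next digit exactly like school addition, but in base 2^32. */
static BigIntToy add_unsigned(const BigIntToy *a, const BigIntToy *b)
{
    size_t n = a->length > b->length ? a->length : b->length;
    BigIntToy r = { 0, 0, malloc((n + 1) * sizeof(uint32_t)) };
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t da = i < a->length ? a->digits[i] : 0;
        uint64_t db = i < b->length ? b->digits[i] : 0;
        uint64_t sum = da + db + carry;  /* cannot overflow 64 bits */
        r.digits[i] = (uint32_t)sum;     /* low 32 bits become the digit */
        carry = sum >> 32;               /* high bits carry to the next digit */
    }
    r.digits[n] = (uint32_t)carry;
    r.length = n + (carry != 0);
    return r;
}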

What is an efficient algorithm to write 4B integers to a text file

Let's say I want to write 1, 2, 3, 4, ... up to 4.096B in a text file. What would be a time-efficient way to do it? Just doing it sequentially is taking a long time, so I am wondering if there is a distributed way.
Thanks for all your comments on my question; they helped me solve this problem in a reasonable amount of time. Here's what I did:
Create a file using Excel containing a million integers, from 0 to 1000000
Upload this file to Hadoop
Write a Hive query with 4296 lines like the ones below:
a0 = SELECT IPDecimal + (100000 * 1) + 1 AS IPDecimal FROM #file;
a1 = SELECT IPDecimal + (100000 * 2) + 1 AS IPDecimal FROM #file;
.
.
.
a4295 = SELECT IPDecimal + (100000 * 4295) + 1 AS IPDecimal FROM #file;
Output the result of each SELECT statement above to a separate file, then consolidate the integers from the 4296 files into one single file.
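For comparison, if a Hadoop cluster is not available, a single sequential writer with a large stdio buffer can also produce a file like this; formatting and disk throughput dominate the run time. Below is a minimal sketch in C, where the 4,096,000,000 limit and the output filename are assumptions, not part of the original answer.
#include <stdio.h>

int main(void)
{
    const unsigned long long limit = 4096000000ULL; /* assumed upper bound */
    FILE *out = fopen("integers.txt", "w");         /* assumed output file */
    if (out == NULL)
        return 1;
    /* Use a 1 MiB stdio buffer so the loop does not hit the OS on every line. */
    static char buf[1 << 20];
    setvbuf(out, buf, _IOFBF, sizeof buf);
    for (unsigned long long i = 1; i <= limit; i++)
        fprintf(out, "%llu\n", i);
    fclose(out);
    return 0;
}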

Problems with Expressions in C

I have two functions written that have simple assignment statements with very simple expressions. The expressions are the same for both functions; however, they involve different variable types: one function uses an array of structs, the other just uses a typedef'd struct.
When running the functions, the second function fails to divide by 256, and I get very high values that are not "normalized". I have to uncomment the second line in the second function (valueB = valueB / 256) to get it to work.
The first function, however, works perfectly.
Here's the statement in Function One:
value = ((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area) / 256);
Here's the statement in Function Two:
valueB = ((dataPoints.p0B * p2Area)+(dataPoints.p1B * p3Area)+(dataPoints.p2B * p0Area)+(dataPoints.p3B * p1Area) / 256);
//valueB = valueB / 256;
Why would this happen?
Also, I pass the functions the same numbers and it doesn't seem to help.
This is on MacOSX 10.6.8, inside Xcode 3.2.6
Are you absolutely sure the first one works properly? You have
value = ((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area) / 256);
I think you want:
value = (((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area)) / 256);
A similar thing with the second; I think it should be:
valueB = (((dataPoints.p0B * p2Area)+(dataPoints.p1B * p3Area)+(dataPoints.p2B * p0Area)+(dataPoints.p3B * p1Area)) / 256);
In both cases I think you want to divide the sum of the products by 256, not just the last one. My change only involves placing an extra set of parentheses around the sum of the product subexpressions and dividing the entire thing by 256.
In all languages there is an order in which mathematical (and all other) operators are evaluated. It just so happens that * and / have higher precedence than + and - in C/C++. You may refer to this link for more details.
To illustrate what happened to you, consider this simple expression:
2 + 4 + 6 + 4 / 2
Since division occurs first (and there are no parentheses to alter the order) it gets computed as:
2 + 4 + 6 + (4 / 2) = 14
Not:
(2 + 4 + 6 + 4) / 2 = 8
So my change to your code was the same as putting parentheses around 2 + 4 + 6 + 4 / 2 giving (2 + 4 + 6 + 4) / 2 and forcing the division to be done last after all the additions are completed.
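The same point as a tiny, self-contained C program, using the numbers from the example above:
#include <stdio.h>

int main(void)
{
    /* Division binds tighter than addition, so without parentheses only
     * the last term is divided by 2. */
    int without_parens = 2 + 4 + 6 + 4 / 2;   /* 2 + 4 + 6 + (4 / 2) = 14 */
    int with_parens    = (2 + 4 + 6 + 4) / 2; /* 16 / 2 = 8 */
    printf("%d %d\n", without_parens, with_parens); /* prints: 14 8 */
    return 0;
}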

Why are Excel and SQL Server calculations different?

Let's have a look at the picture.
The result is different even though the expression is the same.
Why does this happen?
I have to follow the Excel result; what should I do with SQL Server?
No matter what the software is, 1 + 1 will always yield 2, and if it does not, you should check your calculation again. See below:
SELECT ((4972000.0000) * (1.0000 - 4.4000/100.0000))
/ ((1.0000 + ((36.0000/365.0000)) * (13.0000 / 100.0000)))
RESULT: 4693057.996104
To get the result to up to four decimal places, use the ROUND() function.
SELECT ROUND(((4972000.0000) * (1.0000 - 4.4000/100.0000))
/ ((1.0000 + ((36.0000/365.0000)) * (13.0000 / 100.0000))), 4)
RESULT: 4693057.996100
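As a rough illustration of why the two tools can disagree in the trailing digits: Excel evaluates the expression in IEEE 754 double precision (and displays at most 15 significant digits), while the query above uses SQL Server's decimal arithmetic. Here is a small C sketch evaluating the same expression in doubles; it only illustrates binary floating point, not Excel's exact code path.
#include <stdio.h>

int main(void)
{
    /* Same expression as the SQL query above, in double precision. */
    double value = (4972000.0 * (1.0 - 4.4 / 100.0))
                 / (1.0 + (36.0 / 365.0) * (13.0 / 100.0));
    printf("%.6f\n", value); /* approximately 4693057.996104 */
    printf("%.4f\n", value); /* printf rounds to 4 decimal places here */
    return 0;
}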
