My program does N×N matrix multiplication where the elements of both matrices are initialized to the values (0, 1, 2, ..., N) using a for loop. The elements are of type float. There is no memory-allocation problem. Matrix sizes are input as a multiple of 4, e.g. 4x4 or 8x8. The answers are verified against a sequential calculation. Everything works fine up to a matrix size of 64x64. A difference between the sequential version and the SSE version is observed only when the matrix size exceeds 64 (e.g. 68x68).
SSE snippet is as shown (size = 68):
void matrix_mult_sse(int size, float *mat1_in, float *mat2_in, float *ans_out) {
    __m128 a_line, b_line, r_line;
    int i, j, k;
    for (k = 0; k < size * size; k += size) {
        for (i = 0; i < size; i += 4) {
            j = 0;
            b_line = _mm_load_ps(&mat2_in[i]);
            a_line = _mm_set1_ps(mat1_in[j + k]);
            r_line = _mm_mul_ps(a_line, b_line);
            for (j = 1; j < size; j++) {
                b_line = _mm_load_ps(&mat2_in[j * size + i]);
                a_line = _mm_set1_ps(mat1_in[j + k]);
                r_line = _mm_add_ps(_mm_mul_ps(a_line, b_line), r_line);
            }
            _mm_store_ps(&ans_out[i + k], r_line);
        }
    }
}
With this, the answer differs at element 3673, where the multiplication results are:
scalar: 576030144.000000 & SSE: 576030208.000000
I also wrote a similar program in Java with the same initialization and setup and N = 68, and for element 3673 it gave 576030210.000000.
Now there are three different answers and I'm not sure how to proceed. Why does this difference occur, and how do we eliminate it?
I am summarizing the discussion in order to close this question as answered.
According to the article (What Every Computer Scientist Should Know About Floating-Point Arithmetic) in the link, floating point always incurs a rounding error, which is a direct consequence of the approximate nature of the floating-point representation.
Arithmetic operations such as addition and subtraction introduce precision error as well. Hence, only about the 6 most significant digits of a single-precision result (irrespective of where the decimal point is situated) can be considered accurate, while the remaining digits may be erroneous (prone to precision error).
Related
I'm trying to write a program that calculates the decimal digits of π to 1000 digits or more.
To practice low-level programming for fun, the final program will be written in assembly on an 8-bit CPU that has no multiplication or division and only performs 16-bit additions. To ease the implementation, it's desirable to use only 16-bit unsigned integer operations and an iterative algorithm. Speed is not a major concern. Fast multiplication and division are beyond the scope of this question, so don't consider those issues either.
Before implementing it in assembly, I'm still trying to figure out a usable algorithm in C on my desktop computer. So far, I found the following series reasonably efficient and relatively easy to implement.
The formula is derived from the Leibniz series using a convergence-acceleration technique. To derive it, see Computing the Digits in π by Carl D. Offner (https://cs.umb.edu/~offner/files/pi.pdf), pages 19-26; the final formula is shown on page 26. The initial formula I had written contained some typos; please refresh the page to see the fixed formula. The constant term 2 in the greatest term is explained on page 54. The paper describes an advanced iterative algorithm as well, but I didn't use it here.
If one evaluates the series using many (e.g. 5000) terms, it's possible to get thousands of digits of π easily, and I found this series easy to evaluate iteratively as well, using this algorithm:
Algorithm
1. First, rearrange the formula to obtain its constant terms from an array.
2. Fill the array with 2 to start the first iteration, so that the new formula resembles the original one.
3. Let carry = 0.
4. Start from the greatest term. Obtain one term (2) from the array, multiply the term by PRECISION to perform a fixed-point division against 2 * i + 1, and save the remainder as the new term in the array. Then add the next term. Now decrement i, go to the next term, and repeat until i == 1. Finally add the final term x_0.
5. Because a 16-bit integer is used, PRECISION is 10, hence 2 decimal digits are obtained, but only the first digit is valid. Save the second digit as the new carry, and output the first digit plus the previous carry.
6. x_0 is the integer 2; it should not be added in the successive iterations, so clear it.
7. Go to step 4 to calculate the next decimal digit, until we have all the digits we want.
Implementation 1
Translating this algorithm to C:
#include <stdio.h>
#include <stdint.h>

#define N 2160
#define PRECISION 10

uint16_t terms[N + 1] = {0};

int main(void)
{
    /* initialize the initial terms */
    for (size_t i = 0; i < N + 1; i++) {
        terms[i] = 2;
    }

    uint16_t carry = 0;
    for (size_t j = 0; j < N / 4; j++) {
        uint16_t numerator = 0;
        uint16_t denominator;
        uint16_t digit;

        for (size_t i = N; i > 0; i--) {
            numerator += terms[i] * PRECISION;
            denominator = 2 * i + 1;
            terms[i] = numerator % denominator;
            numerator /= denominator;
            numerator *= i;
        }
        numerator += terms[0] * PRECISION;

        digit = numerator / PRECISION + carry;
        carry = numerator % PRECISION;
        printf("%01u", digit);

        /* constant term 2, only needed for the first iteration. */
        terms[0] = 0;
    }
    putchar('\n');
}
The code can calculate π to 31 decimal digits, until it makes an error.
31415926535897932384626433832794
10 <-- wrong
Sometimes digit + carry is greater than 9, so an extra carry is needed. If we are very unlucky, there may even be a double carry, triple carry, etc. We use a ring buffer to store the last 4 digits. If an extra carry is detected, we output a backspace to erase the previous digit, perform the carry, and reprint the digits. This is just an ugly proof-of-concept workaround, irrelevant to my question about overflow, but included here for completeness. Something better will be implemented in the future.
Implementation 2 with Repeated Carry
#include <stdio.h>
#include <stdint.h>

#define N 2160
#define PRECISION 10
#define BUF_SIZE 4

uint16_t terms[N + 1] = {0};

int main(void)
{
    /* initialize the initial terms */
    for (size_t i = 0; i < N + 1; i++) {
        terms[i] = 2;
    }

    uint16_t carry = 0;
    uint16_t digit[BUF_SIZE];
    int8_t idx = 0;

    for (size_t j = 0; j < N / 4; j++) {
        uint16_t numerator = 0;
        uint16_t denominator;

        for (size_t i = N; i > 0; i--) {
            numerator += terms[i] * PRECISION;
            denominator = 2 * i + 1;
            terms[i] = numerator % denominator;
            numerator /= denominator;
            numerator *= i;
        }
        numerator += terms[0] * PRECISION;

        digit[idx] = numerator / PRECISION + carry;

        /* over 9, needs at least one carry op. */
        if (digit[idx] > 9) {
            for (int i = 1; i <= 4; i++) {
                if (i > 3) {
                    /* allow up to 3 consecutive carry ops */
                    fprintf(stderr, "ERROR: too many carry ops!\n");
                    return 1;
                }
                /* erase a digit */
                putchar('\b');
                /* carry */
                digit[idx] -= 10;
                idx--;
                if (idx < 0) {
                    idx = BUF_SIZE - 1;
                }
                digit[idx]++;
                if (digit[idx] < 10) {
                    /* done! reprint the digits */
                    for (int j = 0; j <= i; j++) {
                        printf("%01u", digit[idx]);
                        idx++;
                        if (idx > BUF_SIZE - 1) {
                            idx = 0;
                        }
                    }
                    break;
                }
            }
        }
        else {
            printf("%01u", digit[idx]);
        }

        carry = numerator % PRECISION;
        terms[0] = 0;

        /* put an element to the ring buffer */
        idx++;
        if (idx > BUF_SIZE - 1) {
            idx = 0;
        }
    }
    putchar('\n');
}
Great, now the program can correctly calculate 534 digits of π, until it makes an error.
3141592653589793238462643383279502884
1971693993751058209749445923078164062
8620899862803482534211706798214808651
3282306647093844609550582231725359408
1284811174502841027019385211055596446
2294895493038196442881097566593344612
8475648233786783165271201909145648566
9234603486104543266482133936072602491
4127372458700660631558817488152092096
2829254091715364367892590360011330530
5488204665213841469519415116094330572
7036575959195309218611738193261179310
5118548074462379962749567351885752724
8912279381830119491298336733624406566
43086021394946395
22421 <-- wrong
16-bit Integer Overflow
It turns out that during the calculation of the largest terms at the beginning, the error gets quite large, since the divisors there are in the range of ~4000. When evaluating the series, numerator in fact starts to overflow in the multiplication almost immediately.
The integer overflow is insignificant when calculating the first 500 digits, but it gets worse and worse, until it produces an incorrect result.
Changing uint16_t numerator = 0 to uint32_t numerator = 0 can solve this problem and calculate π to 1000+ digits.
However, as I mentioned before, my target platform is an 8-bit CPU that only has 16-bit operations. Is there a trick to solve the 16-bit integer overflow issue I'm seeing here, using only one or more uint16_t? If it's not possible to avoid multiple-precision arithmetic, what is the simplest method to implement it here? I know I somehow need to introduce an extra 16-bit "extension word", but I'm not sure how to implement it.
And thanks in advance for your patience to understand the long context here.
Take a look at related QA:
Baking-Pi Challenge - Understanding & Improving
It's using the Wiki: Bailey–Borwein–Plouffe_formula, which is more suited to integer arithmetic.
The real challenge however would be:
How do I convert a very long binary number to decimal?.
As you probably want to print the number in decimal base ...
Also, if you need carry in a higher-level language than asm, take a look at this:
Cant make value propagate through carry
You can modify it to handle as many carry bits as you need (as long as that is still less than the data type's bit width).
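For instance, a carry can be rippled through an array of decimal digits in plain C like this (a sketch of mine, not taken from the linked answer):

```c
#include <stdint.h>

/* Propagate a carry through an array of base-10 digits stored least
   significant first; returns whatever carry falls off the top. */
uint16_t propagate_carry(uint16_t *digits, int n, uint16_t carry)
{
    for (int i = 0; i < n && carry; i++) {
        digits[i] += carry;     /* add the incoming carry */
        carry = digits[i] / 10; /* extract the new carry */
        digits[i] %= 10;        /* keep a single digit */
    }
    return carry;
}
```

Applied to the digit buffer of Implementation 2 above, a pass like this before printing would replace the backspace-and-reprint dance.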
[Edit1] BBP example in C++/VCL
I used this formula (taken from Wiki page linked above):
converted to fixed point...
//---------------------------------------------------------------------------
AnsiString str_hex2dec(const AnsiString &hex)
{
    char c;
    AnsiString dec="",s;
    int i,j,l,ll,cy,val;
    int i0,i1,i2,i3,sig;
    sig=+1; l=hex.Length();
    if (l) { c=hex[l]; if (c=='h') l--; if (c=='H') l--; }
    i0=0; i1=l; i2=0; i3=l;
    for (i=1;i<=l;i++) // scan for parts of number
    {
        char c=hex[i];
        if (c=='-') sig=-sig;
        if ((c=='.')||(c==',')) i1=i-1;
        if ((c>='0')&&(c<='9')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
        if ((c>='A')&&(c<='F')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
        if ((c>='a')&&(c<='f')) { if (!i0) i0=i; if ((!i2)&&(i>i1)) i2=i; }
    }
    l=0; s=""; if (i0) for (i=i0;i<=i1;i++)
    {
        c=hex[i];
        if ((c>='0')&&(c<='9')) c-='0';
        else if ((c>='A')&&(c<='F')) c-='A'-10;
        else if ((c>='a')&&(c<='f')) c-='a'-10;
        for (cy=c,j=1;j<=l;j++)
        {
            val=(s[j]<<4)+cy;
            s[j]=val%10;
            cy =val/10;
        }
        while (cy>0)
        {
            l++;
            s+=char(cy%10);
            cy/=10;
        }
    }
    if (s!="")
    {
        for (j=1;j<=l;j++) { c=s[j]; if (c<10) c+='0'; else c+='A'-10; s[j]=c; }
        for (i=l,j=1;j<i;j++,i--) { c=s[i]; s[i]=s[j]; s[j]=c; }
        dec+=s;
    }
    if (dec=="") dec="0";
    if (sig<0) dec="-"+dec;
    if (i2)
    {
        dec+='.';
        s=hex.SubString(i2,i3-i2+1);
        l=s.Length();
        for (i=1;i<=l;i++)
        {
            c=s[i];
            if ((c>='0')&&(c<='9')) c-='0';
            else if ((c>='A')&&(c<='F')) c-='A'-10;
            else if ((c>='a')&&(c<='f')) c-='a'-10;
            s[i]=c;
        }
        ll=((l*1234)>>10); // num of decimals to compute
        for (cy=0,i=1;i<=ll;i++)
        {
            for (cy=0,j=l;j>=1;j--)
            {
                val=s[j];
                val*=10;
                val+=cy;
                s[j]=val&15;
                cy=val>>4;
            }
            dec+=char(cy+'0');
            for (;;)
            {
                if (!l) break;
                if (s[l]) break;
                l--;
            }
            if (!l) break;
        }
    }
    return dec;
}
//---------------------------------------------------------------------------
AnsiString pi_BBP() // https://en.wikipedia.org/wiki/Bailey–Borwein–Plouffe_formula
{
    const int N=100; // 32*N bit uint arithmetics
    int sh;
    AnsiString s;
    uint<N> pi,a,b,k,k2,k3,k4;
    for (pi=0,sh=(N<<5)-8,k=0;sh>=0;k++,sh-=4)
    {
        k2=k*k;
        k3=k2*k;
        k4=k3*k;
        a =k2* 120;
        a+=k * 151;
        a+=      47;
        b =k4* 512;
        b+=k3*1024;
        b+=k2* 712;
        b+=k * 194;
        b+=      15;
        a<<=sh;
        pi+=a/b;
    }
    pi<<=4;
    s=pi.strhex();
    s=s.Insert(".",2);
    return str_hex2dec(s);
}
//---------------------------------------------------------------------------
The code uses the VCL AnsiString, which is a self-allocating string, and my uint<N> template, which provides unsigned integer arithmetic of 32*N bit width based on my ALU32. As you can see, you only need big-integer division, addition, and multiplication for this (everything else is doable with normal integers).
Here is the decimal result versus a 1000-digit π reference:
ref: 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989
BPP: 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631558817488152092096282925409171536436789259036001133053054882046652138414695194151160943305727036575959195309218611738193261179310511854807446237996274956735188575272489122793818301194912983367336244065664308602139494639522473719070217986094370277053921717629317675238467481846766940513200056812714526356082778577134275778960917363717872146844090122495343014654958537105079227968925892354201995611212902196086403441815981362977477130996051870721134999999837297804995105973173281609631859502445945534690830264252230825334468503526193118817101000313783875288658753320838142061717766914730359825349042875546873115956286388235378759375195778185778048187
The computed bigint value is exported to a hex string and then converted to decimal base using str_hex2dec from the link above. The number of iterations depends on the target bit width.
The code is not optimized yet...
What about implementing 32-bit arithmetic?
For an addition, add the two low-order words (16 bits) first, then the two high-order words; if the low-order addition overflowed, carry a 1 into the high-order result.
If you can predict when overflow will occur, you can switch from 16 to 32 bits arithmetic when necessary.
Testing the carry flag directly cannot be done in pure C; it requires some inline assembly or an intrinsic function. (For unsigned additions, though, a carry can be detected portably by checking whether the wrapped low-word sum is smaller than one of its operands.)
Otherwise, you can be inspired by this answer: https://codereview.stackexchange.com/a/37178/39646
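A sketch of such a 32-bit addition built from two uint16_t words (the type and function names are mine, invented for illustration); the carry out of the low word is detected portably via wrap-around:

```c
#include <stdint.h>

/* A 32-bit value represented as two 16-bit words. */
typedef struct { uint16_t hi, lo; } u32w;

/* Add two 32-bit values using only 16-bit operations. An unsigned
   addition carried if and only if the wrapped sum is smaller than
   either operand. */
u32w add32(u32w a, u32w b)
{
    u32w r;
    r.lo = (uint16_t)(a.lo + b.lo);
    r.hi = (uint16_t)(a.hi + b.hi + (r.lo < a.lo)); /* add carry from low word */
    return r;
}
```

On the 8051 target this maps naturally onto an ADD followed by an ADDC.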
There is a trick:
Consider using an array for the numerators and another array for the denominators. Each position would represent the number of times that number is multiplied to get the actual number.
An example:
(1 * 2 * 3 * 7 * 7) / (3 * 6 * 8)
Would be represented as:
num[] = {1, 1, 1, 0, 0, 0, 2};
denom[] = {0, 0, 1, 0, 0, 1, 0, 1};
Then consider factorizing into prime numbers every number before storing it, so that you have lower numbers. Now you will need another array to store all the primes:
primes[] = {2, 3, 5, 7};
num[] = {1, 1, 0, 2};
denom[] = {4, 2, 0, 0};
This will allow you to store unimaginably big numbers, but sooner or later you will want to transform them back into ordinary numbers, so you will want to simplify first. The way to do it is to accumulate factors[i] += num[i] - denom[i] for every field in the arrays, for every fraction in the series. You will want to simplify after each iteration, so you minimize the overflow risk.
factors[] = {-3, -1, 0, 2};
When you need the number, just do num *= pow(primes[i], factors[i]); if the factor is positive, or num /= pow(primes[i], -factors[i]); if it is negative, for every field in the arrays. (Do nothing if it is 0.)
num and denom are temporary arrays used to store a fraction, the array where the result is being stored is factors. Remember to memset the temporary arrays before every use.
This explanation is useful for any big fraction. To adapt it to your specific problem, you may need to use an integer power function, and also multiply by 10^something to turn the decimal part into an integral part. That is your mission, should you accept it :)
I need to calculate the entropy, and due to the limitations of my system I must use restricted C features (no loops, no floating-point support), with as much precision as possible. From here I figured out how to estimate the floor of log2 of an integer using bitwise operations. Nevertheless, I need more precision in the results. Since no floating-point operations are allowed, is there any way to calculate log2(x/y) with x < y so that the result would be something like log2(x/y)*10000, obtaining the precision I need through integer arithmetic?
You will base an algorithm on the formula
log2(x/y) = K*(-log(x/y));
where
K = -1.0/log(2.0); // you can precompute this constant before run-time
a = (y-x)/y;
-log(x/y) = a + a^2/2 + a^3/3 + a^4/4 + a^5/5 + ...
If you write the loop correctly—or, if you prefer, unroll the loop to code the same sequence of operations looplessly—then you can handle everything in integer operations:
(y^N*(1*2*3*4*5*...*N)) * (-log(x/y))
= y^(N-1)*(2*3*4*5*...*N)*(y-x) + y^(N-2)*(1*3*4*5*...*N)*(y-x)^2 + ...
Of course, ^, the power operator, binding tighter than *, is not a C operator, but you can implement that efficiently in the context of your (perhaps unrolled) loop as a running product.
The N is an integer large enough to afford desired precision but not so large that it overruns the number of bits you have available. If unsure, then try N = 6 for instance. Regarding K, you might object that that is a floating-point number, but this is not a problem for you because you are going to precompute K, storing it as a ratio of integers.
SAMPLE CODE
This is a toy code but it works for small values of x and y such as 5 and 7, thus sufficing to prove the concept. In the toy code, larger values can silently overflow the default 64-bit registers. More work would be needed to make the code robust.
#include <stddef.h>
#include <stdlib.h>
// Your program will not need the below headers, which are here
// included only for comparison and demonstration.
#include <math.h>
#include <stdio.h>

const size_t N = 6;
const long long Ky = 1 << 10; // denominator of K
// Your code should define a precomputed value for Kx here.

int main(const int argc, const char *const *const argv)
{
    // Your program won't include the following library calls but this
    // does not matter. You can instead precompute the value of Kx and
    // hard-code its value above with Ky.
    const long long Kx = lrintl((-1.0/log(2.0))*Ky); // numerator of K
    printf("K == %lld/%lld\n", Kx, Ky);

    if (argc != 3) exit(1);

    // Read x and y from the command line.
    const long long x0 = atoll(argv[1]);
    const long long y = atoll(argv[2]);
    printf("x/y == %lld/%lld\n", x0, y);
    if (x0 <= 0 || y <= 0 || x0 > y) exit(1);

    // If 2*x <= y, then, to improve accuracy, double x repeatedly
    // until 2*x > y. Each doubling offsets the log2 by 1. The offset
    // is to be recovered later.
    long long x = x0;
    int integral_part_of_log2 = 0;
    while (1) {
        const long long trial_x = x << 1;
        if (trial_x > y) break;
        x = trial_x;
        --integral_part_of_log2;
    }
    printf("integral_part_of_log2 == %d\n", integral_part_of_log2);

    // Calculate the denominator of -log(x/y).
    long long yy = 1;
    for (size_t j = N; j; --j) yy *= j*y;

    // Calculate the numerator of -log(x/y).
    long long xx = 0;
    {
        const long long y_minus_x = y - x;
        for (size_t i = N; i; --i) {
            long long term = 1;
            size_t j = N;
            for (; j > i; --j) {
                term *= j*y;
            }
            term *= y_minus_x;
            --j;
            for (; j; --j) {
                term *= j*y_minus_x;
            }
            xx += term;
        }
    }

    // Convert log to log2.
    xx *= Kx;
    yy *= Ky;

    // Restore the aforementioned offset.
    for (; integral_part_of_log2; ++integral_part_of_log2) xx -= yy;

    printf("log2(%lld/%lld) == %lld/%lld\n", x0, y, xx, yy);
    printf("in floating point, this ratio of integers works out to %g\n",
           (1.0*xx)/(1.0*yy));
    printf("the CPU's floating-point unit computes the log2 to be %g\n",
           log2((1.0*x0)/(1.0*y)));
    return 0;
}
Running this on my machine with command-line arguments of 5 7, it outputs:
K == -1477/1024
x/y == 5/7
integral_part_of_log2 == 0
log2(5/7) == -42093223872/86740254720
in floating point, this ratio of integers works out to -0.485279
the CPU's floating-point unit computes the log2 to be -0.485427
Accuracy would be substantially improved by N = 12 and Ky = 1 << 20, but for that you need either thriftier code or more than 64 bits.
THRIFTIER CODE
Thriftier code, wanting more effort to write, might represent the numerator and denominator in prime factors. For example, it might represent 500 as [2 0 3], meaning (2^2)(3^0)(5^3).
Yet further improvements might occur to your imagination.
AN ALTERNATE APPROACH
For an alternate approach, though it might not meet your requirements precisely as you have stated them, @phuclv has given the suggestion I would be inclined to follow if your program were mine: work the problem in reverse, guessing a value c/d for the logarithm and then computing 2^(c/d), presumably via a Newton-Raphson iteration. Personally, I like the Newton-Raphson approach better. See sect. 4.8 here (my original).
MATHEMATICAL BACKGROUND
Several sources including mine already linked explain the Taylor series underlying the first approach and the Newton-Raphson iteration of the second approach. The mathematics unfortunately is nontrivial, but there you have it. Good luck.
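For completeness, a different integer-only route (my own sketch, not part of the answer above) is the classic shift-and-square method: squaring a ratio in [1, 2) doubles its log2, so each squaring extracts one more fractional bit.

```c
#include <stdint.h>

#define FRAC_BITS 10

/* Compute log2(x/y) scaled by 2^FRAC_BITS using only integer
   operations; intended for small x and y so the fixed-point
   intermediates fit in 64 bits. */
int64_t log2_fixed(uint64_t x, uint64_t y)
{
    int64_t result = 0;
    /* integral part: scale x into [y, 2y) */
    while (x < y)      { x <<= 1; result -= 1 << FRAC_BITS; }
    while (x >= 2 * y) { y <<= 1; result += 1 << FRAC_BITS; }
    /* fixed-point ratio r = x/y in [1,2), scaled by 2^16 */
    uint64_t r = (x << 16) / y;
    for (int i = 0; i < FRAC_BITS; i++) {
        r = (r * r) >> 16;              /* square the ratio */
        if (r >= (2u << 16)) {          /* ratio reached [2,4): this bit is 1 */
            r >>= 1;
            result += 1 << (FRAC_BITS - 1 - i);
        }
    }
    return result;
}
```

For x = 5, y = 7 this yields a value near -497 (the true value is log2(5/7) * 1024 ≈ -497.1); accuracy is limited by the truncation in the fixed-point squaring.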
This question already has answers here:
How best to sum up lots of floating point numbers?
(5 answers)
Is floating point math broken?
(31 answers)
Closed 4 years ago.
Here I have a function sum() of type float that takes a pointer t of type float and an integer size, and returns the sum of all the elements in the array. Then I sum two arrays using that function: one with the BIG value at the first index and one with it at the last index. The two calls return different results. This is my code:
#include <stdlib.h>
#include <stdio.h>

#define N 1024
#define SMALL 1.0
#define BIG 100000000.0

float sum(float* t, int size) { // here I define the function sum()
    float s = 0.0;
    for (int i = 0; i < size; i++) {
        s += t[i];
    }
    return s;
}

int main() {
    float tab[N];
    for (int i = 0; i < N; i++) {
        tab[i] = SMALL;
    }

    tab[0] = BIG;
    float sum1 = sum(tab, N); // sum1 with the big value at index 0
    printf("sum1 = %f\n", sum1);

    tab[0] = SMALL;
    tab[N-1] = BIG;
    float sum2 = sum(tab, N); // sum2 with the big value at the last index
    printf("sum2 = %f\n", sum2);
    return 0;
}
After compiling and running the code I get the following output:
sum1 = 100000000.000000
sum2 = 100001024.000000
Why do I get different results even though the arrays contain the same elements (just at different indexes)?
What you're experiencing is floating point imprecision. Here's a simple demonstration.
#include <stdio.h>

int main() {
    float big = 100000000.0;
    float small = 1.0;
    printf("%f\n", big + small);
    printf("%f\n", big + (19 * small));
    return 0;
}
You'd expect 100000001.0 and 100000019.0.
$ ./test
100000000.000000
100000016.000000
Why'd that happen? Because computers don't store numbers the way we write them, and floating-point numbers doubly so. A float has a size of just 32 bits, but it can store numbers up to about 3×10^38, rather than the roughly 2×10^9 a 32-bit integer can, and it can store decimal places. How? It cheats. What it really stores is a sign, an exponent, and a mantissa.
sign * 2^exponent * mantissa
The mantissa is what determines accuracy, and there are only 24 bits of it in a float. So large numbers lose precision.
You can read about exactly how floats are represented and play around with the representation.
To solve this, either use a double, which has greater precision, or use an arbitrary-precision library such as GMP, which is accurate but slower.
Why do I get different results even though the arrays have the same elements
In floating-point math, 100000000.0 + 1.0 equals 100000000.0 and not 100000001.0, but 100000000.0 + 1024.0 does equal 100001024.0. Given the value 100000000.0, the value 1.0 is too small to show up in the available bits used to represent 100000000.0.
So when you put 100000000.0 first, all the later + 1.0 operations have no effect.
When you put 100000000.0 last, though, all the previous 1.0 + 1.0 + ... additions add up exactly to 1023.0, and a value that large is "big enough" to survive the final addition: 1023.0 + 100000000.0 rounds to 100001024.0 given the available precision of floating-point math.
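If the sum must stay in float, one standard remedy (not mentioned in the answers above) is Kahan's compensated summation, which carries along the low-order bits that each addition would otherwise discard:

```c
/* Kahan (compensated) summation: s accumulates the sum, c accumulates
   the rounding error of each addition so it can be re-applied. */
float kahan_sum(const float *t, int size)
{
    float s = 0.0f, c = 0.0f;
    for (int i = 0; i < size; i++) {
        float y = t[i] - c; /* corrected next term */
        float u = s + y;    /* low-order bits of y are lost here... */
        c = (u - s) - y;    /* ...and recovered here */
        s = u;
    }
    return s;
}
```

With the arrays from the question, both orderings then produce the same result, 100001024.000000, the float closest to the exact sum 100001023. (Note that aggressive compiler flags like -ffast-math can optimize the compensation away.)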
I have written the following function as a replacement for sprintf
#include <reg52.h>

char output_arr[7];

void print_to_arr(float val)
{
    int integer, fractional, temp = 0, i, ni = 2;
    float fract;
    integer = (int)val;
    fract = val - (float)integer;
    fract = fract * 1000;
    fractional = (int)fract;
    for (i = 0; i <= 2; i++)
    {
        temp = integer % 10;
        integer = integer / 10;
        output_arr[ni - i] = temp + 48;
    }
    temp = 0;
    output_arr[3] = '.';
    for (i = 0; i <= 2; i++)
    {
        temp = fractional % 10;
        fractional = fractional / 10;
        output_arr[(ni + 4) - i] = temp + 48;
    }
    temp = 0;
}
and for this the code size is
Program Size: data=36.0 xdata=0 code=1112
Can anyone please help me with how to reduce the code size to the minimal possible value?
Performing floating-point operations on a device with no floating-point hardware, and especially an 8-bit device, will itself introduce a significant amount of code (every operation must be supported by a function rather than a single instruction). Most stdio implementations for constrained embedded targets have an option to omit floating-point support for exactly that reason.
Even without floating point, integer modulo (%) and divide (/) are often expensive on an 8-bit target. In this case the 8051 only has an 8-bit divide instruction, so for larger types divide is synthesized from multiple instructions.
If you really must have floating point support, then the following may have a small impact on size. The changes include minimisation of floating point operations, eradication of unnecessary variables, the use of initialisation rather than assignment of the variable initial values, and removal of the loop invariant ni + 4 operation. These may have no impact, or they may make it easier for the optimiser to generate smaller code - they may make things worse; you'll have to experiment.
void print_to_arr(float val)
{
    int integer = (int)val ;
    int fractional = (int)(val * 1000.0f) - (integer * 1000) ;
    int i ;

    for( i = 0; i < 3; i++, integer /= 10 )
    {
        output_arr[2 - i] = integer % 10 + '0' ;
    }

    output_arr[3] = '.';

    for( i = 0; i < 3; i++, fractional /= 10 )
    {
        output_arr[6 - i] = (fractional % 10) + '0' ;
    }
}
Note that the implementation does not round the final fractional digit, and fails completely for negative values. Fixing these issues requires yet more code.
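If the caller can supply the value already scaled by 1000 (i.e. in fixed point), the float arithmetic disappears entirely. A sketch under that assumption (the scaled-input interface is my invention, not the original API):

```c
/* Same "ddd.ddd" output format, but from a value given in thousandths,
   so no floating-point support needs to be linked in at all. */
char output_arr[7];

void print_milli_to_arr(long val_milli)
{
    int integer = (int)(val_milli / 1000);    /* whole part */
    int fractional = (int)(val_milli % 1000); /* thousandths */
    int i;
    for (i = 0; i < 3; i++, integer /= 10)
        output_arr[2 - i] = (char)('0' + integer % 10);
    output_arr[3] = '.';
    for (i = 0; i < 3; i++, fractional /= 10)
        output_arr[6 - i] = (char)('0' + fractional % 10);
}
```

For example, print_milli_to_arr(123456L) fills output_arr with "123.456" (no terminating NUL, as in the original).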
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
Here's a question from pages 60-61 of A Book on C (3rd Edition) by Al Kelley / Ira Pohl:
The following code fragment shows two different methods of computing a running average:
int i;
double x;
double avg = 0.0, sum = 0.0;
double navg;

for (i = 1; scanf("%lf", &x) == 1; ++i)
{
    avg += (x - avg) / i;
    sum += x;
    navg = sum / i;
}
The original question as written in the book is: if you input some "ordinary" numbers, avg and navg seem to be identical. Demonstrate experimentally that avg is better, even when sum does not overflow.
My question as a beginning programmer is:
What is the criteria for a "better" algorithm? I believe precision and the amount of running time are two key factors, but are there any other things that make an algorithm "better"?
In terms of precision and running time, how can I demonstrate experimentally that when overflow is ruled out, avg is still a better method than navg is? Should I use numbers that are "out of the ordinary", e.g., of largely different magnitudes?
The two algorithms don't have much difference in running time;
compared with navg, avg is better in precision.

(1) Running time:
The following two pieces of code demonstrate that, at a magnitude of 1,000,000 samples, the two algorithms do not have much difference in running time.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int i;
    double x, sum = 0, avg = 0;
    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        sum += x;
    }
    avg = sum / i;
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock()/CLOCKS_PER_SEC);
}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    double sum = 0, avg = 0;
    double x;
    int i;
    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        avg += (x - avg) / (i + 1);
    }
    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock()/CLOCKS_PER_SEC);
}
(2) Precision:
The code below demonstrates that, adding up the differences between avg and every x, the result is 0; whereas for navg the result is -2.44718e-005, which shows that avg is better in precision.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    static double data[1000000];
    double sum, avg, check_value;
    int i;
    int n = sizeof(data)/sizeof(data[0]);

    for (i = 0; i < n; ++i)
    {
        data[i] = 1.3;
    }

    avg = 0;
    for (i = 0; i < n; ++i)
    {
        avg += (data[i] - avg) / (i + 1);
    }

    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\navg += (x[i] - avg) / i:\tavg = %g\t check_value = %g", avg, check_value);

    sum = 0;
    for (i = 0; i < n; ++i)
    {
        sum += data[i];
    }
    avg = sum / n;

    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\n avg = sum / N: \tavg = %g\t check_value = %g", avg, check_value);

    getchar();
}
I think this is a valid question, although not too well phrased. A problem is that even the question referred to by furins was not phrased well and was closed before receiving a good answer.
The question itself is interesting nonetheless, especially given the closed one, which shows that it was even included in a book, so it could lead more people in one direction or another.
I think neither algorithm is particularly better. In the naive average it looks like we will lose precision, or that when averaging numbers differing by several orders of magnitude we even lose numbers entirely, but the same may probably be discovered with the other algorithm as well, maybe just with different sets of input data.
So, especially given that it comes from an existing book, I think this is a perfectly valid question seeking a decent answer.
I will attempt to cover what I think about the two algorithms with an example. So imagine you have 4 numbers of roughly the same magnitude and you want to average them.
The naive method sums them up first, one after another. After summing the first two you have obviously lost one bit of precision at the low end (since you now probably have a larger exponent). When you add the last number, 2 bits have been lost (those bits are now used for representing the high part of the sum). But then you divide by four, which in this case essentially just subtracts 2 from your exponent.
What did we lose during this process? It's easier to answer if we first ask what would happen had all the numbers been truncated by 2 bits beforehand. In that case, obviously, the last two bits of the fraction of the resulting average would be zero, and up to an additional 2 bits' worth of error may be introduced (if all the truncated bits happened to be ones in the original numbers, compared to if they were zeros). So essentially, if the sources were single-precision floats with 23 bits of fraction, the resulting avg would now have about 19 bits' worth of precision.
The actual result from the naive method is better though as the first numbers summed didn't lose that much precision.
In the differential method, in each iteration the appropriately weighted difference is added to the running average. If the numbers are of the same magnitude, this difference will most likely be about one order of magnitude below them. It is then divided by the current count; nothing is lost in that operation. But the resulting difference for the last number (with i = 4 in this example) may be about 3 orders of magnitude below the source numbers, and we add this to a running average which is around the same magnitude as the original numbers.
So with the differential method in this example, adding the last number seems to have lost about 3 bits of precision, and over all 4 numbers it may even look like we are down to 5 essentially lost bits of precision - maybe even worse than the naive method?
The differential method is harder to follow; maybe I made some mistake in my assumptions. But I think it's clear: it just doesn't seem valid to regard one or the other as performing better, or if one does, it may depend on the layout and magnitude differences of the data.
Note that you're dividing by zero in your for() loop even if you do ++i