I am trying to understand the different aspects of memory allocation in C. In the example below, I calculate the mean of an array. I have defined one function whose return value is an int (4), and a second whose return value is a double (4.57).
#include <stdio.h>
#include <stdlib.h>

int getMean(int arr[], int size);
double getMean2(int arr[], int size);

int main()
{
    int mean1;
    double mean2;
    int array[7] = {1,3,5,7,5,4,7};

    mean1 = getMean(array, sizeof(array)/sizeof(array[0]));
    printf(" Mean 1 = %d", mean1);

    mean2 = getMean2(array, sizeof(array)/sizeof(array[0]));
    printf("\n Mean 2 = %.2f", mean2);
    return 0;
}

int getMean(int arr[], int size) {
    int i;
    printf("\n");
    int sum = 0;
    for (i = 0; i < size; ++i)
    {
        sum += arr[i];
    }
    return sum/size;
}

double getMean2(int arr[], int size) {
    int i;
    printf("\n");
    double sum = 0;
    for (i = 0; i < size; ++i)
    {
        sum += arr[i];
    }
    return sum/size;
}
In the case of the mean function returning an int, is the same amount of memory (RAM) used as in the function returning the double? Or can it perform the calculation using less RAM?
When the int function is performing the calculation, does it still have to store the number as a double, before returning the int?
This question seems to assume that the result of the following line:
return sum/size;
is always a floating point. But this assumption is wrong. See for example
C11 (draft N1570), §6.5.6 p6:
When integers are divided, the result of the / operator is the algebraic quotient with any
fractional part discarded.
So, if both operands have an integer type, you simply get an integer division; the result has an integer type, in your example int (with a value whose fractional part is discarded).
In your other function, one operand is already a double. Have a look at
C11 (draft N1570) §6.3.1.8 p1:
[...]
Otherwise, if the corresponding real type of either operand is double, the other
operand is converted, without change of type domain, to a type whose
corresponding real type is double.
So in this case your size is implicitly converted to double, and / therefore performs a floating-point division; the result is a double.
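As an illustration of both rules, here is a minimal sketch using the sum (32) and element count (7) of the example array:

#include <stdio.h>

int main(void)
{
    int sum = 32, size = 7; /* totals from the example array */

    int    q1 = sum / size;          /* integer division: fraction discarded */
    double q2 = (double)sum / size;  /* size is converted to double */

    printf("%d\n", q1);   /* prints 4 */
    printf("%.2f\n", q2); /* prints 4.57 */
    return 0;
}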
The answer depends on:
1. The size of an integer on your platform (compiler-specific).
2. The way your compiler and processor support floating-point arithmetic. Floating-point arithmetic may be emulated by the compiler if your processor has no FPU.
Consider the points below.
Assuming that on your platform a double needs more bytes than an int: stack usage will be higher in getMean2.
Assuming your processor has no FPU: the text (code) segment will consume more memory for getMean2.
return sum/size;
will be an integer division in getMean and a floating-point division in getMean2.
Note:
As you neither allocate memory dynamically nor use global variables, your data segment and heap are unaffected.
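To see the sizes on your own platform (they are implementation-defined), a quick sketch:

#include <stdio.h>

int main(void)
{
    /* Implementation-defined: commonly 4 bytes for int and
       8 bytes for double on today's mainstream platforms. */
    printf("sizeof(int)    = %zu\n", sizeof(int));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}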
I'm implementing my own decrease-and-conquer method for computing a^n.
Here's the program:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
double dncpow(int a, int n)
{
double p = 1.0;
if(n != 0)
{
p = dncpow(a, n / 2);
p = p * p;
if(n % 2)
{
p = p * (double)a;
}
}
return p;
}
int main()
{
int a;
int n;
int a_upper = 10;
int n_upper = 50;
int times = 5;
time_t t;
srand(time(&t));
for(int i = 0; i < times; ++i)
{
a = rand() % a_upper;
n = rand() % n_upper;
printf("a = %d, n = %d\n", a, n);
printf("pow = %.0f\ndnc = %.0f\n\n", pow(a, n), dncpow(a, n));
}
return 0;
}
My code works for small values of a and n, but a mismatch in the output of pow() and dncpow() is observed for inputs such as:
a = 7, n = 39
pow = 909543680129861204865300750663680
dnc = 909543680129861348980488826519552
I'm pretty sure that the algorithm is correct, but dncpow() is giving me wrong answers.
Can someone please help me rectify this? Thanks in advance!
Simple as that: these numbers are too large for your computer to represent exactly in a single variable. With a floating-point type, an exponent is stored separately, so it is still possible to represent a number near the real result by dropping the lowest bits of the mantissa.
Regarding this comment:
I'm getting similar outputs upon replacing 'double' with 'long long'. The latter is supposed to be stored exactly, isn't it?
If you call a function taking double, it won't magically operate on long long instead. Your value is simply converted to double and you'll just get the same result.
Even with a function handling long long (which has 64 bits on today's typical platforms), you can't deal with such large numbers: 64 bits aren't enough to store them. With an unsigned integer type, the values just "wrap around" on overflow. With a signed integer type, the behavior on overflow is undefined (though a wrap-around is still somewhat likely in practice). So you'll get some number that has absolutely nothing to do with your expected result. That's arguably worse than the floating-point result, which is merely imprecise.
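As a small sketch of the well-defined unsigned wrap-around (there is no portable demo of the signed case, since it is undefined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Unsigned overflow is well-defined: arithmetic is modulo 2^N. */
    unsigned long long x = ULLONG_MAX;
    printf("%llu\n", x + 1); /* wraps around and prints 0 */
    return 0;
}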
For exact calculations on large numbers, the only way is to store them in an array (typically of unsigned integers such as uintmax_t) and implement all the arithmetic yourself. That's a nice exercise, and a lot of work, especially when performance is of interest (the "naive" arithmetic algorithms are typically very inefficient).
For a real-life program, you won't want to reinvent the wheel here, as there are libraries for handling large numbers. Arguably the best known is GMP (libgmp). Read its manual and use it.
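As an illustration (assuming libgmp is installed; link with -lgmp), a minimal sketch computing 7^39 from the question exactly:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t result;
    mpz_init(result);

    /* Arbitrary-precision integer power: no rounding, exact digits. */
    mpz_ui_pow_ui(result, 7, 39);
    gmp_printf("7^39 = %Zd\n", result);

    mpz_clear(result);
    return 0;
}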
I was working on Exercise 2-1 of K&R. The goal is to calculate the range of different variable types; below is my function to calculate the maximum value a short int can hold:
short int max_short(void) {
    short int i = 1, j = 0, k = 0;
    while (i > k) {
        k = i;
        if (((short int)2 * i) > (short int)0)
            i *= 2;
        else {
            j = i;
            while (i + j <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}
My problem is that the value returned by this function is -32768, which is obviously wrong since I'm expecting a positive value. I can't figure out where the problem is; I used the same function (with changes to the variable types) to calculate the maximum value an int can hold, and it worked...
I thought the problem could be caused by the comparisons inside the if and while statements, hence the typecasts, but that didn't help...
Any ideas what is causing this? Thanks in advance!
EDIT: Thanks to Antti Haapala for his explanations: overflow into the sign bit results in undefined behavior, NOT in negative values.
You can't use calculations like this to deduce the range of signed integers, because signed integer overflow has undefined behaviour, and a narrowing conversion at best yields an implementation-defined value (or raises a signal). The proper solution is to just use SHRT_MAX, INT_MAX, ... from <limits.h>. Deducing the maximum value of a signed integer via arithmetic has been a trick question in standardized C ever since the first standard was published in 1989.
Note that the original edition of K&R predates the standardization of C by 11 years, and even the 2nd edition (the "ANSI C" version) predates the finalized standard and differs from it somewhat; they were written for a language that was almost, but not quite, entirely unlike the C of today.
You can do it easily for unsigned integers though:
unsigned int i = -1;
// i now holds the maximum value of `unsigned int`.
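As a complete program, that trick looks like this (a minimal sketch):

#include <stdio.h>

int main(void)
{
    unsigned int i = -1; /* -1 converted to unsigned yields UINT_MAX */
    printf("%u\n", i);
    return 0;
}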
By definition, you cannot calculate the maximum value of a type in C using variables of that very same type; it simply doesn't make any sense. The type will overflow when it goes "over the top". In the case of signed integer overflow the behavior is undefined, meaning you will get a major bug if you attempt it.
The correct way to do this is to simply check SHRT_MAX from limits.h.
An alternative, somewhat more questionable way would be to create the maximum of an unsigned short and then divide that by 2. We can create the maximum by taking the bitwise inversion of the value 0.
#include <stdio.h>
#include <limits.h>

int main()
{
    printf("%hd\n", SHRT_MAX); // best way

    unsigned short ushort_max = ~0u;
    short short_max = ushort_max / 2;
    printf("%hd\n", short_max);

    return 0;
}
One note about your code:
Casts such as ((short int)2*i) > (short int)0 are completely superfluous. Most binary operators in C, such as * and >, apply something called "the usual arithmetic conversions", a way of implicitly converting and balancing the types in an expression. These implicit conversion rules silently make both operands type int, despite your casts.
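A small sketch of the usual arithmetic conversions at work; both short operands are promoted to int before the multiplication:

#include <stdio.h>

int main(void)
{
    short a = 30000, b = 2;

    /* a and b are promoted to int before *, so on a platform with
       32-bit int the product 60000 is computed without wrapping;
       it never exists as a short. */
    printf("%d\n", a * b);           /* prints 60000 */
    printf("%zu\n", sizeof(a * b));  /* prints sizeof(int) */
    return 0;
}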
You forgot to cast to short int during the comparison.
OK, here I assume that the computer handles integer overflow by wrapping around into negative integers, which I believe is what you assumed when writing this program.
Code that outputs 32767:
#include <stdlib.h>
#include <stdio.h>

short int max_short(void)
{
    short int i = 1, j = 0, k = 0;
    while (i > k)
    {
        k = i;
        if (((short int)(2 * i)) > (short int)0)
            i *= 2;
        else
        {
            j = i;
            while ((short int)(i + j) <= (short int)0)
                j /= 2;
            i += j;
        }
    }
    return i;
}

int main() {
    printf("%d", max_short());
    while (1); /* keep the console window open */
}
I added two casts.
I stumbled on an issue while implementing the following algorithm in C:
int getNumberOfAllFactors(int number) {
    int counter = 0;
    double sqrt_num = sqrt(number);
    for (int i = 1; i <= sqrt_num; i++) {
        if (number % i == 0) {
            counter = counter + 2;
        }
    }
    if (number == sqrt_num * sqrt_num)
        counter--;
    return counter;
}
The reason for the second condition is to correct for perfect squares (e.g. 36 = 6 * 6). However, it does not avoid false positives like this one:
sqrt(91) = 9.539392014169456
9.539392014169456 * 9.539392014169456 = 91.0
So my questions are: how can I avoid this, and what is the best way in C to figure out whether a double has any digits after the decimal point? Should I cast square-root values from double to int?
In your case, you could test it like this:
if (sqrt_num == (int)sqrt_num)
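Building on that test, here is a sketch (my own variant, not the only fix) that derives an exact integer square root first, so the perfect-square check compares integers; the two while loops guard against sqrt() rounding at the boundary (assumes number > 0):

#include <math.h>
#include <stdio.h>

int getNumberOfAllFactors(int number) {
    int counter = 0;
    int root = (int)sqrt((double)number);

    /* Adjust for possible floating-point rounding at the edges. */
    while (root > 0 && (long long)root * root > number) root--;
    while ((long long)(root + 1) * (root + 1) <= number) root++;

    for (int i = 1; i <= root; i++) {
        if (number % i == 0)
            counter += 2;
    }
    if ((long long)root * root == number) /* exact perfect-square test */
        counter--;
    return counter;
}

int main(void) {
    printf("%d\n", getNumberOfAllFactors(91)); /* 4 (1, 7, 13, 91) */
    printf("%d\n", getNumberOfAllFactors(36)); /* 9 (perfect square) */
    return 0;
}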
You should probably use the modf() family of functions:
#include <math.h>
double modf(double value, double *iptr);
The modf functions break the argument value into integral and fractional parts, each of
which has the same type and sign as the argument. They store the integral part (in
floating-point format) in the object pointed to by iptr.
This is more reliable than trying to use direct conversions to int because an int is typically a 32-bit number and a double can usually store far larger integer values (up to 53 bits worth) so you can run into errors unnecessarily. If you decide you must use a conversion to int and are working with double values, at least use long long for the conversion rather than int.
(The other members of the family are modff() which handles float and modfl() which handles long double.)
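For instance, a minimal sketch applying modf() to the sqrt value from the question:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double value = sqrt(91); /* 9.539..., not an integer */
    double ipart;
    double fpart = modf(value, &ipart);

    if (fpart == 0.0)
        printf("%f is an integer\n", value);
    else
        printf("%f has fractional part %f\n", value, fpart);
    return 0;
}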
I was trying out a few examples of the dos and don'ts of typecasting, and I could not understand why the following code snippets fail to produce the correct result.
/* int to float */
#include <stdio.h>

int main(){
    int i = 37;
    float f = *(float*)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include <stdio.h>

int main(){
    float f = 7.0;
    short s = *(float*)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include <stdio.h>

int main(){
    double d = 3.14;
    char ch = *(char*)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include <stdio.h>

int main(){
    short s = 45;
    double d = *(double*)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the cast from float to short give the correct result, while all the other conversions give wrong results when the type is cast explicitly?
I can't clearly understand why the typecast (float*) is used instead of (float):
int i = 10;
float f = (float) i; // gives the correct op as : 10.000
But,
int i = 10;
float f = *(float*)&i; // gives a 0.0000
What is the difference between the above two type casts?
Why can't we use:
float f = (float**)&i;
float f = *(float*)&i;
In this example:
char ch = *(char*)&d;
You are not casting from a double to a char. You are casting from a double* to a char*; that is, you are casting from a pointer-to-double to a pointer-to-char.
C will convert floating-point types to integer types when casting the values, but since you are casting pointers to those values instead, no conversion is done. You get garbage because floating-point numbers are stored very differently from integers.
Read about how floating-point numbers are represented on your system; it is not the way you're expecting. The cast through (float *) in your first snippet makes the 32 bits of the int be read as if they were a float. The value 37 has set bits only in the low-order part of the word, so the exponent field of the resulting float is zero, and you get a tiny subnormal number that printf shows as 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of the conversion rules of C.
So it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that of the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an int.
Now, the main point of this discussion is that the bit representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
Floating-point numbers, however, have a different representation, consisting of a mantissa and an exponent.
So the bits of an object have one meaning when interpreted as an integer, but the same bits have another, very different meaning when interpreted as a float.
The better question is: why does it EVER work? You see, when you do
typedef int T;    // replace with whatever
typedef double J; // replace with whatever
T s = 45;
J d = *(J*)(&s);
you are basically telling the compiler: take the T* address of s, reinterpret what it points to as a J, and then read that value. No casting of the value (no changing of the bytes) actually happens. Sometimes, by luck, the result is the same (an all-zero bit pattern, for example, reads as 0 both ways), but often you get garbage; worse, if the sizes are not the same (like reading a double through the address of a char-sized object), you can read unallocated data (and sometimes even corrupt memory!).
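As an aside, if the goal is to inspect the bit pattern without the undefined behaviour of dereferencing a cast pointer, a common alternative is memcpy (a sketch, assuming int and float have the same size):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 37;

    /* Value conversion: produces the float closest in value to 37. */
    float converted = (float)i;

    /* Bit reinterpretation: copy the raw bytes instead of dereferencing
       a cast pointer; assumes sizeof(float) == sizeof(int). */
    float reinterpreted;
    memcpy(&reinterpreted, &i, sizeof reinterpreted);

    printf("converted     = %f\n", converted);     /* 37.000000 */
    printf("reinterpreted = %g\n", reinterpreted); /* tiny subnormal */
    return 0;
}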
So I have a program in C. It runs, but I don't understand how the output is generated.
Here is the program:
#include <stdio.h>

int c;

void main() {
    int a=10, b=20, j;
    c=30;
    int *p[3];
    p[0]=&a;
    p[1]=&b;
    p[2]=&c;
    j=p[0]-p[2];
    printf("\nValue of p[0] = %u\nValue of p[2] = %u\nValue of j = %d\n\n", p[0], p[2], j);
}
And here is the output:
Value of p[0] = 3213675396
Value of p[2] = 134520860
Value of j = -303953190
Can anyone tell me how j got the value -303953190? It is supposed to be 3079154536.
You are doing 3213675396 - 134520860. If you want the values, use *p[0]. If your intention is to subtract the addresses (which doesn't make much sense, but still), the expected answer would be 3079154536. But since that number is too large for an int to hold, you get -303953190. Consider char, for simplicity, on the number line:
-128 -127 -126 -125 ... 0 1 2 ... 125 126 127
Now if you try to store 128, it is out of range, so you will get the value -128. If you try to assign the value 130, you will get -126. So when the right-hand limit is exceeded, the counting starts again from the left-hand side. This is just for explanation purposes; the real reason for this behaviour is that the value is stored in two's complement.
You should compute the difference of the pointed objects rather than of the pointers:
j=(*(p[0]))-(*(p[2]));
p is an array of pointers to int, so it stores pointers to int, not ints. Hence p[0] and p[2] are pointers; subtracting them gives an integer value that may overflow the int you store it in, which is where the problem lies. Also, addresses should be printed using %p, not %u or %d.
Dereference the value and you will get what you are looking for, like this:
j=p[0][0]-p[2][0];
or like this:
j=*(p[0])-*(p[2]);
Subtracting two pointers results in a signed integer.
From the C Standard, §6.5.6:
6.5.6 Additive operators
[...]
9 When two pointers are subtracted, both shall point to elements of the same array object,
or one past the last element of the array object; the result is the difference of the
subscripts of the two array elements. The size of the result is implementation-defined,
and its type (a signed integer type) is ptrdiff_t defined in the <stddef.h> header.
And assigning the pointer difference to an int overflows the int.
To get around this overflow instead of
int j;
use
ptrdiff_t j;
and then print the value using %td.
From the C Standard chapter 7.17:
7.17 Common definitions <stddef.h>
[...]
2 The types are
ptrdiff_t
which is the signed integer type of the result of subtracting two pointers;
Also (unrelated)
void main()
is wrong. It shall be
int main(void)
So the correct code would look like this:
#include <stdio.h>
#include <stddef.h> /* for ptrdiff_t */

int c;

int main(void)
{
    int a=10, b=20;
    ptrdiff_t j;
    int * p[3];

    c=30;

    p[0]=&a;
    p[1]=&b;
    p[2]=&c;

    j=p[0]-p[2];

    printf("\nValue of p[0] = %p\nValue of p[2] = %p\nValue of j = %td\n\n",
        (void *) p[0],
        (void *) p[2],
        j);

    return 0;
}
You're printing it as an integer instead of an unsigned. Use %u instead of %d.
Try this:
#include <stdio.h>
int c;
void main() {
int a=10,b=20;
unsigned j;
c=30;
int *p[3];
p[0]=&a;
p[1]=&b;
p[2]=&c;
j=(unsigned)p[0]-(unsigned)p[2];
printf("\nValue of p[0] = %u\nValue of p[2] = %u\nValue of j = %u\n\n",(unsigned)p[0],(unsigned)p[2],j);
}