Int Address in memory increasing by 4 - c

#include <stdio.h>

int main()
{
    int x = 1, *t;
    float y = 1.50, *u;
    char k = 'c', *v;
    t = &x;
    u = &y;
    v = &k;
    printf("%p %p %p", t, u, v);
    t++;
    u++;
    v++;
    printf(" %p %p %p", t, u, v);
    return 0;
}
Hi, I have made this code, but something unusual is happening. I am printing the addresses, and when I increment each pointer I expect the int pointer to advance by 2, the float pointer by 4, and the char pointer by 1. Instead I got the following:
0xbffa6ef8 0xbffa6ef0 0xbffa6eff 0xbffa6efc 0xbffa6ef4 0xbffa6f00
For float and char I think it's correct, but I don't know why the int pointer advances by 4.

You are assuming that sizeof(int) is 2 on your system/environment.
However, your assumption is not correct. The standard does not require the size of int to be 2 or any other specific value.
As your program shows, the size of int is 4 on your system/environment.
Lesson to learn:
Never rely on an assumed size for a type; always use sizeof to determine it. That is the reason the standard provides sizeof.
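For example, here is a minimal sketch you can compile yourself to check these sizes directly; on the asker's system it should print 4, 4 and 1:

#include <stdio.h>

int main(void)
{
    /* sizeof yields size_t, so print it with %zu (C99) */
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(float) = %zu\n", sizeof(float));
    printf("sizeof(char)  = %zu\n", sizeof(char)); /* always 1, by definition */
    return 0;
}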

The memory address increment for t is given by sizeof(int).
If sizeof(int) differs from the increment you expected, then your assumption about the size of int is wrong.
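A quick sketch that makes the relationship visible: the byte distance between t and t+1 is exactly sizeof(int), whatever that happens to be on your platform:

#include <stdio.h>

int main(void)
{
    int x = 1, *t = &x;
    /* Pointer arithmetic moves in units of the pointed-to type. */
    printf("step = %td, sizeof(int) = %zu\n",
           (char *)(t + 1) - (char *)t, sizeof(int));
    return 0;
}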

An int is usually 4 bytes; a short (or short int) is usually 2 bytes.

The actual size of integer types varies by implementation. The only guarantee is that the long long is not smaller than long, which is not smaller than int, which is not smaller than short.
or
sizeof(short int) <= sizeof(int) <= sizeof(long int)
But you can be sure that an int will be at least 16 bits in size.
Detailed information regarding the sizes of the basic C++ types can be found in the language reference.
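If your code really depends on particular widths, it is safer to assert them or use the fixed-width types; a C11 sketch:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Compile-time checks of the ordering guarantees quoted above (C11). */
_Static_assert(sizeof(short) <= sizeof(int), "short fits in int");
_Static_assert(sizeof(int) <= sizeof(long), "int fits in long");

int main(void)
{
    /* int32_t is exactly 32 bits wherever it is available. */
    printf("int32_t is %zu bytes\n", sizeof(int32_t));
    return 0;
}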


printf() prints the correct int value written to a pointer, but not the correct double value written to another. Why is that?

I tried declaring two variables, one of type int * and one of type double *, and assigned each its own address. Assigning through the dereferenced pointer and printing displays the correct value for the int, but prints 0.0 for the double. Why is that?
#include <stdio.h>

int main(void)
{
    int *x;
    x = (int *)&x;
    *x = 3;
    // print val of pointer
    printf("%d\n", x);

    double *y;
    y = (double *)&y;
    *y = 4.0;
    printf("%lf\n", y);
    return 0;
}
I get 4.0.
What you do is you re-interpret the memory allocated to store addresses (x and y) as int and double, respectively.
You do that twice: when you assign data values to the reinterpreted memory, and when you print a copy of it. The two cases are distinct.
Writing to memory through a pointer of an incompatible type is undefined behavior, and compilers like gcc are known to do funny things (trap, or ignore the code) in such cases. There are meandering discussions about that, including a famous rant by Linus Torvalds. It may or may not work; if it works, it probably does the expected thing. (For correct code you must use a union or perform a memcpy.)
One condition for it to work is that your data types don't need more space than the pointers. On a 32-bit architecture (and that may be a 32-bit compiler for a 64-bit Intel CPU), a double is longer than a 4-byte address (an IEEE 754 double has 8 bytes). *y = 4.0; writes beyond y's memory, overwriting other data on the stack. (Note that y points to itself, so that assigning to *y overwrites y's own memory.)
Passing a pointer value as a parameter to printf with a conversion specification of %d resp. %lf is undefined as well. (Actually it's already undefined if the conversion specification is %p and the pointer value is not cast to void *; but that's often ignored and irrelevant on common architectures.) printf will just interpret the memory on the stack (which is a copy of the parameters) as an int resp. as a double.
In order to understand what happens let's look at the memory layout on the stack of main. I have written a program detailing it; the source is below. On my 64 bit Windows the double value of 4.0 gets printed alright; the pointer variable y is large enough to hold the bytes of a double, and all 8 bytes are copied to printf's stack. But if the pointer size is only 4 bytes, only those 4 bytes will be copied to printf's stack, which are all 0, and the bytes beyond that stack will contain memory from earlier operations, or arbitrary values, for example 0 ;-), which printf will read in an attempt to decode a double.
Here is an inspection of the stack on a 64 bit architecture during the various steps. I have bracketed the pointer declarations with two sentinel variables declStart and declEnd, so that I could see where the memory is. I would assume that the program would run with minor changes on a 32 bit architecture as well. Try it and tell us what you see!
Update: It runs on ideone, which appears to have 4-byte addresses. The double version doesn't print 0.0 but some arbitrary value, likely because of stack garbage behind the 4 address bytes. Cf. https://ideone.com/TJAXli.
The program for the output above is here:
#include <stdio.h>

void dumpMem(void *start, int numBytes)
{
    printf("memory at %p:", start);
    char *p = start;
    while ((unsigned long)p % 8) { p--; numBytes++; } // align to 8 byte boundary
    for (int i = 0; i < numBytes; i++)
    {
        if (i % 8 == 0) printf("\nAddr %p:", (void *)(p + i));
        printf(" %02x", (unsigned int)(p[i] & 0xff));
    }
    putchar('\n');
}

int len; // static allocation, protect them from stack overwrites
char *from, *to;

int main(void)
{
    unsigned int declStart = 0xaaaaaaaa; // marker
    int *x = (int *)0xbbbbbbbbbbbbbbbb;
    double *y = (double *)0xcccccccccccccccc;
    unsigned int declEnd = 0xdddddddd; // marker

    printf("Addr. of x: %p,\n      of y: %p\n", (void *)&x, (void *)&y);

    // This is all UB because the pointers compared are not
    // pointing into the same object. But it should
    // work on standard architectures.
    // All calls to dumpMem() therefore are UB, too.
    // Thinking of it, I'd be hard-pressed to find
    // any defined behavior in this program.
    if (&declStart < &declEnd)
    {
        from = (char *)&declStart;
        to = (char *)&declEnd + sizeof(declEnd);
    }
    else
    {
        from = (char *)&declEnd;
        to = (char *)&declStart + sizeof(declStart);
    }
    len = to - from;
    printf("len is %d\n", len);

    printf("Memory after initializations:\n");
    dumpMem(from, len);

    x = (int *)&x;
    printf("\nMemory after assigning own address %p to x/*x: \n", (void *)&x);
    dumpMem(from, len);

    *x = 3;
    printf("\nMemory after assigning 3 to x/*x: \n");
    dumpMem(from, len);

    // print val of pointer
    printf("x as unsigned long: %lu\n", (unsigned long)x);

    y = (double *)&y;
    *y = 4.0;
    printf("\nMemory after assigning 4.0 to y/*y: \n");
    dumpMem(from, len);

    // Deliberately wrong conversion specifications, to show what printf makes of y:
    printf("y as float: %f\n", y);
    printf("y as double: %lf\n", y);
    printf("y as unsigned int: 0x%x\n", y);
    printf("y as unsigned long: 0x%lx\n", y);

    return 0;
}
Boy, that's the weirdest piece of code I have seen lately ...
Anyway if you really want to figure out what is going on throughout the code, the best way to do it would be to step through it with a debugger. Here's how it works on my machine:
(gdb) break main
Breakpoint 1 at 0x400535: file test.c, line 6.
(gdb) run
...
Breakpoint 1, main () at test.c:6
warning: Source file is more recent than executable.
6 x = (int *)&x;
(gdb) n
7 *x = 3;
(gdb) p x
$1 = (int *) 0x7fffffffdab0
(gdb) n
9 printf("%d\n", x);
(gdb) p x
$2 = (int *) 0x7fff00000003
(gdb) n
3
12 y = (double *)&y;
(gdb) n
13 *y = 4.0;
(gdb) p y
$3 = (double *) 0x7fffffffdab8
(gdb) n
14 printf("%lf\n", y);
(gdb) p y
$4 = (double *) 0x4010000000000000
(gdb) n
0.000000
15 return 0;
(gdb)
Basically what you're doing is messing with the pointer values by using the pointers themselves in the process. When doing *x = 3; you can see you wiped out the least significant 32 bits of x by writing 0x00000003 there instead. After that, when you do *y = 4.0; you overwrite the whole pointer value with the internal double representation of 4.0. Intuitively, the second printf should print 4.0, so I guess the issue lies within printf itself. If you do:
double test;
memcpy(&test, &y, sizeof(double)); /* needs #include <string.h> */
printf("%lf\n", test);
This will output 4.000000.
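Both answers point to the same well-defined alternative: copy the bytes with memcpy (or read them through a union) instead of writing through an incompatible pointer. A self-contained sketch, assuming 8-byte double and unsigned long long:

#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 4.0;

    /* memcpy copies the object representation into a differently typed object. */
    unsigned long long bits;
    memcpy(&bits, &d, sizeof bits);
    printf("4.0 as bits: 0x%llx\n", bits); /* 0x4010000000000000 under IEEE 754 */

    /* A union is the other sanctioned way in C to reinterpret bytes. */
    union { double d; unsigned long long u; } pun = { .d = 4.0 };
    printf("same via union: 0x%llx\n", pun.u);
    return 0;
}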

Why Size is different for different pointers

#include <stdio.h>

#define R 10
#define C 20

int main()
{
    int *p;
    int *p1[R];
    int *p2[R][C];
    printf("%d %d %d", sizeof(*p), sizeof(*p1), sizeof(*p2));
    getchar();
    return 0;
}
Why is the output 4 8 160? Why does the size of *p1 become 8 and not 4?
Consider the types.
sizeof(*p)  ==> sizeof(int)
sizeof(*p1) ==> sizeof(int *)
sizeof(*p2) ==> sizeof(int *[C]), i.e. an array of C pointers: 20 * sizeof(int *)
Note: Depending on your platform and compiler, you'll get different results.
Also, since sizeof produces a result of type size_t, you should use the %zu format specifier to print the result.
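A corrected version of the program above might look like this (a sketch; the values 4, 8 and 160 depend on the platform):

#include <stdio.h>

#define R 10
#define C 20

int main(void)
{
    int *p;
    int *p1[R];
    int *p2[R][C];
    /* The operand of sizeof is not evaluated, so the uninitialized p is harmless. */
    printf("%zu %zu %zu\n", sizeof(*p), sizeof(*p1), sizeof(*p2));
    return 0;
}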

Typecasting from int,float,char,double

I was trying out a few examples of the do's and don'ts of typecasting. I could not understand why the following code snippets fail to output the expected result.
/* int to float */
#include <stdio.h>

int main(){
    int i = 37;
    float f = *(float *)&i;
    printf("\n %f \n", f);
    return 0;
}
This prints 0.000000
/* float to short */
#include <stdio.h>

int main(){
    float f = 7.0;
    short s = *(float *)&f;
    printf("\n s: %d \n", s);
    return 0;
}
This prints 7
/* From double to char */
#include <stdio.h>

int main(){
    double d = 3.14;
    char ch = *(char *)&d;
    printf("\n ch : %c \n", ch);
    return 0;
}
This prints garbage
/* From short to double */
#include <stdio.h>

int main(){
    short s = 45;
    double d = *(double *)&s;
    printf("\n d : %f \n", d);
    return 0;
}
This prints 0.000000
Why does the float-to-short snippet give the correct result, while all the other conversions give wrong results?
I also couldn't clearly understand why the (float *) cast is used instead of a plain float cast:
int i = 10;
float f = (float)i; // gives the correct output: 10.000000
But:
int i = 10;
float f = *(float *)&i; // gives 0.000000
What is the difference between the above two casts?
Why can't we use:
float f = (float **)&i;
float f = *(float *)&i;
In this example:
char ch = *(char*)&d;
You are not casting from a double to a char. You are casting from a double * to a char *; that is, from a pointer-to-double to a pointer-to-char.
C will convert floating point types to integer types when casting the values, but since you are casting pointers to those values instead, there is no conversion done. You get garbage because floating point numbers are stored very differently from fixed point numbers.
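To make the distinction concrete, here is a small sketch contrasting a value conversion with a pointer reinterpretation (the exact punned byte depends on your machine's endianness and double format):

#include <stdio.h>

int main(void)
{
    double d = 3.14;

    char converted = (char)d;    /* value conversion: truncates 3.14 to 3 */
    char punned = *(char *)&d;   /* reads the first byte of d's representation */

    printf("converted: %d\n", converted); /* prints 3 */
    printf("punned:    %d\n", punned);    /* some byte of the IEEE 754 encoding */
    return 0;
}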
Read about how floating point numbers are represented in memory; it is not the way you are expecting. In your first snippet, *(float *)&i does not convert anything: it reads the bit pattern of the int 37 as if it were an IEEE 754 float. The set bits of a small int all land in the low end of the float's mantissa, which encodes a subnormal number extremely close to zero, so %f prints 0.000000.
If you need to convert an int to a float, the conversion is straightforward, because of the conversion rules of C.
So, it is enough to write:
int i = 37;
float f = i;
This gives the result f == 37.0.
However, in the cast (float *)(&i), the result is an object of type "pointer to float".
In this case, the address held by the "pointer to int" &i is the same as that of the "pointer to float" (float *)(&i). However, the object pointed to by this last pointer is a float whose bits are the same as those of the object i, which is an int.
Now, the main point in this discussion is that the bit-representation of objects in memory is very different for integers and for floats.
A positive integer is represented in explicit form, as its binary mathematical expression dictates.
However, floating point numbers have a different representation, consisting of a mantissa and an exponent.
So, the bits of an object, when interpreted as an integer, have one meaning, but the same bits, interpreted as a float, have another very different meaning.
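The first snippet can be made visible with a sketch that prints the same bit pattern both ways (memcpy keeps it well defined; this assumes 4-byte int and float):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 37;   /* bit pattern 0x00000025 */
    float f;
    memcpy(&f, &i, sizeof f);
    /* Interpreted as an IEEE 754 float, 0x00000025 is a subnormal number,
       roughly 5.2e-44, which %f rounds to 0.000000. */
    printf("%d reinterpreted as float: %f (%e)\n", i, f, f);
    return 0;
}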
The better question is: why does it EVER work? You see, when you do

typedef int T;    // replace with whatever
typedef double J; // replace with whatever
T s = 45;
J d = *(J *)(&s);

you are basically telling the compiler: take the T* address of s, reinterpret what it points to as a J, and then read that value. No conversion of the value (no change to the bytes) actually happens. Sometimes, by luck, the result is the same (low-value floats have an exponent field of 0, so the integer interpretation may coincide), but often it will be garbage; worse, if the sizes differ (as when reading a double through the address of a short or char) you can read unallocated data, and writing that way can corrupt the stack or heap.

Subtracting pointers: how is variable j getting this value?

So I have a program in C. It runs, but I don't understand how the output is generated.
Here is the program :
#include <stdio.h>

int c;

void main() {
    int a = 10, b = 20, j;
    c = 30;
    int *p[3];
    p[0] = &a;
    p[1] = &b;
    p[2] = &c;
    j = p[0] - p[2];
    printf("\nValue of p[0] = %u\nValue of p[2] = %u\nValue of j = %d\n\n", p[0], p[2], j);
}
and Here is the output :
Value of p[0] = 3213675396
Value of p[2] = 134520860
Value of j = -303953190
Can anyone tell me how j got this value, i.e. -303953190? I expected it to be 3079154536.
You are doing 3213675396 - 134520860. If you want the values, use *p[0]. If your intention is to subtract the addresses (which doesn't make much sense, but still), note two things. First, pointer subtraction yields the difference in elements, not bytes: the byte difference is divided by sizeof(int). Second, the byte difference 3079154536 is too large to hold in a signed 32-bit value, so it wraps around to -1215812760; divided by 4, that gives exactly the -303953190 you see. Consider char for simplicity on the number line:
-128 -127 -126 -125 ... 0 1 2 ... 125 126 127
Now if you try to store 128, it is out of range, so you get the value -128. If you try to assign 130, you get -126. When the right-hand limit is exceeded, the counting restarts from the left-hand side. This is just for explanation purposes; the real reason for this behavior is that the value is stored in two's complement. More info can be found here.
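A sketch of that wraparound (converting an out-of-range value to a signed type is implementation-defined, but two's-complement machines typically behave as described):

#include <stdio.h>

int main(void)
{
    signed char c1 = 128; /* out of range: typically wraps to -128 */
    signed char c2 = 130; /* typically wraps to -126 */
    printf("%d %d\n", c1, c2);
    return 0;
}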
You should compute the difference of the pointed objects rather than of the pointers:
j=(*(p[0]))-(*(p[2]));
p is an array of pointers to int, so it stores pointers to int, not ints. Hence p[0] and p[2] are pointers; subtracting them gives an integer that may overflow the int you are trying to store it in, which is where the problem lies. Also, addresses should be printed with %p, not %d or %u.
Dereference the value and you will get what you are looking for, like this:
j=p[0][0]-p[2][0];
or like this:
j=*(p[0])-*(p[2]);
Subtracting two pointers results in a signed integer.
From the C Standard, section 6.5.6:
6.5.6 Additive operators
[...]
9 When two pointers are subtracted, both shall point to elements of the same array object,
or one past the last element of the array object; the result is the difference of the
subscripts of the two array elements. The size of the result is implementation-defined,
and its type (a signed integer type) is ptrdiff_t defined in the <stddef.h> header.
And assigning the pointer difference to an int overflows the int.
To get around this overflow instead of
int j;
use
ptrdiff_t j;
and then print the value using %td.
From the C Standard chapter 7.17:
7.17 Common definitions <stddef.h>
[...]
2 The types are
ptrdiff_t
which is the signed integer type of the result of subtracting two pointers;
Also (unrelated)
void main()
is wrong. It shall be
int main(void)
So the correct code would look like this:
#include <stdio.h>
#include <stddef.h> /* for ptrdiff_t */

int c;

int main(void)
{
    int a = 10, b = 20;
    ptrdiff_t j;
    int *p[3];
    c = 30;
    p[0] = &a;
    p[1] = &b;
    p[2] = &c;
    j = p[0] - p[2];
    printf("\nValue of p[0] = %p\nValue of p[2] = %p\nValue of j = %td\n\n",
           (void *)p[0],
           (void *)p[2],
           j);
    return 0;
}
You're printing it as an integer instead of an unsigned. Use %u instead of %d.
Try this:
#include <stdio.h>

int c;

void main() {
    int a = 10, b = 20;
    unsigned j;
    c = 30;
    int *p[3];
    p[0] = &a;
    p[1] = &b;
    p[2] = &c;
    j = (unsigned)p[0] - (unsigned)p[2];
    printf("\nValue of p[0] = %u\nValue of p[2] = %u\nValue of j = %u\n\n",
           (unsigned)p[0], (unsigned)p[2], j);
}

In C, the sizeof operator returns 8 bytes when passed 2.5 billion but 4 bytes when passed 1.25 billion * 2

I do not understand why the sizeof operator is producing the following results:
sizeof( 2500000000 ) // => 8 (8 bytes).
... it returns 8, and when I do the following:
sizeof( 1250000000 * 2 ) // => 4 (4 bytes).
... it returns 4, rather than 8 (which is what I expected). Can someone clarify how sizeof determines the size of an expression (or data type) and why in my specific case this is occurring?
My best guess is that the sizeof operator is a compile-time operator.
Bounty Question: Is there a run time operator that can evaluate these expressions and produce my expected output (without casting)?
2500000000 doesn't fit in an int, so the compiler correctly interprets it as a long (or long long, or a type where it fits). 1250000000 does, and so does 2. The parameter to sizeof isn't evaluated, so the compiler can't possibly know that the multiplication doesn't fit in an int, and so returns the size of an int.
Also, even if the parameter was evaluated, you'd likely get an overflow (and undefined behavior), but probably still resulting in 4.
Here:
#include <iostream>

int main()
{
    long long x = 1250000000 * 2;
    std::cout << x;
}
can you guess the output? If you think it's 2500000000, you'd be wrong. The type of the expression 1250000000 * 2 is int, because the operands are int and int and multiplication isn't automagically promoted to a larger data type if it doesn't fit.
http://ideone.com/4Adf97
So here, gcc says it's -1794967296, but it's undefined behavior, so that could be any number. This number does fit into an int.
In addition, if you cast one of the operands to the expected type (much like you cast integers when dividing if you're looking for a non-integer result), you'll see this working:
#include <iostream>

int main()
{
    long long x = (long long)1250000000 * 2;
    std::cout << x;
}
yields the correct 2500000000.
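The claim that the operand of sizeof is not evaluated can be checked directly; in this sketch the increment never happens:

#include <stdio.h>

int main(void)
{
    int i = 1250000000;
    /* The operand of sizeof is unevaluated (except for VLAs), so i++ has
       no effect and the result is just the size of the expression's type. */
    printf("sizeof(i++ * 2) = %zu\n", sizeof(i++ * 2));
    printf("i is still %d\n", i); /* prints 1250000000 */
    return 0;
}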
[Edit: I did not notice, initially, that this was posted as both C and C++. I'm answering only with respect to C.]
Answering your followup question, "Is there anyway to determine the amount of memory allocated to an expression or variable at run time?": well, not exactly. The problem is that this is not a very well formed question.
"Expressions", in C-the-language (as opposed to some specific implementation), don't actually use any memory. (Specific implementations need some code and/or data memory to hold calculations, depending on how many results will fit into CPU registers and so on.) If an expression result is not stashed away in a variable, it simply vanishes (and the compiler can often omit the run-time code to calculate the never-saved result). The language doesn't give you a way to ask about something it doesn't assume exists, i.e., storage space for expressions.
Variables, on the other hand, do occupy storage (memory). The declaration for a variable tells the compiler how much storage to set aside. Except for C99's Variable Length Arrays, though, the storage required is determined purely at compile time, not at run time. This is why sizeof x is generally a constant-expression: the compiler can (and in fact must) determine the value of sizeof x at compile time.
C99's VLAs are a special exception to the rule:
void f(int n) {
    char buf[n];
    ...
}
The storage required for buf is not (in general) something the compiler can find at compile time, so sizeof buf is not a compile-time constant. In this case, buf actually is allocated at run time and its size is only determined then. So sizeof buf is a runtime-computed expression.
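A quick sketch of that VLA case (C99); here sizeof buf is genuinely computed at run time:

#include <stdio.h>

void f(int n)
{
    char buf[n];                               /* C99 variable length array */
    printf("sizeof buf = %zu\n", sizeof buf);  /* evaluated at run time */
}

int main(void)
{
    f(10); /* prints 10 */
    f(42); /* prints 42 */
    return 0;
}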
For most cases, though, everything is sized up front, at compile time, and if an expression overflows at run time, the behavior is undefined, implementation-defined, or well-defined depending on the type. Signed integer overflow, as in 1.25 billion multiplied by 2 when INT_MAX is just under 2.15 billion, results in "undefined behavior". Unsigned integers do modular arithmetic and thus let you calculate modulo 2^k.
If you want to make sure some calculation cannot overflow, that's something you have to calculate yourself, at run time. This is a big part of what makes multiprecision libraries (like gmp) hard to write in C—it's usually a lot easier, as well as faster, to code big parts of that in assembly and take advantage of known properties of the CPU (like overflow flags, or double-wide result-register-pairs).
Luchian has answered it already; just to complete it:
The C11 standard states (the C++ standard has similar wording) that the type of an integer literal with no suffix designating the type is determined as follows.
From 6.4.4 Constants (C11 draft):
Semantics
4 The value of a decimal constant is computed base 10; that of an
octal constant, base 8; that of a hexadecimal constant, base 16. The
lexically first digit is the most significant.
5 The type of an integer constant is the first of the corresponding
list in which its value can be represented.
And the table is as follows:

Decimal constant:
    int
    long int
    long long int

Octal or hexadecimal constant:
    int
    unsigned int
    long int
    unsigned long int
    long long int
    unsigned long long int
For octal and hexadecimal constants, even unsigned types are possible. So, depending on your platform, whichever type in the relevant list above fits first (in that order) will be the type of the integer literal.
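A C11 sketch that reports which type the compiler actually picked, using _Generic (which dispatches on the type of an unevaluated expression); on a typical 64-bit Linux this prints "long" and then "unsigned int":

#include <stdio.h>

/* Maps the static type of x to a printable name. */
#define TYPE_NAME(x) _Generic((x),      \
    int: "int",                         \
    unsigned int: "unsigned int",       \
    long: "long",                       \
    unsigned long: "unsigned long",     \
    long long: "long long",             \
    default: "something else")

int main(void)
{
    printf("2500000000 has type %s\n", TYPE_NAME(2500000000)); /* decimal stays signed */
    printf("0x9502F900 has type %s\n", TYPE_NAME(0x9502F900)); /* same value in hex */
    return 0;
}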
Another way to put the answer: what matters to sizeof is not the value of the expression but its type. sizeof returns the memory size of a type, which can be provided either explicitly as a type or via an expression. In the latter case the compiler computes the type at compile time without actually evaluating the expression (following known rules; for instance, if you call a function, the resulting type is the type of the returned value).
As another poster stated, there is an exception for variable length arrays (whose size is only known at run time).
In other words, you usually write things like sizeof(type) or sizeof expression, where expression is typically just an lvalue. The expression is almost never a complex computation (such as a function call): that would be useless anyway, as it is not evaluated.
#include <stdio.h>

int main(){
    struct Stype {
        int a;
    } svar;
    /* sizeof yields size_t, hence %zu */
    printf("size=%zu\n", sizeof(struct Stype));
    printf("size=%zu\n", sizeof svar);
    printf("size=%zu\n", sizeof svar.a);
    printf("size=%zu\n", sizeof(int));
}
Also notice that since sizeof is a language keyword (an operator), not a function, parentheses are not necessary around a trailing expression (the same kind of rule applies to the return keyword); they are only required when the operand is a type name.
For your follow-up question, there's no "operator", and there's no difference between the "compile time" size of an expression, and the "run time" size.
If you want to know if a given type can hold the result you're looking for, you can always try something like this:
#include <stdio.h>
#include <limits.h>

int main(void) {
    int a = 1250000000;
    int b = 2;
    if ( (INT_MAX / (double) b) > a ) {
        printf("int is big enough for %d * %d\n", a, b);
    } else {
        printf("int is not big enough for %d * %d\n", a, b);
    }
    if ( (LONG_MAX / (double) b) > a ) {
        printf("long is big enough for %d * %d\n", a, b);
    } else {
        printf("long is not big enough for %d * %d\n", a, b);
    }
    return 0;
}
and a (slightly) more general solution, just for larks:
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>

/* 'gssim' is 'get size of signed integral multiplication' */
size_t gssim(long long a, long long b);
int same_sign(long long a, long long b);

int main(void) {
    printf("size required for 127 * 1 is %zu\n", gssim(127, 1));
    printf("size required for 128 * 1 is %zu\n", gssim(128, 1));
    printf("size required for 129 * 1 is %zu\n", gssim(129, 1));
    printf("size required for 127 * -1 is %zu\n", gssim(127, -1));
    printf("size required for 128 * -1 is %zu\n", gssim(128, -1));
    printf("size required for 129 * -1 is %zu\n", gssim(129, -1));
    printf("size required for 32766 * 1 is %zu\n", gssim(32766, 1));
    printf("size required for 32767 * 1 is %zu\n", gssim(32767, 1));
    printf("size required for 32768 * 1 is %zu\n", gssim(32768, 1));
    printf("size required for -32767 * 1 is %zu\n", gssim(-32767, 1));
    printf("size required for -32768 * 1 is %zu\n", gssim(-32768, 1));
    printf("size required for -32769 * 1 is %zu\n", gssim(-32769, 1));
    printf("size required for 1000000000 * 2 is %zu\n", gssim(1000000000, 2));
    printf("size required for 1250000000 * 2 is %zu\n", gssim(1250000000, 2));
    return 0;
}

size_t gssim(long long a, long long b) {
    size_t ret_size;
    if ( same_sign(a, b) ) {
        if ( (CHAR_MAX / (long double) b) >= a ) {
            ret_size = 1;
        } else if ( (SHRT_MAX / (long double) b) >= a ) {
            ret_size = sizeof(short);
        } else if ( (INT_MAX / (long double) b) >= a ) {
            ret_size = sizeof(int);
        } else if ( (LONG_MAX / (long double) b) >= a ) {
            ret_size = sizeof(long);
        } else if ( (LLONG_MAX / (long double) b) >= a ) {
            ret_size = sizeof(long long);
        } else {
            ret_size = 0;
        }
    } else {
        if ( (SCHAR_MIN / (long double) llabs(b)) <= -llabs(a) ) {
            ret_size = 1;
        } else if ( (SHRT_MIN / (long double) llabs(b)) <= -llabs(a) ) {
            ret_size = sizeof(short);
        } else if ( (INT_MIN / (long double) llabs(b)) <= -llabs(a) ) {
            ret_size = sizeof(int);
        } else if ( (LONG_MIN / (long double) llabs(b)) <= -llabs(a) ) {
            ret_size = sizeof(long);
        } else if ( (LLONG_MIN / (long double) llabs(b)) <= -llabs(a) ) {
            ret_size = sizeof(long long);
        } else {
            ret_size = 0;
        }
    }
    return ret_size;
}

int same_sign(long long a, long long b) {
    if ( (a >= 0 && b >= 0) || (a <= 0 && b <= 0) ) {
        return 1;
    } else {
        return 0;
    }
}
which, on my system, outputs:
size required for 127 * 1 is 1
size required for 128 * 1 is 2
size required for 129 * 1 is 2
size required for 127 * -1 is 1
size required for 128 * -1 is 1
size required for 129 * -1 is 2
size required for 32766 * 1 is 2
size required for 32767 * 1 is 2
size required for 32768 * 1 is 4
size required for -32767 * 1 is 2
size required for -32768 * 1 is 2
size required for -32769 * 1 is 4
size required for 1000000000 * 2 is 4
size required for 1250000000 * 2 is 8
Yes, sizeof() doesn't calculate the memory required for the result of that multiplication.
In the second case both literals, 1250000000 and 2, fit in a 4-byte int, hence sizeof() returns 4. If one of the values had been too large for an int, you would have got 8.
But I don't know how sizeof() returned 8 for 2500000000; it returns 4 on my VS2012 compiler.
The C11 Draft is here: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
You can find the Cx0 draft here: http://c0x.coding-guidelines.com/6.5.3.4.html
In both cases, section 6.5.3.4 is what you are looking for. Basically, your problem boils down to this:
// Example 1:
long long x = 2500000000;
int size = sizeof(x); // returns 8
// Example 2:
int x = 1250000000;
int y = 2;
int size = sizeof(x * y); // returns 4
In example 1, you have a long long (8 bytes), so it returns 8. In example 2, you have an int * int which returns an int, which is 4 bytes (so it returns 4).
To answer your bounty question: Yes and no. sizeof will not calculate the size needed for the operation you are trying to perform, but it will tell you the size of the result if you perform the operation with the proper types:
long long x = 1250000000;
int y = 2;
int size = sizeof(x * y); // returns 8
// Alternatively
int size = sizeof(1250000000LL * 2); // returns 8
You have to tell it you are dealing with a large number or it will assume it is dealing with the smallest type it can (which in this case is int).
The most simple answer, in one line, is:
sizeof is an operator evaluated at COMPILE TIME whose operand is a C type; the value of the operand expression is completely ignored (variable length arrays being the one exception).
FURTHER DETAIL: therefore, when 2500000000 is compiled, it has to be stored as a long (or long long), as it is too big to fit in an int, so that larger type is what sizeof sees. However, 1250000000 and 2 both fit in type int, so that is the type passed to sizeof; the resulting value is never stored, the compiler is only interested in the type, and the multiplication is never evaluated.
