Why cast pointers? - C

I have a little basic question about C pointers and casting.
I don't understand why I should cast pointers.
I just tried this code and I got the same output for each option:
#include <stdio.h>
int main(void)
{
int x = 3;
int *y = &x;
double *s;
s = (double*)y;                              /* option 1 */
printf("%p, %d\n", (int*)s, *((int*)s));     /* option 1 */
s = y;                                       /* option 2 */
printf("%p, %d", s, *s);                     /* option 2 */
return 0;
}
My question is why do I have to do: s = (double*)y?
Intuitively, the address is the same for all the variables. The difference should only be about how many bytes to read from that address. Also, about the printing - if I use %d it will automatically be read as an int.
Why do I need to cast it as an int*?
Am I wrong?

There is a great deal of confusion in your question. int and double are different things: they represent numbers in memory using different bit patterns and in most cases even a different number of bytes. The compiler produces code that converts between these representations when you implicitly or explicitly require that:
int x = 3;
double d;
d = x; // implicit conversion from integer representation to double
d = (double)x; // explicit conversion generates the same code
Here is a more subtle example:
d = x / 2; // equivalent to d = (double)(x / 2);
will store the double representation of 1 into d. Whereas this:
d = (double)x / 2; // equivalent to d = (double)x / 2.0;
will store the double representation of 1.5 into d.
Casting pointers is a completely different thing. It is not a conversion, but merely a coercion. You are telling the compiler to trust that you know what you are doing. It does not change the address they point to, nor does it affect the bytes in memory at that address. C does not let you store a pointer to int into a pointer to double because it is most likely an error: when you dereference the pointer using the wrong type, the contents of memory are interpreted the wrong way, potentially yielding a different value or even causing a crash.
You can override the compiler's refusal by using an explicit cast to (double*), but you are asking for trouble.
When you later dereference s, you may access invalid memory and the value read is definitely meaningless.
Further confusion involves the printf function: the format specifiers in the format string are promises by the programmer about the actual types of the values passed as extra arguments.
printf("%p,%d",s,*s)
Here you pass a double (with no meaningful value) and you tell printf you passed an int. A blatant case of multiple undefined behaviour. Anything can happen, including printf producing similar output as the other option, leading to complete confusion. On 64 bit Intel systems, the way doubles and ints are passed to printf is different and quite complicated.
To avoid this kind of mistake in printf, compile with -Wall. gcc and clang will complain about type mismatches between the format specifiers and the actual values passed to printf.
As others have commented, you should not use pointer casts in C. Even the typical case int *p = (int *)malloc(100 * sizeof(int)) does not require a cast in C.

Explicit pointer casting is necessary to avoid compiler warnings, and warnings about pointer type conflicts exist to flag possible bugs. With a pointer cast you tell the compiler: "I know what I am doing here..."

Related

What's the difference between int *p = 10 and int *p = (int *)10?

Do both statements mean the same thing that p is pointing at address location 10?
On compilation, the first initialization gives some warning. What's the meaning of that?
#include <stdio.h>
int main()
{
int *p = 10;
int *q = (int *)10;
return 0;
}
output:
warning: initialization of ‘int *’ from ‘int’ makes pointer from integer without a cast [-Wint-conversion]
Both cases convert the integer 10 to a pointer type which is used to initialize an int *. The cast in the second case makes it explicit that this behavior is intentional.
While converting from an integer to a pointer is allowed, the assignment operator (and by extension, initialization) does not specifically allow this conversion, so a cast is required to be conforming. Many compilers will nevertheless allow it and simply issue a warning (as yours apparently does).
Note however that actually attempting to use a pointer assigned a specific numeric value will most likely cause a crash, unless you're on an embedded system that supports reading or writing specific memory addresses.
int *p = 10; is incorrect (constraint violation), and the compiler must produce a diagnostic message. The compiler could reject the program, and there is no behaviour defined if it doesn't. The rule is that the initializer for a pointer must be a compatible pointer value or a null pointer constant.
int *q = (int *)10; means to convert the integer 10 to a pointer. The result is implementation-defined and it could be a trap representation, meaning that the initialization causes undefined behaviour if execution reaches this line.
int and pointer to an integer int* are different types. The 10 on the first line is an int that you are trying to assign to a pointer to int type. Hence the warning. (on X86 both share the same size, but consider that mostly coincidence at this point).
By casting the int to a pointer, like you do on the second line, you are telling the compiler "Hey, I know these are different types but I know what I'm doing, so go ahead and just treat the value 10 like a pointer because I really do want to point at the memory with an address of 10". (in almost every case the memory address of 10 is not going to be usable by you)
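The one place this idiom is routine is embedded programming, where a peripheral register really does live at a fixed address. A minimal sketch (the address 0x40021000 and the name TIMER_CTRL are made up for illustration; a real address would come from the chip's datasheet, and dereferencing it on a hosted OS would crash):

```c
#include <stdint.h>

/* Hypothetical memory-mapped control register at a made-up address. */
#define TIMER_CTRL ((volatile uint32_t *)0x40021000u)

/* Valid only on target hardware where this address is a real register. */
void enable_timer(void)
{
    *TIMER_CTRL |= 1u;   /* set an enable bit in the register */
}
```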

Does casting actually DO anything?

Consider the following snippet:
char x[100];
double *p = &x;
As expected, this yields this warning:
f.c:3:15: warning: initialization of ‘double *’ from incompatible pointer type ‘char (*)[100]’
[-Wincompatible-pointer-types]
3 | double *p = &x;
| ^
This is very easy to solve by just changing to
double *p = (double*)&x;
My question here is, does the casting actually DO anything? Would the code be invalid without the cast? Or is it just a way to make the compiler quiet? When is casting necessary?
I know that you can have some effect with snippets like this:
int x = 666;
int y = (char)x;
But isn't this the same as this?
int x = 666;
char c = x;
int y = c;
If it is the same, then the casting does something, but it's not necessary. Right?
Please help me understand this.
Casting can do several different things. As other answers have mentioned, it almost always changes the type of the value being cast (or, perhaps, an attribute of the type, such as const). It may also change the numeric value in some way. But there are many possible interpretations:
Sometimes it merely silences a warning, performing no "real" conversion at all (as in many pointer casts).
Sometimes it silences a warning, leaving only a type change but no value change (as in other pointer casts).
Sometimes the type change, although it involves no obvious value change, implies very different semantics for use of the value later (again, as in many pointer casts).
Sometimes it requests a conversion which is meaningless or impossible.
Sometimes it performs a conversion that the compiler would have performed by itself (with or without a warning).
But sometimes it forces a conversion that the compiler wouldn't have performed.
Also, the warnings that a cast silences are sometimes innocuous and/or a nuisance, but sometimes they're quite real, and the code is likely to fail in just the way the silenced warning was trying to tell you.
For some more specific examples:
A pointer cast that changes the type, but not the value:
char *p1 = ... ;
const char *p2 = (const char *)p1;
And another:
unsigned char *p3 = (unsigned char *)p1;
A pointer cast that changes the type in a more significant way, but that's guaranteed to be okay (on some architectures this might also change the value):
int i;
int *ip = &i;
char *p = (char *)ip;
A similarly significant pointer cast, but one that's quite likely to be not okay:
char c;
char *cp = &c;
int *ip = (int *)cp;
*ip = 5; /* likely to fail */
A pointer cast that's so meaningless that the compiler refuses to perform it, even with an explicit cast:
float f = 3.14;
char *p = (char *)f; /* guaranteed to fail */
A pointer cast that makes a conversion, but one that the compiler would have made anyway:
int *p = (int *)malloc(sizeof(int));
(This one is considered a bad idea, because in the case where you forget to include <stdlib.h> to declare malloc(), the cast can silence a warning that might alert you to the problem.)
Three casts from an integer to a pointer, that are actually well-defined, due to a very specific special case in the C language:
void *p1 = (void *)0;
char *p2 = (void *)0;
int *p3 = (int *)0;
Two casts from integer to pointer that are not necessarily valid, although the compiler will generally do something obvious, and the cast will silence the warning that would otherwise appear:
int i = 123;
char *p1 = (char *)i;
char *p2 = (char *)124;
*p1 = 5; /* very likely to fail, except when */
*p2 = 7; /* doing embedded or OS programming */
A very questionable cast from a pointer back to an int:
char *p = ... ;
int i = (int)p;
A less-questionable cast from a pointer back to an integer that ought to be big enough:
char *p = ... ;
uintptr_t i = (uintptr_t)p;
A cast that changes the type, but "throws away" rather than "converting" a value, and that silences a warning:
(void)5;
A cast that makes a numeric conversion, but one that the compiler would have made anyway:
float f = (float)0;
A cast that changes the type and the interpreted value, although it typically won't change the bit pattern:
short int si = -32760;
unsigned short us = (unsigned short)si;
A cast that makes a numeric conversion, but one that the compiler probably would have warned about:
int i = (int)1.5;
A cast that makes a conversion that the compiler would not have made:
double third = (double)1 / 3;
The bottom line is that casts definitely do things: some of them useful, some of them unnecessary but innocuous, some of them dangerous.
These days, the consensus among many C programmers is that most casts are or should be unnecessary, meaning that it's a decent rule to avoid explicit casts unless you're sure you know what you're doing, and it's reasonable to be suspicious of explicit casts you find in someone else's code, since they're likely to be a sign of trouble.
As one final example, this was the case that, back in the day, really made the light bulb go on for me with respect to pointer casts:
char *loc;
int val;
int size;
/* ... */
switch(size) {
case 1: *loc += val; break;
case 2: *(int16_t *)loc += val; break;
case 4: *(int32_t *)loc += val; break;
}
Those three instances of *loc += val do three pretty different things: one updates a byte, one updates a 16-bit int, and one updates a 32-bit int. (The code in question was a dynamic linker, performing symbol relocation.)
The cast does at least 1 thing - it satisfies the following constraint on assignment:
6.5.16.1 Simple assignment
Constraints
1 One of the following shall hold:112)
...
— the left operand has atomic, qualified, or unqualified pointer type, and (considering
the type the left operand would have after lvalue conversion) both operands are
pointers to qualified or unqualified versions of compatible types, and the type pointed
to by the left has all the qualifiers of the type pointed to by the right;
112) The asymmetric appearance of these constraints with respect to type qualifiers is due to the conversion
(specified in 6.3.2.1) that changes lvalues to ‘‘the value of the expression’’ and thus removes any type
qualifiers that were applied to the type category of the expression (for example, it removes const but
not volatile from the type int volatile * const).
That's a compile-time constraint - it affects whether or not the source code is translated to an executable, but it doesn't necessarily affect the translated machine code.
It may result in an actual conversion being performed at runtime, but that depends on the types involved in the expression and the host system.
Casting changes the type, which can be very important when signedness matters.
For example, character handling functions such as isdigit() are defined as taking an unsigned char value or EOF:
The header <ctype.h> declares several functions useful for classifying and mapping characters. In all cases the argument is an int, the value of which shall be representable as an unsigned char or shall equal the value of the macro EOF. If the argument has any other value, the behavior is undefined.
Thus code such as
int isNumber( const char *input )
{
while ( *input )
{
if ( !isdigit( *input ) )
{
return( 0 );
}
input++;
}
// all digits
return( 1 );
}
should properly cast the const char value of *input to unsigned char:
int isNumber( const char *input )
{
while ( *input )
{
if ( !isdigit( ( unsigned char ) *input ) )
{
return( 0 );
}
input++;
}
// all digits
return( 1 );
}
Without the cast to unsigned char, when *input is promoted to int, a char value (assuming char is signed and smaller than int) that is negative will be sign-extended to a negative value that cannot be represented as an unsigned char value, and therefore invokes undefined behavior.
So yes, the cast in this case does something. It changes the type and therefore - on almost all current systems - avoids undefined behavior for input char values that are negative.
There are also cases where float values can be cast to double (or the reverse) to force code to behave in a desired manner.*
* - I've seen such cases recently - if someone can find an example, feel free to add your own answer...
The cast may or may not change the actual binary value. But that is not its main purpose, just a side effect.
It tells the compiler to interpret a value as a value of a different type. Any changing of binary value is a side effect of that.
It is for you (the programmer) to let the compiler know: I know what I'm doing. So you can shoot yourself in the foot without the compiler questioning you.
Don't get me wrong, casts are absolutely necessary in real-world code, but they must be used with care and knowledge. Never cast just to get rid of a warning; make sure you understand the consequences.
It is theoretically possible for a system to use a different representation for void * and char * than for other pointer types. For example, a system might normally hold pointers in a register too narrow to address every individual byte, in which case a void * or char * would need a wider representation.
One case where casting a pointer value is useful is when the function takes a variable number of pointer arguments, and is terminated by NULL, such as with the execl() family of functions.
execl("/bin/sh", "sh", "-c", "echo Hello world!", (char *)NULL);
Without the cast, the NULL may expand to 0, which would be treated as an int argument. When the execl() function retrieves the last parameter, it may extract the expected pointer value incorrectly, since an int value was passed.
My question here is, does the casting actually DO anything?
Yes. It tells the compiler, and also other programmers including the future you, that you think you know what you're doing and you really do intend to treat a char as an int or whatever. It may or may not change the compiled code.
Would the code be invalid without the cast?
That depends on the cast in question. One example that jumps to mind involves division:
int a = 3;
int b = 5;
float c = a / b;
Questions about this sort of thing come up all the time on SO: people wonder why c gets a value of 0. The answer, of course, is that both a and b are int, and the result of integer division is also an integer that's only converted to a float upon assignment to c. To get the expected value of 0.6 in c, cast a or b to float:
float c = a / (float)b;
You might not consider the code without the cast to be invalid, but the next computation might involve division by c, at which point a division by zero error could occur without the cast above.
Or is it just a way to make the compiler quiet?
Even if the cast is a no-op in terms of changing the compiled code, preventing the compiler from complaining about a type mismatch is doing something.
When is casting necessary?
It's necessary when it changes the object code that the compiler generates. It might also be necessary if your organization's coding standards require it.
An example where the cast makes a difference: without it, 1 << 33 shifts an int by more than its width, which is undefined behavior; with the cast, the shift is performed in unsigned long long and yields 1ULL << 33.
#include <stdio.h>
int main(void)
{
unsigned long long x = 1 << 33, y = (unsigned long long)1 << 33;
printf("%llx, %llx\n", x, y);
}
https://godbolt.org/z/b3qcPn
A cast is simply a type conversion: The implementation will represent the argument value by means of the target type. The expression of the new type (let's assume the target type is different) may have
a different size
a different value
and/or a different bit pattern representing the value.
These three changes are orthogonal to each other. Any subset, including none and all of them, can occur (all examples assume two's complement):
None of them: (unsigned int)1;
Size only: (char)1
Value only: (unsigned int)-1
Bit pattern only: (float)1 (my machine has sizeof(int) == sizeof(float))
Size and value, but not bit pattern (the bits present in the original value): (unsigned int)(char)-4
Size and bit pattern, but not value: (float)1l
Value and bit pattern, but not size: (float)1234567890 (32 bit ints and floats)
All of them: (float)1234567890l (long is 64 bits).
The new type may, of course, influence expressions in which it is used, and will often have different text representations (e.g. via printf), but that's not really surprising.
Pointer conversions may deserve a little more discussion: The new pointer typically has the same value, size and bit representation (although, as Eric Postpischil correctly pointed out, it theoretically may not). The main intent and effect of a pointer cast is not to change the pointer; it is to change the semantics of the memory accessed through it.
In some cases a cast is the only means to perform a conversion (non-compatible pointer types, pointer vs. integer).
In other cases like narrowing arithmetic conversions (which may lose information or produce an overflow) a cast indicates intent and thus silences warnings. In all cases where a conversion could also be implicitly performed the cast does not alter the program's behavior — it is optional decoration.

What's the difference between "(type)variable" and "*((type *)&variable)", if any?

I would like to know if there is a difference between:
Casting a primitive variable to another primitive type
Dereferencing a cast of a primitive variable's address to a pointer of another primitive type
I would also like to know if there is a good reason to ever use (2) over (1). I have seen (2) in legacy code which is why I was wondering. From the context, I couldn't understand why (2) was being favored over (1). And from the following test I wrote, I have concluded that at least the behavior of an upcast is the same in either case:
/* compile with gcc -lm */
#include <stdio.h>
#include <math.h>
int main(void)
{
unsigned max_unsigned = pow(2, 8 * sizeof(unsigned)) - 1;
printf("VALUES:\n");
printf("%u\n", max_unsigned + 1);
printf("%lu\n", (unsigned long)max_unsigned + 1); /* case 1 */
printf("%lu\n", *((unsigned long *)&max_unsigned) + 1); /* case 2 */
printf("SIZES:\n");
printf("%d\n", sizeof(max_unsigned));
printf("%d\n", sizeof((unsigned long)max_unsigned)); /* case 1 */
printf("%d\n", sizeof(*((unsigned long *)&max_unsigned))); /* case 2 */
return 0;
}
Output:
VALUES:
0
4294967296
4294967296
SIZES:
4
8
8
From my perspective, there should be no differences between (1) and (2), but I wanted to consult the SO experts for a sanity check.
The first cast is legal; the second cast may not be legal.
The first cast tells the compiler to use the knowledge of the type of the variable to make a conversion to the desired type; the compiler does it, provided that a proper conversion is defined in the language standard.
The second cast tells the compiler to forget its knowledge of the variable's type, and re-interpret its internal representation as that of a different type *. This has limited applicability: as long as the binary representation matches that of the type pointed by the target pointer, this conversion will work. However, this is not equivalent to the first cast, because in this situation value conversion never takes place.
Switching the type of the variable being cast to something with a different representation, say, a float, illustrates this point well: the first conversion produces a correct result, while the second conversion produces garbage:
float test = 123456.0f;
printf("VALUES:\n");
printf("%f\n", test + 1);
printf("%lu\n", (unsigned long)test + 1);
printf("%lu\n", *((unsigned long *)&test) + 1); // Undefined behavior
This prints
123457.000000
123457
1206984705
(demo)
* This is valid only when one of the types is a character type and the pointer alignment is valid, type conversion is trivial (i.e. when there is no conversion), when you change qualifiers or signedness, or when you cast to/from a struct/union with the first member being a valid conversion source/target. Otherwise, this leads to undefined behavior. See C 2011 (N1570), 6.5 7, for complete description. Thanks, Eric Postpischil, for pointing out the situations when the second conversion is defined.
Let's look at two simple examples, with int and float on modern hardware (no funny business).
float x = 1.0f;
printf("(int) x = %d\n", (int) x);
printf("*(int *) &x = %d\n", *(int *) &x);
Output, maybe... (your results may differ)
(int) x = 1
*(int *) &x = 1065353216
What happens with (int) x is you convert the value, 1.0f, to an integer.
What happens with *(int *) &x is you pretend that the value was already an integer. It was NOT an integer.
The floating point representation of 1.0 happens to be the following (in binary):
00111111 10000000 00000000 00000000
Which is the same representation as the integer 1065353216.
This:
(type)variable
takes the value of variable and converts it to type type. This conversion does not necessarily just copy the bits of the representation; it follows the language rules for conversions. Depending on the source and target types, the result may have the same mathematical value as variable, but it may be represented completely differently.
This:
*((type *)&variable)
does something called aliasing, sometimes informally called type-punning. It takes the chunk of memory occupied by variable and treats it as if it were an object of type type. It can yield odd results, or even crash your program, if the source and target types have different representations (say, an integer and a floating-point type), or even if they're of different sizes. For example, if variable is a 16-bit integer (say, it's of type short), and type is a 32-bit integer type, then at best you'll get a 32-bit result containing 16 bits of garbage -- whereas a simple value conversion would have given you a mathematically correct result.
The pointer cast form can also give you alignment problems. If variable is byte-aligned and type requires 2-byte or 4-byte alignment, for example, you can get undefined behavior, which could result either in a garbage result or a program crash. Or, worse yet, it might appear to work (which means you have a hidden bug that may show up later and be very difficult to track down).
You can examine the representation of an object by taking its address and converting it to unsigned char*; the language specifically permits treating any object as an array of character type.
But if a simple value conversion does the job, then that's what you should use.
If variable and type are both arithmetic, the cast is probably unnecessary; you can assign an expression of any arithmetic type to an object of any arithmetic type, and the conversion will be done implicitly.
Here's an example where the two forms have very different behavior:
#include <stdio.h>
int main(void) {
float x = 123.456;
printf("d = %g, sizeof (float) = %zu, sizeof (unsigned int) = %zu\n",
x, sizeof (float), sizeof (unsigned int));
printf("Value conversion: %u\n", (unsigned int)x);
printf("Aliasing : %u\n", *(unsigned int*)&x);
}
The output on my system (it may be different on yours) is:
d = 123.456, sizeof (float) = 4, sizeof (unsigned int) = 4
Value conversion: 123
Aliasing : 1123477881
What's the difference between “(type)variable” and “*((type *)&variable)”, if any?
The second expression may lead to alignment and aliasing issues.
The first form is the natural way to convert a value to another type. But assuming there is no violation of alignment or aliasing, the second expression has one advantage over the first: *((type *)&variable) yields an lvalue, whereas (type)variable does not (the result of a cast is never an lvalue).
This allows you to do things like:
(*((type *)&expr))++
See for example this option from Apple gcc manual which performs a similar trick:
-fnon-lvalue-assign (APPLE ONLY): Whenever an lvalue cast or an lvalue conditional expression is encountered, the compiler will issue a deprecation warning
and then rewrite the expression as follows:
(type)expr ---becomes---> *(type *)&expr
cond ? expr1 : expr2 ---becomes---> *(cond ? &expr1 : &expr2)
Casting the pointer makes a difference when working on a structure:
struct foo {
int a;
};
void foo()
{
int c;
((struct foo)(c)).a = 23; // bad: there is no conversion from int to struct foo
(*(struct foo *)(&c)).a = 42; // ok
}
The first one, (type)variable, simply casts the variable to the desired type; the second one, *(type *)&variable, dereferences a pointer after casting it to the desired pointer type.
The difference is that in the second case you may have undefined behavior. The reason is that unsigned is the same as unsigned int, and an unsigned long may be larger than an unsigned int; when you cast to a pointer and dereference it, you also read the uninitialized part of the unsigned long.
The first case simply converts the unsigned int to an unsigned long, which extends the unsigned int as needed.

Storing pointer differences in integers?

This is the code:
#include<stdio.h>
#include<conio.h>
int main()
{
int *p1,*p2;
int m=2,n=3;
m=p2-p1;
printf("\np2=%u",p2);
printf("\np1=%u",p1);
printf("\nm=%d",m);
getch();
return 0;
}
This gives the output as:
p2= 2686792
p1= 1993645620
m= -497739707
I have two doubts with the code and output:
Since m is an int, it shouldn't accept p2 - p1, since p1 and p2 are both pointers and m is an integer; it should give an error like "invalid conversion from 'int*' to 'int'", but it doesn't. Why?
Even after it accepts it, the difference isn't valid. Why is that?
Since m is an int, it shouldn't accept p2 - p1, since p1 and p2 are both pointers and m is an integer; it should give an error like "invalid conversion from 'int*' to 'int'", but it doesn't. Why?
Whether this produces an error or a warning depends on the compiler you are using. C compilers often give programmers plenty of rope to hang themselves with...
Even after it takes the input, the difference isn't valid. Why is it?
Actually, the arithmetic is consistent: the compiler uses pointer arithmetic to perform the calculation. So for this example...
p2= 2686792
p1= 1993645620
Since the pointers are not initialized, they hold garbage values like the ones above. (Strictly speaking, subtracting unrelated pointers is undefined behavior, but in practice the compiler just computes the difference.) The operation p2 - p1 asks how many int-sized elements lie between p1 and p2. Since p1 and p2 are pointers to int, an element is sizeof(int) bytes (almost always 4). Therefore:
p2 - p1 = (2686792 - 1993645620) / sizeof(int) = (2686792 - 1993645620) / 4 = -497739707
Since m is an int, it shouldn't accept p2 - p1, since p1 and p2 are both pointers and m is an integer; it should give an error like "invalid conversion from 'int*' to 'int'", but it doesn't. Why?
The C++ spec declares this to be legal.
From the C++11 spec 5.7.6:
When two pointers to elements of the same array object are subtracted, the result is the difference of the subscripts of the two array elements. The type of the result is an implementation-defined signed integral type; this type shall be the same type that is defined as std::ptrdiff_t in the header (18.2).
... later in that paragraph...
Unless both pointers point to elements of the same array object, or
one past the last element of the array object, the behavior is undefined.
The result of p2 - p1 is of the same type as std::ptrdiff_t, but nothing says the compiler cannot have defined it as
namespace std {
typedef int ptrdiff_t;
}
However, you also get no guarantee that this will behave the same on all platforms. For example, some platforms (especially 64-bit ones) use long for ptrdiff_t. On those platforms the assignment to int may truncate the result and draw a warning, because the code depends on an implementation-defined type for ptrdiff_t.
As for your second question, the wording of C++ spec 5.7.6 suggests why pointer subtraction works the way it does: the writers of the language wanted pointer arithmetic to support conveniently stepping through an array, so they defined the result of pointer subtraction in element counts. You could build a consistent language where the difference between two pointers is the difference between their memory addresses in bytes, but the authors felt clean manipulation of arrays was more valuable than being able to get the byte difference. For example:
double* findADouble(double* begin, double* end, double valueToSearchFor)
{
for (double* iter = begin; iter != end; iter++) {
if (*iter == valueToSearchFor)
return iter;
}
return 0;
}
With byte-based pointer arithmetic, the same loop would need a sizeof in it, and += instead of ++:
double* findADouble(double* begin, double* end, double valueToSearchFor)
{
for (double* iter = begin; iter != end; iter += sizeof(double)) {
if (*iter == valueToSearchFor)
return iter;
}
return 0;
}
Also worth noting in their decision: when the rule was created in C, optimizing compilers were not very good at their job. iter += sizeof(double) could compile down to much less efficient assembly than ++iter, even though the two operations fundamentally do the same thing. Modern optimizers have no trouble with this, but the syntax stays.

What is the difference between float pointer and int pointer address?

I tried to run this code,
int *p;
float q;
q = 6.6;
p = &q;
I know it produces a warning, but I think &q and p are the same size, so p can hold the address of q. But when I print &q and p I get different output.
This is my output
*p = 6.600000
q = 0.000000, p = 0x40d33333, &q = 0x7fffe2fa3c8c
What am I missing?
Also, p and &q are the same when the pointer and variable types match.
My complete code is
#include<stdio.h>
void main()
{
int *p;
float q;
q = 6.6;
p = &q;
printf("*p = %f \n q = %f, p = %p, &q = %p \n",*p,q,p,&q);
}
You need to take compiler warnings more seriously.
C doesn't require compilers to reject invalid programs, it merely requires "diagnostics" for rule violations. A diagnostic can be either a fatal error message or a warning.
Unfortunately, it's common for compilers to issue mere warnings, rather than errors, for assignments of incompatible pointer types.
void main()
This is wrong; it should be int main(void). Your compiler may let you get away with it, and it may not cause any visible problems, but there's no point in not writing it correctly. (It's not quite that simple, but that's close enough.)
int *p;
float q;
q = 6.6;
That's ok.
p = &q;
p is of type int*; &q is of type float*. Assigning one to the other (without a cast) is a constraint violation. The simplest way to look at it is that it's simply illegal.
If you really want to do this assignment, you can use a cast:
p = (int*)&q; /* legal, but ugly */
but there's rarely a good reason to do so. p is a pointer to int; it should point to an int object unless you have a very good reason to make it point to something else. In some circumstances, the conversion itself can have undefined behavior.
printf("*p = %f \n q = %f, p = %p, &q = %p \n",*p,q,p,&q);
The %f format requires a double argument (a float argument is promoted to double in this context so float would be ok). But *p is of type int. Calling printf with an argument of the wrong type causes your program's behavior to be undefined.
%p requires an argument of type void*, not just of any pointer type. If you want to print a pointer value, you should cast it to void*:
printf("&q = %p\n", (void*)&q);
It's likely to work without the cast, but again, the behavior is undefined.
If you get any warnings when you compile a program, don't even bother running it. Fix the warnings first.
As for the question in your title, pointers of type int* and float* are of different types. An int* should point to an int object; a float* should point to a float object. Your compiler may let you mix them, but the result of doing so is either implementation-defined or undefined. The C language, and particularly many C compilers, will let you get away with a lot of things that don't make much sense.
The reason that they're distinct types is to (try to) prevent, or at least detect, errors in their use. If you declare an object of type int*, you're saying that you intend for it to point to an int object (if it's not a null pointer). Storing the address of a float object in your int* object is almost certainly a mistake. Enforcing type safety allows such mistakes to be detected as early as possible (when your compiler prints a warning rather than when your program crashes during a demo for an important client).
It's likely (but not guaranteed) that int* and float* are the same size and have the same internal representation. But the meaning of an int* object is not "a collection of 32 (or 64) bits containing a virtual address", but "something that points to an int object".
You're getting undefined behaviour, because you're passing the wrong types to printf. When you tell it to expect a float, it actually expects a double - but you pass an int.
As a result it prints the wrong information, because printf relies entirely on the format string to access the arguments you pass it.
In addition to what is said by teppic,
Consider,
int a = 5;
int *p = &a;
In this case we indicate to the compiler that p is going to point to an integer. So it is known that when we do something like *p, at runtime, a number of bytes equal to the size of an int will be read.
If you assign the address of an object occupying x bytes to a pointer declared to hold the address of a type of a different size, the indirection operator will read the wrong number of bytes.
