Here are two instructions:
int p = 0;
int size_1 = (int*)(&p+1)-(int*)&p;
int size_2 = (char*)(&p+1)-(char*)&p;
I found that size_1 is 1 and size_2 is 4. I was wondering why they vary this way.
This is basic pointer arithmetic. Oversimplifying a little, subtracting two int* values produces the number of ints that fit between the two pointers (one), while subtracting two char* values produces the number of chars that fit between them (on your system, that happens to be four, because an int is four bytes wide).
The root cause is that an int is 4 bytes on your system, while a char is 1 byte. The expression (&p+1) yields a pointer to the memory address sizeof(int) = 4 bytes after p. When you compute size_1 you are asking for that distance in units of int, so you get 1. For size_2, you are asking for the distance in units of char, which gives 4.
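If it helps, here is a minimal sketch you can compile to convince yourself of the rule; the double d is my own addition, and the printed numbers assume the usual 4-byte int and 8-byte double:

#include <stdio.h>

int main(void)
{
    int p = 0;
    double d = 0.0;

    /* Subtracting char pointers measures the distance in bytes,
       so it recovers the size of the pointed-to object. */
    printf("%td\n", (char *)(&p + 1) - (char *)&p);   /* sizeof(int), e.g. 4 */
    printf("%td\n", (char *)(&d + 1) - (char *)&d);   /* sizeof(double), e.g. 8 */

    /* Subtracting int pointers measures the distance in ints. */
    printf("%td\n", (&p + 1) - &p);                   /* always 1 */
    return 0;
}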
A
int *numptr = malloc(sizeof(int)*10);
B
int *numptr = malloc(sizeof(40));
This is on a 32-bit system.
I can't understand what the difference is.
There is no information in the book I have.
Are A and B 100% the same thing?
You're allocating a different amount of space in each case.
For case A, you first have sizeof(int). Presumably, an int is 4 bytes on your system, so this expression evaluates to 4. So malloc(sizeof(int)*10) is allocating space for 4 * 10 = 40 bytes.
For case B, you have sizeof(40). This is giving you the size of the constant 40 whose type is int, so sizeof(40) is 4. This then means that malloc(sizeof(40)) is allocating space for 4 bytes.
An int isn’t guaranteed to be 4 bytes wide. It’s only guaranteed to represent values in the range [-32767..32767], so it’s only guaranteed to be 16 bits (2 bytes) wide.
Yes, it’s 4 bytes on most modern desktop platforms, but it doesn’t have to be.
Besides, 10 * sizeof (int) more clearly conveys that you’re allocating space for 10 int objects.
40 is an integer, so sizeof(40) should return the same thing as sizeof(int). Thus, sizeof(int) * 10 is the size of 10 integers, but sizeof(40) is the size of a single integer.
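To make the difference concrete, here is a small sketch; the byte counts it prints assume a platform where sizeof(int) is 4, and the pointer names are only for illustration:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Case A: room for 10 ints (40 bytes when sizeof(int) == 4). */
    int *a = malloc(sizeof(int) * 10);

    /* Case B: sizeof(40) is just sizeof(int), so this is room for ONE int. */
    int *b = malloc(sizeof(40));

    printf("A requests %zu bytes\n", sizeof(int) * 10);   /* e.g. 40 */
    printf("B requests %zu bytes\n", sizeof(40));         /* e.g. 4  */

    /* A common idiom that avoids repeating the type: */
    int *c = malloc(10 * sizeof *c);

    free(a);
    free(b);
    free(c);
    return 0;
}

The last form, 10 * sizeof *c, keeps the element count and the element size tied to the pointer being assigned, which is why many people prefer it.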
I am on a x32-based processor where char = 1 byte, short = 2 bytes and int = 4 bytes.
When I create an array of type char with 20 elements in it, I expect to see 20 memory spaces allocated to that array with the addresses differing by only 1 byte because of the type of the array.
If I take two consecutive elements from the array and subtract their addresses, should I then not get 1 in this case?
And in the case of arrays with types short and int, I am expecting to get 2 and 4. This is due to the fact that the short and int elements need to be aligned in memory: short elements will be on even addresses (difference 2) and int elements will be on addresses divisible by 4.
Though, how come when I run the following code I get 1,1,1 and not 1,2,4?
I suspect I am missing some crucial detail when it comes to pointer arithmetic.
char vecc[20];
printf("%i\n", &vecc[1]-&vecc[0]);
short vecs[20];
printf("%i\n", &vecs[1]-&vecs[0]);
int veci[20];
printf("%i\n", &veci[1]-&veci[0]);
Pointer subtraction yields the difference of the subscripts (indexes), not the size in bytes of the gap between the addresses.
Quoting C11, chapter 6.5.6 (emphasis mine):
When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; the result is the difference of the subscripts of the two array elements. [...]
If you write the code in this way:
printf("%i\n", (char*)(&vecs[1]) - (char*)(&vecs[0]));
printf("%i\n", (char*)(&veci[1]) - (char*)(&veci[0]));
the output will be 2 and 4.
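For completeness, a full program along those lines; I've used %td, the conversion specifier for ptrdiff_t, since that is the type a pointer subtraction yields, and the byte differences assume the usual 2-byte short and 4-byte int:

#include <stdio.h>

int main(void)
{
    short vecs[20];
    int   veci[20];

    /* Element (subscript) difference: 1 for adjacent elements, whatever the type. */
    printf("%td %td\n", &vecs[1] - &vecs[0], &veci[1] - &veci[0]);

    /* Byte difference: the size of one element, e.g. 2 and 4. */
    printf("%td %td\n",
           (char *)&vecs[1] - (char *)&vecs[0],
           (char *)&veci[1] - (char *)&veci[0]);
    return 0;
}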
I'm learning pointers in C and I'm confused about pointer arithmetic. Have a look at the program below:
#include <stdio.h>
int main()
{
    int a[] = {2, 3, 4, 5, 6};
    int *i = a;
    printf("value of i = %d\n", i);      /* just for the sake of simplicity I have used %d */
    printf("value of i+2 = %d\n", i+2);
    return 0;
}
My question is: if the value of i is 653000, then why is the value of i+2 653008? As far as I know, every bit in memory has its own address, so the value of i+2 should be 653064, because 1 byte = 8 bits. Why is pointer arithmetic scaled by bytes and not by bits?
Thanks in advance, and sorry for my bad English!
As far as I know every bit in memory has its address specified
Wrong.
Why pointer arithmetic is scaled with byte why not with bit?
The byte is the minimal addressable unit of storage on a computer, not the bit. Addresses refer to bytes - you cannot create a pointer that points to a specific bit in memory¹.
Addresses refer to *bytes*
   |
   |
   v      _______________
0x1000   |_|_|_|_|_|_|_|_| \
0x1001   |_|_|_|_|_|_|_|_|  > Each row is one byte
0x1002   |_|_|_|_|_|_|_|_| /
          \_______ _______/
                  v
          Each column is one bit
As others have explained, this is basic pointer arithmetic in action. When you add n to a pointer p, you're adding n elements, not n bytes: you're effectively adding n * sizeof(*p) bytes to the pointer's address.
¹ without using architecture-specific tricks like Bit-banding on ARM, as myaut pointed out
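To watch the scaling happen, here is a small sketch that prints the raw addresses (the actual values are whatever your machine hands out; only the differences matter):

#include <stdio.h>

int main(void)
{
    int a[] = {2, 3, 4, 5, 6};
    int  *i = a;
    char *c = (char *)a;

    printf("i   = %p\n", (void *)i);
    printf("i+2 = %p\n", (void *)(i + 2));   /* 2 * sizeof(int) bytes further, e.g. +8 */
    printf("c   = %p\n", (void *)c);
    printf("c+2 = %p\n", (void *)(c + 2));   /* only 2 bytes further */
    return 0;
}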
You should read about pointer arithmetic (the link is given in the comment).
When you increment a pointer, the step is based on the data type of that pointer. In this case i+2 advances the address by eight bytes.
An int is four bytes here (system defined), so i+2 acts as i + (2 * sizeof(int)); it becomes i + 8, which is why the address is incremented by eight.
Addresses are counted by the byte, not the bit. Take a character pointer: each byte holds 8 bits and can represent the values 0 to 255.
Consider a string like "hi". It will be stored like this:
h i
1001 1002
The ASCII value of 'h' is 104, and it is stored in one byte. In a signed char we can store positive values from 0 to 127, so storing that one value needs one byte of the character data type. Individual bits cannot hold such a value on their own and are not separately addressable, so pointer arithmetic is based on bytes.
When you do PTR + n, the simple maths behind it is
PTR + sizeof(*PTR) * n.
Here the pointed-to type is int, which is 4 bytes.
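That identity can be checked directly; a minimal sketch, where arr, ptr and n are illustrative names:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    int arr[10];
    int *ptr = arr;
    int n = 2;

    /* ptr + n lands exactly n * sizeof(*ptr) bytes further along. */
    assert((char *)(ptr + n) == (char *)ptr + n * sizeof *ptr);
    printf("ptr + %d is %zu bytes past ptr\n", n, n * sizeof *ptr);
    return 0;
}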
In my course for intro to operating systems, our task is to determine if a system is big or little endian. There are plenty of results I've found on how to do it, and I've done my best to reconstruct my own version of the code. I suspect it's not the best way of doing it, but it seems to work:
#include <stdio.h>

int main() {
    int a = 0x1234;
    unsigned char *start = (unsigned char*) &a;
    int len = sizeof( int );

    if( start[0] > start[ len - 1 ] ) {
        //biggest in front (Little Endian)
        printf("1");
    } else if( start[0] < start[ len - 1 ] ) {
        //smallest in front (Big Endian)
        printf("0");
    } else {
        //unable to determine with set value
        printf( "Please try a different integer (non-zero). " );
    }
}
I've seen this line of code (or some version of it) in almost every answer I've come across:
unsigned char *start = (unsigned char*) &a;
What is happening here? I understand casting in general, but what happens if you cast an int pointer to a char pointer? I know:
unsigned int *p = &a;
assigns the memory address of a to p, and that you can affect the value of a by dereferencing p. But I'm totally lost as to what's happening with the char, and more importantly, I'm not sure why my code works.
Thanks for helping me with my first SO post. :)
When you cast between pointers of different types, the result is generally implementation-defined (it depends on the system and the compiler). There are no guarantees that you can access the pointer, that it is correctly aligned, and so on.
But for the special case when you cast to a pointer to character, the standard actually guarantees that you get a pointer to the lowest addressed byte of the object (C11 6.3.2.3 §7).
So the compiler will implement the code you have posted in such a way that you get a pointer to the lowest addressed byte of the int. As we can tell from your code, that byte may contain different values depending on endianness.
If you have a 16-bit CPU, the char pointer will point at memory containing 0x12 in case of big endian, or 0x34 in case of little endian.
For a 32-bit CPU, the int would contain 0x00001234, so you would get 0x00 in case of big endian and 0x34 in case of little endian.
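If it helps to see it, here is a small sketch that walks the whole int byte by byte through the unsigned char pointer; the output depends on your machine's sizeof(int) and byte order:

#include <stdio.h>

int main(void)
{
    int a = 0x1234;
    unsigned char *start = (unsigned char *)&a;

    /* Print every byte of the object, lowest address first. */
    for (size_t k = 0; k < sizeof a; k++)
        printf("byte %zu: 0x%02x\n", k, (unsigned)start[k]);

    /* Little endian, 4-byte int: 0x34 0x12 0x00 0x00
       Big endian,    4-byte int: 0x00 0x00 0x12 0x34 */
    return 0;
}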
If you dereference an int pointer you read sizeof(int) bytes of data (4 bytes with a typical compiler such as gcc). But if you want only one byte, cast that pointer to a character pointer and dereference it: you get one byte of data. The cast tells the compiler how many bytes to read and how to interpret them, instead of using the original data type's size.
Values stored in memory are a set of '1's and '0's, which by themselves do not mean anything. Data types are used for recognizing and interpreting what the values mean. So let's say that, at a particular memory location, the data stored is the following set of bits ad infinitum: 01001010 ..... By itself this data is meaningless.
A pointer (other than a void pointer) carries 2 pieces of information: the starting position of a set of bytes, and the way in which that set of bits is to be interpreted. For details, you can see: http://en.wikipedia.org/wiki/C_data_types and references therein.
So if you have
a char *c,
a short int *i,
and a float *f
which all look at the bits mentioned above, then c, i, and f hold the same address, but *c takes the first 8 bits and interprets them in a certain way. So you can do things like printf("The character is %c", *c). On the other hand, *i takes the first 16 bits and interprets them in its own way, so it is meaningful to say printf("The number is %d", *i). Again, for *f, printf("The value is %f", *f) is meaningful.
The real differences come when you do maths with these. For example,
c++ advances the pointer by 1 byte,
i++ advances it by 2 bytes (the size of a short int),
and f++ advances it by 4 bytes (the typical size of a float).
More importantly, for
(*c)++, (*i)++, and (*f)++, the algorithm used for doing the addition is totally different.
In your question, when you cast from one pointer type to another, you already know that the algorithm you are going to use for manipulating the bits at that location will be easier if you interpret those bits as an unsigned char rather than as an unsigned int. The same operators +, -, etc. will act differently depending upon what data type they are looking at. If you have worked on physics problems where a coordinate transformation makes the solution very simple, then this is the closest analog to that operation: you are transforming one problem into another that is easier to solve.
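As a sketch of that "same bits, different interpretation" idea, the snippet below copies a float's bytes into an unsigned int with memcpy and prints both views; it assumes the common case where both types are 4 bytes and floats are IEEE 754:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.5f;
    unsigned int bits;   /* assumes sizeof(unsigned int) == sizeof(float) == 4 */

    /* Same storage, two interpretations: memcpy makes the re-read well defined. */
    memcpy(&bits, &f, sizeof bits);
    printf("as a float     : %f\n", f);
    printf("as raw bit data: 0x%08x\n", bits);
    return 0;
}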
I need some help with pointers, specifically the following example:
#include <stdio.h>
int main()
{
    int *i, *j;
    i = (int *) 60;
    j = (int *) 40;
    printf("%d", i - j);
    return 0;
}
This code generates 10 as output. I just need to know what exactly i - j does here.
i and j point to memory locations 60 and 40, respectively.
What you're doing here is pointer subtraction. If i and j were byte pointers (char *), i-j would be 20, as one might expect.
However, with other pointer types, it returns the number of elements between the two pointers. On most systems, (int *)60 - (int *)40 would be 5, as there is room for five 4-byte integers in those twenty bytes. Apparently, your platform has 16-bit integers.
The program is probably supposed to print the pointer difference between 60 and 40, cast to pointer to int. The pointer difference is the number of ints that would fit in the array from address 40 to address 60 (exclusive).
That said, the program violates the C standard. Pointer arithmetic is undefined except with pointers pointing into the same (static, automatic or malloc'd) array, and you cannot reliably print a pointer difference with %d (use %td instead).
This is pointer arithmetic: the code i - j subtracts two pointers to int. Such arithmetic is aware of the data sizes involved, and so in this case it will return the number of ints between the two addresses.
A result of 10 indicates that you're running this on a system with 2-byte integers: 20 memory addresses between i and j, and your code prints 10, so there are 10 2-byte ints between the two addresses.
But on another system, with 4-byte integers, this would print 5: 20 memory addresses between i and j, so there are 5 4-byte ints between the two addresses.
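For a version with well-defined behaviour that shows the same scaling, point into a real array instead of casting raw integers; the names here are only illustrative:

#include <stdio.h>

int main(void)
{
    int arr[10];
    int *i = &arr[5];
    int *j = &arr[0];

    /* Both pointers point into the same array, so the subtraction is valid.
       The result is measured in elements, not bytes. */
    printf("elements apart: %td\n", i - j);                   /* 5 */
    printf("bytes apart   : %td\n", (char *)i - (char *)j);   /* 5 * sizeof(int) */
    return 0;
}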
printf("%d",i-j); return 0;
Both i and j are pointers to int, so they follow pointer arithmetic.
As per pointer arithmetic, the difference between two int pointers is always scaled by sizeof(int). If you use a compiler such as gcc, where sizeof(int) is 4, then 60 - 40 = 20 but the unit is 4, so the output is 5.
But if you use Turbo C, where sizeof(int) is 2, the output is 10.
NOTE
Whenever pointers take part in evaluating an expression, they follow pointer arithmetic.
i and j are pointers to int, which means they store the virtual address of an int variable.
Any pointer arithmetic on such a variable is performed in units of the size of the type it points to. For example, i++ increases the value from 60 to 64 if the size of int is 4 bytes.
You are getting 10 for i - j, which means the size of int is 2 in your environment. i - j always gives you how many elements (of type int) can be accommodated in that range.
So between 60 and 40 we can store 10 elements of type int when the size of int is 2 bytes.
First, two int pointers named i and j are declared. Note that their values are memory addresses (places where int objects could live), not integers themselves: that is the concept of a pointer.
Next, the pointers i and j are set to 60 and 40, respectively. These now denote spots in memory, not the integers sixty and forty, because i and j are never dereferenced.
Then it prints i - j, which subtracts the two memory addresses and scales the result by the size of an int.