I just hit some unexpected behavior while porting some code. I've boiled it down to this example:
#include <stdint.h>
#include <stdio.h>
uint32_t swap_16_p(uint8_t *value)
{
return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
}
int main()
{
uint8_t start[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xBE, 0xEF };
printf("0x%08x\n", swap_16_p(start));
return 0;
}
On a Little Endian system like x86-64 I would expect this to print 0x0000dead but instead it prints 0x00addead. Looking at the assembly output makes the issue clearer:
uint32_t swap_16_p(uint8_t *value)
{
400506: 55 push %rbp
400507: 48 89 e5 mov %rsp,%rbp
40050a: 48 89 7d f8 mov %rdi,-0x8(%rbp)
return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
40050e: 48 8b 45 f8 mov -0x8(%rbp),%rax
400512: 0f b7 00 movzwl (%rax),%eax
400515: 0f b7 c0 movzwl %ax,%eax
400518: c1 e0 08 shl $0x8,%eax
40051b: 89 c2 mov %eax,%edx
40051d: 48 8b 45 f8 mov -0x8(%rbp),%rax
400521: 0f b7 00 movzwl (%rax),%eax
400524: 66 c1 e8 08 shr $0x8,%ax
400528: 0f b7 c0 movzwl %ax,%eax
40052b: 09 d0 or %edx,%eax
}
40052d: 5d pop %rbp
40052e: c3 retq
By using eax as the scratch area for the computation, the extra byte gets shifted past the 16-bit boundary with shl $0x8,%eax. I wouldn't have expected the computation to be treated as a 32-bit value until just before the return (where it would need to be promoted to uint32_t); similar behavior is seen when storing the value in a temporary uint32_t and then printing that instead.
Have I gone against (or improperly interpreted) the C spec, or is this a compiler bug (seems unlikely since this happens in both clang and GCC)?
The integer promotions are done on the "read side", that is, while the expression is being evaluated. This means that after reading an integer value whose type is narrower than int (or unsigned int), it is immediately converted:
The following may be used in an expression wherever an int or unsigned int may be used:
— An object or expression with an integer type whose integer conversion rank is less than or equal to the rank of int and unsigned int.
— A bit-field of type _Bool, int, signed int, or unsigned int.
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions. 48)
48) The integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses.
ISO/IEC 9899:TC3 6.3.1.1-2
Therefore
*(uint16_t*)value
is immediately converted to int and then shifted.
On a little-endian system you are reading a uint16_t memory location that contains the value 0xADDE. Before the shifts are performed, the value is promoted to int, which is probably 32 bits wide on your platform, producing 0x0000ADDE. The shifts produce 0x00ADDE00 and 0x000000AD respectively. Bitwise OR produces 0x00ADDEAD.
Everything is as expected.
C language does not perform any arithmetic operations within types smaller than int (or unsigned int). Any smaller type is always promoted to int (or unsigned int) before performing the operation. This is what happens with your shifts. Your shifts are int shifts. C does not have "narrower" shifts. C does not have "narrower" additions and multiplications. C does not have "narrower" anything.
If you want a "narrower" shift (or any other operation) you have to simulate it by meticulously truncating the intermediate results by hand in order to force them into the smaller type:
(uint16_t) (*(uint16_t*) value << 8) | (uint16_t) (*(uint16_t*) value >> 8);
They will constantly spring back to int and you have to constantly beat them back into uint16_t.
This is what the compiler does:
uint32_t swap_16_p(uint8_t *value)
{
uint16_t v1 = *(uint16_t*)value; // -> 0x0000ADDE
int v2 = v1 << 8; // -> 0x00ADDE00
int v3 = v1 >> 8; // -> 0x000000AD
uint32_t v4 = v2 | v3; // -> 0x00ADDEAD
return v4;
}
So the result is well-justified.
Please note that v2 and v3 are results of integral promotion.
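For completeness, here is a sketch of the function with the intermediate results forced back into uint16_t; with the sample input it would return 0x0000DEAD (the aliasing and alignment caveats discussed in the next answer still apply):
uint32_t swap_16_p(uint8_t *value)
{
    uint16_t v = *(uint16_t*)value;                    // 0xADDE on little endian
    uint16_t swapped = (uint16_t)(v << 8) | (v >> 8);  // 0xDE00 | 0x00AD = 0xDEAD
    return swapped;                                    // zero-extended to 0x0000DEAD
}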
Let's look at your logic:
return (*(uint16_t*)value << 8 | *(uint16_t*)value >> 8);
*(uint16_t*)value is 0xADDE since your system is little-endian. (Subject to some caveats which I will mention below).
0xADDE << 8 is 0xADDE00, assuming you have 32-bit (or larger) int. Remember that left-shifting is equivalent to multiplying by a power of 2.
0xADDE >> 8 is 0xAD.
0xADDE00 | 0xAD is 0xADDEAD which is what you observed.
If you expected 0xDEAD then you are going about it the wrong way. Instead, the following code would work (and be endian-agnostic):
return (value[0] << 8) | value[1];
although my personal preference, since we are doing arithmetic, is to write it as value[0] * 0x100u + value[1].
*(uint16_t *)value has other problems. Firstly it will cause undefined behaviour if your system has an alignment restriction on integers. Secondly, it violates the strict aliasing rule: objects of type uint8_t may not be read through an lvalue of type uint16_t, again causing undefined behaviour.
If you are porting code that uses aliasing casts like this, I'd suggest disabling type-based aliasing optimization in your compiler until you fully understand the issues. In gcc the flag is -fno-strict-aliasing.
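Putting the pieces together, here is a sketch of a rewrite that avoids both the aliasing cast and the promotion surprise (it reads the bytes individually, as suggested above, so alignment and strict aliasing stop being a concern):
#include <stdint.h>

uint32_t swap_16_p(uint8_t *value)
{
    /* Interpret the two bytes as a big-endian 16-bit value: 0xDE, 0xAD -> 0xDEAD */
    return (uint32_t)value[0] << 8 | value[1];
}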
When subtracting two pointers where the first pointer is less than the second, I'm getting an underflow with the ARM processor.
Example code:
#include <stdint.h>
#include <stdbool.h>
uint8_t * p_formatted_data_end;
uint8_t formatted_text_buffer[10240];
static _Bool
Flush_Buffer_No_Checksum(void)
{
_Bool system_failure_occurred = false;
p_formatted_data_end = 0; // For demonstration purposes.
const signed int length =
p_formatted_data_end - &formatted_text_buffer[0];
if (length < 0)
{
system_failure_occurred = true;
}
//...
return true;
}
The assembly code generated by the IAR compiler is:
807 static _Bool
808 Flush_Buffer_No_Checksum(void)
809 {
\ Flush_Buffer_No_Checksum:
\ 00000000 0xE92D4070 PUSH {R4-R6,LR}
\ 00000004 0xE24DD008 SUB SP,SP,#+8
810 _Bool system_failure_occurred = false;
\ 00000008 0xE3A04000 MOV R4,#+0
811 p_formatted_data_end = 0; // For demonstration purposes.
\ 0000000C 0xE3A00000 MOV R0,#+0
\ 00000010 0x........ LDR R1,??DataTable3_7
\ 00000014 0xE5810000 STR R0,[R1, #+0]
812 const signed int length =
813 p_formatted_data_end - &formatted_text_buffer[0];
\ 00000018 0x........ LDR R0,??DataTable3_7
\ 0000001C 0xE5900000 LDR R0,[R0, #+0]
\ 00000020 0x........ LDR R1,??DataTable7_7
\ 00000024 0xE0505001 SUBS R5,R0,R1
814 if (length < 0)
\ 00000028 0xE3550000 CMP R5,#+0
\ 0000002C 0x5A000009 BPL ??Flush_Buffer_No_Checksum_0
815 {
816 system_failure_occurred = true;
\ 00000030 0xE3A00001 MOV R0,#+1
\ 00000034 0xE1B04000 MOVS R4,R0
The subtraction instruction SUBS R5,R0,R1 is equivalent to:
R5 = R0 - R1
The N bit in the CPSR register will be set if the result is negative.
Ref: Section A4.1.106 SUB of ARM Architecture Reference Manual
Let:
R0 == 0x00000000
R1 == 0x802AC6A5
Register R5 will have the value 0x7FD5395C.
The N bit of the CPSR register is 0, indicating the result is not negative.
The Windows 7 Calculator application reports a negative result, but only when the value is expressed in 64 bits: FFFFFFFF7FD5395C.
As an experiment, I used the ptrdiff_t type for the length, and the same assembly language was generated.
Questions:
Is it valid behavior for the result of a pointer subtraction to underflow?
What is the recommended data type for viewing the distance as negative?
Platform:
Target Processor: ARM Cortex A8 (TI AM3358)
Compiler: IAR 7.40
Development platform: Windows 7.
Is it valid behavior for the result of a pointer subtraction to underflow?
Yes, because the behavior in your case is undefined. Any behavior is valid there. As was observed in comments, the difference between two pointers is defined only for pointers that point to elements of the same array object, or one past the last element of the array object (C2011, 6.5.6/9).
What is the recommended data type for viewing the distance as negative?
Where it is defined, the result of subtracting two pointers is specified to be of type ptrdiff_t, a signed integer type of implementation-defined size. If you evaluate p1 - p2, where p1 points to an array element and p2 points to a later element of the same array, then the result will be a negative number representable as a ptrdiff_t.
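For example, the difference between two pointers into the same array is well-defined and may legitimately be negative (a minimal sketch):
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    int a[10];
    ptrdiff_t d = &a[2] - &a[7];   /* well-defined: both pointers point into a */
    printf("%td\n", d);            /* prints -5 */
    return 0;
}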
Although this is UB as stated in the other answer, most C implementations will simply subtract these pointers anyway, using ptrdiff_t-sized arithmetic (or possibly arithmetic appropriate to their word size, which might also differ if the operands are near/far/huge pointers). The result should fit inside ptrdiff_t, which is usually a typedef-ed int on ARM:
typedef int ptrdiff_t;
So the issue with your code in this particular case will simply be that you are treating an unsigned int value as signed, and it doesn't fit. As specified in your question, the address of formatted_text_buffer is 0x802AC6A5, which fits inside unsigned int, but (int)0x802AC6A5 in two's complement form is actually a negative number (-0x7FD5395B). So subtracting a negative number from 0 will return a positive int as expected.
Signed 32-bit integer subtraction will work correctly if both operands are less than 0x7FFFFFFF apart, and it's reasonable to expect your arrays to be smaller than that:
// this will work
const int length = &formatted_text_buffer[0] - &formatted_text_buffer[100];
Or, if you really need to subtract pointers which don't fit into signed 32-bit ints, use long long instead:
// ...but I doubt you really want this
const long long length = (long long)p_formatted_data_end -
(long long)&formatted_text_buffer[0];
I am using the following to convert a char[4] to a uint32_t.
frameSize = (uint32_t)(frameSizeBytes[0] << 24) | (frameSizeBytes[1] << 16) | (frameSizeBytes[2] << 8) | frameSizeBytes[3];
frameSize is a uint32_t variable, and frameSizeBytes is a char[4] array. When the array contains, for example, the following values (in hex)
00 00 02 7b
frameSize is set to 635, which is the correct value. This method also works for other combinations of bytes, with the exception of the following
00 00 9e ba
for this case, frameSize is set to 4294967226, which, according to this website, is incorrect, as it should be 40634 instead. Why is this behavior happening?
Your char type is signed on your particular implementation, and its values undergo integer promotion (with sign extension) when used with most operators. Use a cast to unsigned char where the signed array elements are used.
EDIT: as pointed out by Olaf in the comments, you should actually prefer casts to unsigned int (assuming a common 32-bit unsigned int) or uint32_t to avoid potential undefined behavior with the << 24 shift operation.
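Applied to the expression from the question, that might look like this (a sketch reusing the question's variable names; the casts keep the bytes unsigned and the shifts within uint32_t):
frameSize = (uint32_t)(unsigned char)frameSizeBytes[0] << 24 |
            (uint32_t)(unsigned char)frameSizeBytes[1] << 16 |
            (uint32_t)(unsigned char)frameSizeBytes[2] <<  8 |
            (uint32_t)(unsigned char)frameSizeBytes[3];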
To keep things tidy I'd suggest an inline function along the lines of:
static inline uint32_t be_to_uint32(void const *ptr)
{
unsigned char const *p = ptr;
return p[0] * 0x1000000ul + p[1] * 0x10000 + p[2] * 0x100 + p[3];
}
Note: by using an unsigned long constant this code avoids the problem of unsigned char being promoted to signed int and then having the multiplication/shift cause integer overflow (an annoying historical feature of C). Of course you could also use ((uint32_t)p[0]) << 24 as suggested by Olaf.
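A hypothetical usage example with the second set of bytes from the question:
unsigned char frameSizeBytes[4] = { 0x00, 0x00, 0x9E, 0xBA };
uint32_t frameSize = be_to_uint32(frameSizeBytes);   /* 40634, i.e. 0x9EBA */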
I need your help understanding how bit-fields work in C programming.
I have declared this struct:
struct message
{
unsigned char first_char : 6;
unsigned char second_char : 6;
unsigned char third_char : 6;
unsigned char fourth_char : 6;
unsigned char fifth_char : 6;
unsigned char sixth_char : 6;
unsigned char seventh_char : 6;
unsigned char eigth_char : 6;
}__packed message;
I saved the size of the struct into an integer using sizeof(message).
I thought the size would be 6, since 8 * 6 = 48 bits, which is 6 bytes,
but instead it reports a size of 8 bytes.
Can anyone explain to me why, and how exactly bit fields and their alignments work?
EDIT
I forgot to explain the situation where I use the struct.
Let's say I receive a packet of 6 bytes in this form:
void * packet
I then cast the data like this:
message * msg = (message *)packet;
Now I want to print the value of each member. Although I declared the members as 6 bits wide, each member actually uses 8 bits, which causes wrong results when printing.
For example, say I receive the following data:
00001111 11110000 00110011 00001111 00111100 00011100
I thought the values of the members would be as shown below:
first_char = 000011
second = 111111
third = 000000
fourth = 110011
fifth = 000011
sixth = 110011
seventh = 110000
eigth = 011100
but that is not what is happening. I hope I explained it well; if not, please tell me.
Bit-fields are not required to run across different underlying storage elements ("units"), so you're witnessing that each of your fields occupies an entire unsigned char. The behaviour is implementation-defined, though; cf. C11 6.7.2.1/11:
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a
structure shall be packed into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a unit (high-order to
low-order or low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified.
Additionally, no bit-field may be larger than what would fit into one single unit, by the constraint in 6.7.2.1/4:
The expression that specifies the width of a bit-field shall be an integer constant
expression with a nonnegative value that does not exceed the width of an object of the
type that would be specified were the colon and expression omitted.
Almost everything about bit-fields is implementation defined. In particular, how bit-fields are packed together is implementation defined. An implementation need not let bit-fields cross the boundaries of addressable storage units, and it appears that yours does not.
ISO/IEC 9899:2011 §6.7.2.1 Structure and union specifiers
¶11 An implementation may allocate any addressable storage unit large enough to hold a bit-field.
If enough space remains, a bit-field that immediately follows another bit-field in a
structure shall be packed into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is
implementation-defined. The order of allocation of bit-fields within a unit (high-order to
low-order or low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified
And that is by no means the end of the 'implementation-defined' features of bit-fields.
[Please choose the answer by Kerrek SB rather than this one as it has the crucial information about §6.7.2.1 ¶4 as well.]
Example code
#include <stdio.h>
#if !defined(BITFIELD_BASE_TYPE)
#define BITFIELD_BASE_TYPE char
#endif
int main(void)
{
typedef struct Message
{
unsigned BITFIELD_BASE_TYPE first_char : 6;
unsigned BITFIELD_BASE_TYPE second_char : 6;
unsigned BITFIELD_BASE_TYPE third_char : 6;
unsigned BITFIELD_BASE_TYPE fourth_char : 6;
unsigned BITFIELD_BASE_TYPE fifth_char : 6;
unsigned BITFIELD_BASE_TYPE sixth_char : 6;
unsigned BITFIELD_BASE_TYPE seventh_char : 6;
unsigned BITFIELD_BASE_TYPE eighth_char : 6;
} Message;
typedef union Bytes_Message
{
Message m;
unsigned char b[sizeof(Message)];
} Bytes_Message;
Bytes_Message u;
printf("Message size: %zu\n", sizeof(Message));
u.m.first_char = 0x3F;
u.m.second_char = 0x15;
u.m.third_char = 0x2A;
u.m.fourth_char = 0x11;
u.m.fifth_char = 0x00;
u.m.sixth_char = 0x23;
u.m.seventh_char = 0x1C;
u.m.eighth_char = 0x3A;
printf("Bit fields: %.2X %.2X %.2X %.2X %.2X %.2X %.2X %.2X\n",
u.m.first_char, u.m.second_char, u.m.third_char,
u.m.fourth_char, u.m.fifth_char, u.m.sixth_char,
u.m.seventh_char, u.m.eighth_char);
printf("Bytes: ");
for (size_t i = 0; i < sizeof(Message); i++)
printf(" %.2X", u.b[i]);
putchar('\n');
return 0;
}
Sample compilations and runs
Testing on Mac OS X 10.9.2 Mavericks with GCC 4.9.0 (64-bit build; sizeof(int) == 4 and sizeof(long) == 8). Source code is in bf.c; the program created is bf.
$ gcc -DBITFIELD_BASE_TYPE=char -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 3F 15 2A 11 00 23 1C 3A
$ gcc -DBITFIELD_BASE_TYPE=short -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F 05 6A 04 C0 08 9C 0E
$ gcc -DBITFIELD_BASE_TYPE=int -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F A5 46 00 23 A7 03 00
$ gcc -DBITFIELD_BASE_TYPE=long -O3 -g -std=c11 -Wall -Wextra -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Werror bf.c -o bf
$ ./bf
Message size: 8
Bit fields: 3F 15 2A 11 00 23 1C 3A
Bytes: 7F A5 46 C0 C8 E9 00 00
$
Note that there are 4 different sets of results for the 4 different type sizes. Note, too, that a compiler is not required to allow these types. The standard says (§6.7.2.1 again):
¶4 The expression that specifies the width of a bit-field shall be an integer constant
expression with a nonnegative value that does not exceed the width of an object of the
type that would be specified were the colon and expression omitted.122) If the value is
zero, the declaration shall have no declarator.
¶5 A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed
int, unsigned int, or some other implementation-defined type.
122) While the number of bits in a _Bool object is at least CHAR_BIT, the width (number of sign and
value bits) of a _Bool may be just 1 bit.
Another sub-question
Can you explain to me why I was wrong in thinking I would get a size of 6? I asked a lot of my friends but they don't know much about bit-fields.
I'm not sure I know all that much about bit-fields. I've never used them except in answers to questions on Stack Overflow. They're of no use when writing portable software, and I aim to write portable software — or, at least, software that is not gratuitously non-portable.
I imagine that you assumed a layout of the bits roughly equivalent to this:
+------+------+------+------+------+------+------+------+
| f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 |
+------+------+------+------+------+------+------+------+
It is supposed to represent 48 bits in 8 groups of 6 bits, laid out contiguously with no spaces or padding.
Now, one reason why that can't happen is the rule from §6.7.2.1 ¶4 that when you use a type T for a bit-field, then the width of the bit-field cannot be larger than CHAR_BIT * sizeof(T). In your code, T was unsigned char, so bit-fields cannot be larger than 8 bits or else they cross storage unit boundaries. Of course, yours are only 6 bits, but it means that you can't fit a second bit-field into the storage unit. If T is unsigned short, then only two 6-bit fields fit into a 16-bit storage unit; if T is a 32-bit int, then five 6-bit fields can fit; if T is a 64-bit unsigned long, then 10 6-bit fields can fit.
Another reason is that access to such bit-fields that cross byte boundaries would be moderately inefficient. For example, given (Message as defined in my example code):
Message bf = …initialization code…
int nv = 0x2A;
bf.second_char = nv;
Suppose that the code treated the values as being stored in a packed byte array with fields overlapping byte boundaries. Then the code needs to set the bits marked y below:
Byte 0 | Byte 1
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
| x | x | x | x | x | x | y | y | y | y | y | y | z | z | z | z |
+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
This is a pattern of bits. The x bits might correspond to first_char; the z bits might correspond to part of third_char; and the y bits to the old value of second_char. So the assignment has to preserve the first 6 bits of Byte 0, place the top 2 bits of the new value in its last two bits, and place the remaining 4 bits in the top of Byte 1:
((unsigned char *)&bf)[0] = (((unsigned char *)&bf)[0] & 0xFC) | ((nv >> 4) & 0x03);
((unsigned char *)&bf)[1] = (((unsigned char *)&bf)[1] & 0x0F) | ((nv << 4) & 0xF0);
If it is treated as a 16-bit unit, then the code would be equivalent to:
((unsigned short *)&bf)[0] = (((unsigned short *)&bf)[0] & 0xFC0F) | ((nv << 4) & 0x03F0);
The 32-bit or 64-bit assignments are somewhat similar to the 16-bit version:
((unsigned int *)&bf)[0] = (((unsigned int *)&bf)[0] & 0xFC0FFFFF) |
((nv << 20) & 0x03F00000);
((unsigned long *)&bf)[0] = (((unsigned long *)&bf)[0] & 0xFC0FFFFFFFFFFFFF) |
((nv << 52) & 0x03F0000000000000);
This makes a particular set of assumptions about the way the bits are laid out inside the bit-field. Different assumptions come up with slightly different expressions, but something analogous to this is needed if the bit-field is treated as a contiguous array of bits.
By comparison, with the 6-bits per byte layout actually used, the assignment becomes much simpler:
((unsigned char *)&bf)[1] = nv & 0x3F;
and it would be legitimate for the compiler to omit the mask operation shown, since the values of the padding bits are indeterminate (though the store would still have to be a full 8-bit assignment).
The amount of code needed to access a bit-field is one reason why most people avoid them. The fact that different compilers can make different layout assumptions for the same definition means that values cannot be reliably passed between machines of different types. Usually, an ABI will define the details that Standard C does not, but if one machine is a PowerPC or SPARC and the other is based on Intel, then all bets are off. It becomes better to do the shifting and masking yourself; at least the cost of the computation is visible.
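As an illustration, here is a sketch of unpacking the eight 6-bit values from a 6-byte packet by hand, using the bit order the questioner expected (first field in the high bits of the first byte); the function name and layout are assumptions, not part of any existing code:
#include <stdint.h>

/* Unpacks 8 consecutive 6-bit fields from a 48-bit big-endian bit stream. */
void unpack_6bit_fields(const uint8_t packet[6], uint8_t out[8])
{
    uint64_t bits = 0;
    for (int i = 0; i < 6; i++)
        bits = bits << 8 | packet[i];            /* first byte ends up in the high bits */
    for (int i = 0; i < 8; i++)
        out[i] = (bits >> (42 - 6 * i)) & 0x3F;  /* extract the fields from the top down */
}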
I recently discovered a discrepancy between two ways to set a variable to all 1s in C.
Here is a small code sample to illustrate the odd behaviour on my 64 bit Linux system.
// compile with `gcc -o weird_shift_behaviour weird_shift_behaviour.c`
#include <stdio.h>
int main(void){
long long foo = 0;
long long bar = 0;
int i;
puts("<foo> will be set to all 1s by repeatedly shifting 1 to the left and OR-ing the result with <foo>.");
puts("<bar> will be set to all 1s by repeatedly OR-ing it with 1 and shifting <bar> to the left one step.");
for(i=0;i<8*(int)sizeof(long long)-1;++i){
foo |= (1<<i);
bar = bar<<1 | 1;
printf("<i>: %02d <foo>: %016llx <bar>: %016llx \n",i,foo,bar);
}
return 0;
}
I do know that this is not the canonical way to set an integer type to all 1s in C, but I did try it nonetheless. Here is the interesting part of the output the sample program generates:
<i>: 29 <foo>: 000000003fffffff <bar>: 000000003fffffff
<i>: 30 <foo>: 000000007fffffff <bar>: 000000007fffffff
<i>: 31 <foo>: ffffffffffffffff <bar>: 00000000ffffffff
<i>: 32 <foo>: ffffffffffffffff <bar>: 00000001ffffffff
Why does this odd behaviour occur? I could not think of any reasonable explanation so far.
1<<i
1 is of type int and 1 << 31 is undefined behavior when int is 32-bit wide.
From the C Standard:
(C99, 6.5.7p4) "The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. [...] If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined."
To fix your issue, replace 1<<i with 1ULL << i.
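In context, the corrected line would look like this (a sketch; foo and i are the variables from the question):
foo |= 1ULL << i;   /* the shift is now done in unsigned long long, so i >= 31 is fine */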
I have this program in C:
int main(int argc, char *argv[])
{
int i=300;
char *ptr = &i;
*++ptr=2;
printf("%d",i);
return 0;
}
The output is 556 on a little-endian machine.
I tried to understand the output. Here is my explanation.
The question is: will the answer remain the same on a big-endian machine?
i = 300;
=> i = 100101100 //in binary in word format => B B Hb 0001 00101100 where B = Byte and Hb = Half Byte
(A)=> in memory (assuming it is Little endian))
0x12345678 - 1100 - 0010 (is this correct for little endian?)
0x12345679 - 0001 - 0000
0x1234567a - 0000 - 0000
0x1234567b - 0000 - 0000
0x1234567c - Location of the next integer (location of ptr++ or ptr + 1, where ptr is an integer pointer; as ptr is of type int, on doing ++ptr it will increment by 4 bytes (size of int))
when
(B)we do char *ptr = &i;
ptr will become of type char => on doing ++ptr it will increment by 1 byte (size of char)
so on doing ++ptr it will jump to location -> 0x12345679 (which has 0001 - 0000)
now we are doing
*++ptr = 2
=> 0x12345679 will be overwritten by 2 => 0x12345679 will have 0010 - 0000 instead of 0001 - 0000
so the new memory content will look like this :
(C)
0x12345678 - 1100 - 0010
0x12345679 - 0010 - 0000
0x1234567a - 0000 - 0000
0x1234567b - 0000 - 0000
which is equivalent to => B B Hb 0010 00101100 where B = Byte and Hb = Half Byte
Is my reasoning correct? Is there any shorter method for this?
In a little-endian 32-bit system, the int 300 (0x012c) is typically(*) stored as 4 sequential bytes, lowest first: 2C 01 00 00. When you increment the char pointer that was formerly the int pointer &i, you're pointing at the second byte of that sequence, and setting it to 2 makes the sequence 2C 02 00 00 -- which, when turned back into an int, is 0x22c or 556.
(As for your understanding of the bit sequence...it seems a bit off. Endianness affects byte order in memory, as the byte is the smallest addressable unit. The bits within the byte don't get reversed; the low-order byte will be 2C (00101100) whether the system is little-endian or big-endian. (Even if the system did reverse the bits of a byte, it'd reverse them again to present them to you as a number, so you wouldn't notice a difference.) The big difference is where that byte appears in the sequence. The only places where bit order matters, is in hardware and drivers and such where you can receive less than a byte at a time.)
In a big-endian system, the int is typically(*) represented by the byte sequence 00 00 01 2C (differing from the little-endian representation solely in the byte order -- highest byte comes first). You're still modifying the second byte of the sequence, though...making 00 02 01 2C, which as an int is 0x02012c or 131372.
(*) Lots of things come into play here, including two's complement (which almost all systems use these days...but C doesn't require it), the value of sizeof(int), alignment/padding, and whether the system is truly big- or little-endian or a half-assed implementation of it. This is a big part of why mucking around with the bytes of a bigger type so often leads to undefined or implementation-specific behavior.
This is implementation defined. The internal representation of an int is not known according to the standard, so what you're doing is not portable. See section 6.2.6.2 in the C standard.
However, as most implementations use two's complement representation of signed ints, the endianness will affect the result as described in cHao's answer.
This is your int:
int i = 300;
And this is what the memory contains at &i: 2c 01 00 00
With the next instruction you assign the address of i to ptr, and then you move to the next byte with ++ptr and change its value to 2:
char *ptr = &i;
*++ptr = 2;
So now the memory contains: 2c 02 00 00 (i.e. 556).
The difference is that in a big-endian system in the address of i you would have seen 00 00 01 2C, and after the change: 00 02 01 2C.
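If you want to see this on your own machine, here is a minimal sketch that dumps the bytes of i in memory order (it prints 2c 01 00 00 on a little-endian system and 00 00 01 2c on a big-endian one):
#include <stdio.h>

int main(void)
{
    int i = 300;
    const unsigned char *p = (const unsigned char *)&i;
    for (size_t n = 0; n < sizeof i; n++)
        printf("%02x ", (unsigned)p[n]);   /* bytes of i in memory order */
    putchar('\n');
    return 0;
}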
Even though the internal representation of an int is implementation-defined:
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways: — the corresponding value with sign bit 0 is negated (sign and magnitude); — the sign bit has the value −(2^M) (two's complement); — the sign bit has the value −(2^M − 1) (ones' complement). Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
I like experiments and that's the reason for having the PowerPC G5.
stacktest.c:
int main(int argc, char *argv[])
{
int i=300;
char *ptr = &i;
*++ptr=2;
/* Added the Hex dump */
printf("%d or %x\n",i, i);
return 0;
}
Build command:
powerpc-apple-darwin9-gcc-4.2.1 -o stacktest stacktest.c
Output:
131372 or 2012c
To sum up: cHao's answer is complete, and in case you're in doubt, here is the experimental evidence.