int to char casting - c

int i = 259; /* 03010000 in Little Endian ; 00000103 in Big Endian */
char c = (char)i; /* returns 03 in both Little and Big Endian?? */
On my computer it assigns 03 to char c, and I have a little-endian machine, but I don't know whether the cast reads the least significant byte or the byte at the address where i starts.

Endianness doesn't actually change anything here. The conversion operates on the value of i, not on one of its stored bytes (MSB, LSB, etc.).
If char is unsigned it will wrap around. Assuming an 8-bit char, 259 % 256 = 3.
If char is signed, the result is implementation-defined. Thank you pmg: 6.3.1.3/3 in the C99 Standard.

Since you're casting from a larger integer type to a smaller one, it takes the least significant part regardless of endianness. If you were casting pointers instead, though, it would take the byte at the address, which would depend on endianness.
So c = (char)i assigns the least-significant byte to c, but c = *((char *)(&i)) would assign the first byte at the address of i to c, which would be the same thing on little-endian systems only.
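A minimal sketch contrasting the two, assuming an 8-bit char (the variable names are just for illustration, and the signed-char result shown is the typical implementation-defined one):

#include <stdio.h>

int main(void)
{
    int i = 259;                    /* 0x00000103 */
    char by_value = (char)i;        /* conversion of the value: keeps the low 8 bits -> 3 */
    char by_memory = *((char *)&i); /* first byte at &i: 3 on little endian, 0 on big endian */

    printf("by value: %d, by memory: %d\n", by_value, by_memory);
    return 0;
}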

If you want to test for little/big endian, you can use a union:
int isBigEndian (void)
{
    union foo {
        size_t i;
        char cp[sizeof(size_t)];
    } u;
    u.i = 1;
    return *u.cp != 1;
}
It works because in little endian the bytes of u.i would look like 01 00 ... 00, but in big endian they would be 00 ... 00 01 (the ... is made up of zeros). So if the first byte is 0, the test returns true; otherwise it returns false. Beware, however, that there also exist mixed-endian machines that store data differently (some can switch endianness; others just always use a third order). The PDP-11, for example, stored a 32-bit int as two 16-bit words with the more significant word first, but with the bytes inside each word in little-endian order (so 0x01234567 ended up in memory as the bytes 23 01 67 45).
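A quick driver for the test above (a minimal sketch; compile it together with the isBigEndian() definition):

#include <stdio.h>

int isBigEndian(void);   /* the union-based test shown above */

int main(void)
{
    printf("this machine looks %s-endian\n", isBigEndian() ? "big" : "little");
    return 0;
}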

When casting from int (commonly 4 bytes) to char (1 byte), only the least significant byte is kept.
E.g.:
int x = 0x3F1;                      // 0x3F1 = 0000 0011 1111 0001
char y = (char)x;                   // 1111 0001 --> -15 in decimal (two's complement, signed char)
unsigned char z = (unsigned char)x; // 1111 0001 --> 241 in decimal
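A runnable version of that example, as a sketch assuming an 8-bit signed plain char and two's-complement representation (typical platforms):

#include <stdio.h>

int main(void)
{
    int x = 0x3F1;                      /* 0000 0011 1111 0001 */
    char y = (char)x;                   /* low byte as signed char: -15 */
    unsigned char z = (unsigned char)x; /* low byte as unsigned char: 241 */

    printf("y = %d, z = %d\n", y, z);   /* typically prints: y = -15, z = 241 */
    return 0;
}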

Related

C, Little and Big Endian confusion

I'm trying to understand memory byte order in C, but I'm confused.
I tested my app against some values from this site to verify my output: www.yolinux.com/TUTORIALS/Endian-Byte-Order.html
For the 64-bit value I use in my C program:
volatile long long ll = (long long)1099511892096;
__mingw_printf("\tlong long, %u Bytes, %u bits,\t%lld to %lli, %lli, 0x%016llX\n", sizeof(long long), sizeof(long long)*8, LLONG_MIN, LLONG_MAX , ll, ll);
void printBits(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char *) ptr;
    unsigned char byte;
    int i, j;
    printf("\t");
    for (i = size - 1; i >= 0; i--)
    {
        for (j = 7; j >= 0; j--)
        {
            byte = b[i] & (1 << j);
            byte >>= j;
            printf("%u", byte);
        }
        printf(" ");
    }
    puts("");
}
Out
long long, 8 Bytes, 64 bits, -9223372036854775808 to 9223372036854775807, 1099511892096, 0x0000010000040880
80 08 04 00 00 01 00 00 (Little-Endian)
10000000 00001000 00000100 00000000 00000000 00000001 00000000 00000000
00 00 01 00 00 04 08 80 (Big-Endian)
00000000 00000000 00000001 00000000 00000000 00000100 00001000 10000000
Tests
0x8008040000010000, 1000000000001000000001000000000000000000000000010000000000000000 // online website hex2bin conv.
1000000000001000000001000000000000000000000000010000000000000000 // my C app
0x8008040000010000, 1000010000001000000001000000000000000100000000010000000000000000 // yolinux.com
0x0000010000040880, 0000000000000000000000010000000000000000000001000000100010000000 //online website hex2bin conv., 1099511892096 ! OK
0000000000000000000000010000000000000000000001000000100010000000 // my C app, 1099511892096 ! OK
[Convert]::ToInt64("0000000000000000000000010000000000000000000001000000100010000000", 2) // using powershell for other verif., 1099511892096 ! OK
0x0000010000040880, 0000000000000000000000010000010000000000000001000000100010000100 // yolinux.com, 1116691761284 (from powershell bin conv.) ! BAD !
Problem
The yolinux.com website announces 0x0000010000040880 for BIG ENDIAN, but my computer uses LITTLE ENDIAN, I think (Intel processor),
and I get the same value 0x0000010000040880 from my C app and from another website's hex2bin converter.
__mingw_printf(...0x%016llX...,...ll) also prints 0x0000010000040880, as you can see.
Following the yolinux website, I have swapped my "(Little-Endian)" and "(Big-Endian)" labels in my output for the moment.
Also, the sign bit must be 0 for a positive number; that's the case in my result but also in the yolinux result (so it doesn't help me to be sure).
If I understand endianness correctly, only bytes are swapped, not bits, and my groups of bits seem to be correctly reversed.
Is it simply an error on yolinux.com, or am I missing a step about 64-bit numbers and C programming?
When you print some "multi-byte" integer using printf (and the correct format specifier) it doesn't matter whether the system is little or big endian. The result will be the same.
The difference between little and big endian is the order that multi-byte types are stored in memory. But once data is read from memory into the core processor, there is no difference.
This code shows how an integer (4 bytes) is placed in memory on my machine.
#include <stdio.h>

int main()
{
    unsigned int u = 0x12345678;
    printf("size of int is %zu\n", sizeof u);
    printf("DEC: u=%u\n", u);
    printf("HEX: u=0x%x\n", u);
    printf("memory order:\n");
    unsigned char * p = (unsigned char *)&u;
    for (int i = 0; i < sizeof u; ++i)
        printf("address %p holds %x\n", (void *)&p[i], p[i]);
    return 0;
}
Output:
size of int is 4
DEC: u=305419896
HEX: u=0x12345678
memory order:
address 0x7ffddf2c263c holds 78
address 0x7ffddf2c263d holds 56
address 0x7ffddf2c263e holds 34
address 0x7ffddf2c263f holds 12
So I can see that I'm on a little endian machine as the LSB (least significant byte, i.e. 78) is stored on the lowest address.
Executing the same program on a big endian machine would (assuming same address) show:
size of int is 4
DEC: u=305419896
HEX: u=0x12345678
memory order:
address 0x7ffddf2c263c holds 12
address 0x7ffddf2c263d holds 34
address 0x7ffddf2c263e holds 56
address 0x7ffddf2c263f holds 78
Now it is the MSB (most significant byte, i.e. 12) that is stored at the lowest address.
The important thing to understand is that this only relates to "how multi-byte type are stored in memory". Once the integer is read from memory into a register inside the core, the register will hold the integer in the form 0x12345678 on both little and big endian machines.
There is only a single way to represent an integer in decimal, binary or hexadecimal format. For example, number 43981 is equal to 0xABCD when written as hexadecimal, or 0b1010101111001101 in binary. Any other value (0xCDAB, 0xDCBA or similar) represents a different number.
The way your compiler and CPU choose to store this value internally is irrelevant as far as the C standard is concerned; the value could be stored as 36-bit ones' complement if you're particularly unlucky, as long as all operations mandated by the standard have equivalent effects.
You will rarely have to inspect your internal data representation when programming. Practically the only time you care about endianness is when working on a communication protocol, because then the binary format of the data must be precisely defined; but even then your code will not differ based on the architecture:
#include <stdint.h>

// The input value is big endian; this is defined
// by the communication protocol.
uint32_t parse_comm_value(const unsigned char * ptr)
{
    // Bit shifts in C have the same meaning regardless
    // of the endianness of your architecture. Casting
    // each byte to uint32_t before shifting avoids
    // sign extension and signed overflow on the high byte.
    uint32_t result = 0;
    result |= (uint32_t)(*ptr++) << 24;
    result |= (uint32_t)(*ptr++) << 16;
    result |= (uint32_t)(*ptr++) << 8;
    result |= (uint32_t)(*ptr++);
    return result;
}
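For instance, parsing the big-endian byte sequence 12 34 56 78 yields 0x12345678 on any host. A minimal sketch (compile it together with the function above; the buffer contents are just for illustration):

#include <stdio.h>
#include <stdint.h>

uint32_t parse_comm_value(const unsigned char * ptr);  /* defined above */

int main(void)
{
    const unsigned char buf[4] = { 0x12, 0x34, 0x56, 0x78 };  /* big-endian wire format */
    printf("0x%08x\n", (unsigned)parse_comm_value(buf));      /* prints 0x12345678 */
    return 0;
}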
Tl;dr calling a standard function like printf("0x%llx", number); always prints the correct value using the specified format. Inspecting the contents of memory by reading individual bytes gives you the representation of the data on your architecture.

unable to understand the output of union program in C

I know the basic properties of unions in C but still couldn't understand the output; can somebody explain this?
#include <stdio.h>

int main()
{
    union uni_t {
        int i;
        char ch[2];
    };
    union uni_t z = {512};
    printf("%d%d", z.ch[0], z.ch[1]);
    return 0;
}
The output when running this program is
02
union a
{
    int i;
    char ch[2];
};
This declares a type union a, the contents of which (i.e. the memory area of a variable of this type) could be accessed as either an integer (a.i) or a 2-element char array (a.ch).
union a z ={512};
This defines a variable z of type union a and initializes its first member (which happens to be a.i of type int) to the value of 512. (Cantfindname has the binary representation of that.)
printf( "%d%d", z.ch[0], z.ch[1] );
This takes the first character, then the second character from a.ch, and prints their numerical values. Again, Cantfindname talks about endianness and how it affects the results. Basically, you are taking apart an int byte-by-byte.
And the whole shebang is apparently assuming that sizeof( int ) == 2, which hasn't been true for desktop computers for... quite some time, so you might want to be looking at a more up-to-date tutorial. ;-)
What you get here is the result of endianness (http://en.wikipedia.org/wiki/Endianness).
512 is 0b0000 0010 0000 0000 in binary, which on a little-endian machine is stored in memory least significant byte first, i.e. as 0000 0000 0000 0010. So ch[0] reads the first byte in memory, the least significant one (0b0000 0000 = 0 in decimal), and ch[1] reads the next byte (0b0000 0010 = 2 in decimal).
Note that on a 32-bit machine, where sizeof(int) is 4, a little-endian system still prints 02, because ch[0] and ch[1] overlay the two lowest-addressed (least significant) bytes of i. Using a 16-bit system, or a short int with a size of 2 bytes as below, simply makes the union members the same size.
A Union is a variable that may hold (at different times) objects of different types and sizes, with the compiler keeping track of size and alignment requirements.
union uni_t
{
    short int i;
    char ch[2];
};
This code snippet declares a union having two members: a short integer and a two-element character array.
The union can be used to hold different values at different times by simply assigning values to it.
union uni_t z = {512};
This defines a variable z of type union uni_t and initializes the integer member (i) to the value 512.
So the value stored in z becomes: 0b0000 0010 0000 0000
When this value is accessed through the character array on a little-endian machine, ch[0] refers to the least significant byte and ch[1] to the most significant byte:
ch[1] = 0b00000010 = 2
ch[0] = 0b00000000 = 0
So printf("%d%d",z.ch[0],z.ch[1]) results to
02
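A sketch that makes the layout visible for any size of int (it assumes nothing non-standard; reading an object through unsigned char is always permitted):

#include <stdio.h>

int main(void)
{
    int i = 512;
    const unsigned char *p = (const unsigned char *)&i;

    /* dump every byte of i, lowest address first */
    for (size_t k = 0; k < sizeof i; ++k)
        printf("byte %zu: %d\n", k, p[k]);
    return 0;
}

On a 32-bit little-endian machine this prints 0, 2, 0, 0 for bytes 0 to 3, which is consistent with the 02 output above.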

Perl unpack "S*" equivalent in C

I'm having a little trouble understanding Perl's unpack in some code I'm reading, specifically with the S* template.
$data = "FF";
print "$data - ", unpack("S*", $data), "\n";
# > FF - 17990
What is the equivalent of this in C?
Why?
Thanks very much for your help
Your code in C would look (roughly) like this:
const char *data = "FA";
unsigned short s;
memcpy( &s, data, strlen(data) );
printf("%s = %d\n", data, s);
This only handles your case with two characters, while unpack('S*',...) will return a list of shorts corresponding to its input.
Why? One of the primary motivations for pack and unpack was to make it easier interchange binary data with C structures.
perlpacktut is a good place to start.
unpack 'S' casts two bytes into a uint16_t.
#include <stdint.h>
const char *data = "\x46\x41";
uint16_t n;
memcpy(&n, data, sizeof(n)); // n = 0x4146 or 0x4641
Don't forget to check the number of bytes in data before doing this!
Notice that it can give two different results based on the system.
On a little-endian system (e.g. x86, x64), unpack 'S' is also equivalent to
uint16_t n = (data[1] << 8) | data[0]; // 0x4146
On a big-endian system, unpack 'S' is also equivalent to
uint16_t n = (data[0] << 8) | data[1]; // 0x4641
By the way, you might be tempted to do the following, but it's not portable due to memory alignment issues:
uint16_t n = *((const uint16_t *)data);
I’m answering my own question, so I might have some things wrong, but I'll leave this here for anyone coming in the future.
First, let's change my example to
$data = "FA";
print "$data - ", unpack("S*", $data), "\n";
# > FA - 16710
since having “FF” wasn’t particularly helpful.
The question is: how did we get from “FA” to 16710?
First, the character ‘F’ is converted to its ASCII value—70. In binary, this is 0100 0110 (note that I padded a leading zero so it’s clear that it’s a whole byte).
Then, we need the ASCII value of ‘A’—65. In binary, 0100 0001.
So we have F corresponding to 0100 0110 and A corresponding to 0100 0001.
Then we just glue these two binary values together, except we put the A first:
0100 0001 0100 0110
And converting 0100 0001 0100 0110 to decimal gives 16,710.
Note: I think the order in which the bytes are glued together might be different on different computers, so while the principle here should apply everywhere, the numbers might be different.
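The same arithmetic in C, for comparison; a sketch assuming ASCII and the little-endian case where the second character becomes the high byte, as in the walkthrough above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const unsigned char data[] = "FA";                  /* 'F' = 70 = 0x46, 'A' = 65 = 0x41 */
    uint16_t n = (uint16_t)((data[1] << 8) | data[0]);  /* 0x4146 */
    printf("%u\n", (unsigned)n);                        /* prints 16710 */
    return 0;
}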

Unsigned Char pointing to unsigned integer

I don't understand why the following code prints out 7 2 3 0; I expected it to print out 1 9 7 1. Can anyone explain why it is printing 7 2 3 0?
unsigned int e = 197127;
unsigned char *f = (char *) &e;
printf("%ld\n", sizeof(e));
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d\n", *f);
Computers work with binary, not decimal, so 197127 is stored as a binary number, not as a series of separate decimal digits.
197127₁₀ = 00030207₁₆ = 0011 0000 0010 0000 0111₂
Supposing your system uses little endian, 0x00030207 is stored in memory as the bytes 0x07 0x02 0x03 0x00, which is printed out as 7 2 3 0, as expected, when you print each byte.
Because with your method you print out the internal representation of the unsigned int, not its decimal representation.
Integers, like any other data, are represented as bytes internally. unsigned char is just another term for "byte" in this context. If you had represented your integer as decimal inside a string,
char E[] = "197127";
and then done an analogous walk through the bytes, you would have seen the representations of the characters as numbers.
The binary representation of 197127 is 0011 0000 0010 0000 0111.
The bytes look like 00000111 (which is 7 in decimal), 00000010 (which is 2), 00000011 (which is 3); the rest is 0.
Why did you expect 1 9 7 1? The hex representation of 197127 is 0x00030207, so on a little-endian architecture, the first byte will be 0x07, the second 0x02, the third 0x03, and the fourth 0x00, which is exactly what you're getting.
The value 197127 in e is not a string representation. It is stored as a binary integer (typically 32 bits on current platforms). So, in memory, e is allocated, say, 4 bytes on the stack, and that memory location holds the value 0x00030207. In binary it looks like 0011 0000 0010 0000 0111. Note that the byte order in memory is actually reversed on a little-endian machine (see this link about endianness). So, when you point f at &e, you are referencing the first byte of the numeric value. If you want to represent the number as a string, you should have
char *e = "197127";
This has to do with the way the integer is stored, more specifically byte ordering. Your system happens to have little-endian byte ordering, i.e. the first byte of a multi byte integer is least significant, while the last byte is most significant.
You can try this:
printf("%d\n", 7 + (2 << 8) + (3 << 16) + (0 << 24));
This will print 197127.
Read more about byte order endianness here.
The byte layout for the unsigned integer 197127 is [0x07, 0x02, 0x03, 0x00], and your code prints the four bytes.
If you want the decimal digits, then you need to break the number down into digits:
int digits[100];
int c = 0;
while(e > 0) { digits[c++] = e % 10; e /= 10; }
while(c > 0) { printf("%u\n", digits[--c]); }
As you know, the type int usually occupies four bytes. That means 197127 is represented as 00000000 00000011 00000010 00000111. From the result, your machine is little-endian, which means the low byte 00000111 is placed at the lowest address, then 00000010 and 00000011, and finally 00000000. So when you first print *f, you obtain 7. After f++, f points to 00000010, and the output is 2. The rest can be deduced by analogy.
The underlying representation of the number e is binary, and if we convert the value to hex we can see what it would be (assuming a 32-bit unsigned int):
0x00030207
So when you iterate over the contents you are reading it byte by byte through the unsigned char *. Each byte contains two 4-bit hex digits, and the byte order is little endian, since the least significant byte (0x07) comes first; so in memory the contents look like this:
0x07020300
  ^ ^ ^ ^- Fourth byte
  | | |- Third byte
  | |- Second byte
  |- First byte
Note that sizeof returns size_t and the correct format specifier is %zu, otherwise you have undefined behavior.
You also need to fix this line:
unsigned char *f = (char *) &e;
to:
unsigned char *f = (unsigned char *) &e;
                    ^^^^^^^^
Because e is an integer value (probably 4 bytes) and not a string (1 byte per character).
To have the result you expect, you should change the declaration and assignment of e to:
unsigned char *e = (unsigned char *)"197127";
unsigned char *f = e;
Or, convert the integer value to a string (using sprintf()) and have f point to that instead:
char s[1000];
sprintf(s, "%u", e);
unsigned char *f = (unsigned char *)s;
Or, use mathematical operations to extract single digits from your integer and print those out.
Or, ...

Output Explanation of this program in C?

I have this program in C:
int main(int argc, char *argv[])
{
    int i = 300;
    char *ptr = &i;
    *++ptr = 2;
    printf("%d", i);
    return 0;
}
The output is 556 on little endian.
I tried to understand the output. Here is my explanation.
The question is: will the answer remain the same on a big-endian machine?
i = 300;
=> i = 100101100 //in binary in word format => B B Hb 0001 00101100 where B = Byte and Hb = Half Byte
(A) => in memory (assuming it is little endian):
0x12345678 - 1100 - 0010 ( Is this correct for little endian)
0x12345679 - 0001 - 0000
0x1234567a - 0000 - 0000
0x1234567b - 0000 - 0000
0x1234567c - location of the next integer (the location of ptr++ or ptr + 1 if ptr were an integer pointer; as ptr would be of type int *, doing ++ptr would increment it by 4 bytes (the size of int))
when
(B) we do char *ptr = &i;
ptr will be of type char * => on doing ++ptr it will increment by 1 byte (the size of char)
so on doing ++ptr it will jump to location -> 0x12345679 (which has 0001 - 0000)
now we are doing
*++ptr = 2
=> 0x12345679 will be overwritten by 2 => 0x12345679 will have 0010 - 0000 instead of 0001 - 0000
so the new memory content will look like this :
(C)
0x12345678 - 1100 - 0010
0x12345679 - 0010 - 0000
0x1234567a - 0000 - 0000
0x1234567b - 0000 - 0000
which is equivalent to => B B Hb 0010 00101100 where B = Byte and Hb = Half Byte
Is my reasoning correct? Is there a shorter method for this?
Rgds,
Softy
In a little-endian 32-bit system, the int 300 (0x012c) is typically(*) stored as 4 sequential bytes, lowest first: 2C 01 00 00. When you increment the char pointer that was formerly the int pointer &i, you're pointing at the second byte of that sequence, and setting it to 2 makes the sequence 2C 02 00 00 -- which, when turned back into an int, is 0x22c or 556.
(As for your understanding of the bit sequence...it seems a bit off. Endianness affects byte order in memory, as the byte is the smallest addressable unit. The bits within the byte don't get reversed; the low-order byte will be 2C (00101100) whether the system is little-endian or big-endian. (Even if the system did reverse the bits of a byte, it'd reverse them again to present them to you as a number, so you wouldn't notice a difference.) The big difference is where that byte appears in the sequence. The only places where bit order matters, is in hardware and drivers and such where you can receive less than a byte at a time.)
In a big-endian system, the int is typically(*) represented by the byte sequence 00 00 01 2C (differing from the little-endian representation solely in the byte order -- highest byte comes first). You're still modifying the second byte of the sequence, though...making 00 02 01 2C, which as an int is 0x02012c or 131372.
(*) Lots of things come into play here, including two's complement (which almost all systems use these days...but C doesn't require it), the value of sizeof(int), alignment/padding, and whether the system is truly big- or little-endian or a half-assed implementation of it. This is a big part of why mucking around with the bytes of a bigger type so often leads to undefined or implementation-specific behavior.
This is implementation defined. The internal representation of an int is not known according to the standard, so what you're doing is not portable. See section 6.2.6.2 in the C standard.
However, as most implementations use two's complement representation of signed ints, the endianness will affect the result as described in cHao's answer.
This is your int:
int i = 300;
And this is what the memory contains at &i: 2c 01 00 00
With the next instruction you assign the address of i to ptr, and then you move to the next byte with ++ptr and change its value to 2:
char *ptr = &i;
*++ptr = 2;
So now the memory contains: 2c 02 00 00 (i.e. 556).
The difference is that in a big-endian system in the address of i you would have seen 00 00 01 2C, and after the change: 00 02 01 2C.
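A sketch that dumps the bytes of i before and after the write, so you can see exactly which byte changes on your machine (reading the object through unsigned char is always permitted; the helper name dump is just for illustration):

#include <stdio.h>

static void dump(const char *label, const void *p, size_t n)
{
    const unsigned char *b = (const unsigned char *)p;
    printf("%s:", label);
    for (size_t k = 0; k < n; ++k)
        printf(" %02x", (unsigned)b[k]);
    printf("\n");
}

int main(void)
{
    int i = 300;
    dump("before", &i, sizeof i);   /* little endian: 2c 01 00 00 */

    char *ptr = (char *)&i;
    *++ptr = 2;                     /* overwrite the second byte in memory */

    dump("after ", &i, sizeof i);   /* little endian: 2c 02 00 00 */
    printf("i = %d\n", i);          /* 556 on little endian, 131372 on big endian */
    return 0;
}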
Even though the internal representation of an int is implementation-defined:
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^M) (two's complement);
— the sign bit has the value −(2^M − 1) (ones' complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value. In the case of sign and magnitude and ones' complement, if this representation is a normal value it is called a negative zero.
I like experiments and that's the reason for having the PowerPC G5.
stacktest.c:
int main(int argc, char *argv[])
{
    int i = 300;
    char *ptr = &i;
    *++ptr = 2;
    /* Added the hex dump */
    printf("%d or %x\n", i, i);
    return 0;
}
Build command:
powerpc-apple-darwin9-gcc-4.2.1 -o stacktest stacktest.c
Output:
131372 or 2012c
Summary: cHao's answer is complete, and in case you're in doubt, here is the experimental evidence.
