Unsigned Char pointing to unsigned integer - c

I don't understand why the following code prints 7 2 3 0. I expected it to print 1 9 7 1. Can anyone explain why it prints 7 2 3 0?
unsigned int e = 197127;
unsigned char *f = (char *) &e;
printf("%ld\n", sizeof(e));
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d\n", *f);

Computers work with binary, not decimal, so 197127 is stored as a binary number and not a series of single digits separately in decimal
197127 (decimal) = 0x00030207 (hexadecimal) = 0000 0000 0000 0011 0000 0010 0000 0111 (binary)
Assuming your system is little-endian, 0x00030207 is stored in memory as the bytes 0x07 0x02 0x03 0x00, which is printed out as 7 2 3 0 when you print each byte, exactly as observed.

Because with your method you print out the internal representation of the unsigned and not its decimal representation.
Integers, like any other data, are represented as bytes internally. unsigned char is just another term for "byte" in this context. If you had instead represented your integer as a decimal string
char E[] = "197127";
and then done an analogous walk through the bytes, you would have seen the character codes of the digits.

Binary representation of "197127" is "00110000001000000111".
The bytes look like "00000111" (7 in decimal), "00000010" (2), and "00000011" (3); the rest is 0.
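For comparison, here is a minimal sketch (reusing the array name E from above) that walks the bytes of the string form and prints their character codes; on an ASCII system this prints 49 57 55 49 50 55, the codes of the digits '1' '9' '7' '1' '2' '7':
#include <stdio.h>

int main(void)
{
    char E[] = "197127";          /* stored as the characters '1','9','7','1','2','7','\0' */
    for (size_t i = 0; E[i] != '\0'; i++)
        printf("%d ", E[i]);      /* 49 57 55 49 50 55 on an ASCII system */
    printf("\n");
    return 0;
}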

Why did you expect 1 9 7 1? The hex representation of 197127 is 0x00030207, so on a little-endian architecture, the first byte will be 0x07, the second 0x02, the third 0x03, and the fourth 0x00, which is exactly what you're getting.

The value 197127 in e is not a string representation. It is stored as a 16- or 32-bit integer (depending on the platform). So e is allocated, say, 4 bytes on the stack, and that memory location holds the value 0x30207 (hex). In binary it looks like 110000001000000111. Note that the byte order in memory may actually be reversed; see a reference on endianness for the details. So, when you point f at &e, you are referencing the first byte of the numeric value. If you wanted to represent the number as a string, you would write
char *e = "197127"

This has to do with the way the integer is stored, more specifically with byte ordering. Your system happens to use little-endian byte ordering, i.e. the first byte of a multi-byte integer is the least significant, while the last byte is the most significant.
You can try this:
printf("%d\n", 7 + (2 << 8) + (3 << 16) + (0 << 24));
This will print 197127.
Read more about byte order endianness here.
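If you want to check at run time which byte order your machine uses, a common sketch (a quick diagnostic under the usual assumptions, not a portable guarantee) is to look at the first byte of a known integer:
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char *p = (unsigned char *) &x;

    /* little-endian stores the least significant byte first, so *p is 1;
       big-endian stores the most significant byte first, so *p is 0 */
    printf("%s-endian\n", (*p == 1) ? "little" : "big");
    return 0;
}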

The byte layout for the unsigned integer 197127 is [0x07, 0x02, 0x03, 0x00], and your code prints the four bytes.
If you want the decimal digits, then you need to break the number down into digits:
int digits[100];
int c = 0;
while (e > 0) {                  /* peel off decimal digits, least significant first */
    digits[c++] = e % 10;
    e /= 10;
}
while (c > 0)                    /* print them back in the original order */
    printf("%d\n", digits[--c]);

An int typically occupies four bytes. That means 197127 is represented in memory as 00000000 00000011 00000010 00000111. Judging from your result, your machine is little-endian, which means the low byte 00000111 is stored at the lowest address, followed by 00000010, then 00000011, and finally 00000000. So when you first print *f, you obtain 7. After f++, f points to 00000010, and the output is 2. The rest follows by analogy.

The underlying representation of the number e is binary, and if we convert the value to hex we can see that it would be (assuming a 32-bit unsigned int):
0x00030207
so when you iterate over the contents you are reading byte by byte through the unsigned char *. Each byte contains two 4-bit hex digits, and the byte order of the number is little-endian, since the least significant byte (0x07) comes first, so in memory the contents look like this:
0x 07 02 03 00
   ^  ^  ^  ^-- fourth byte (0x00)
   |  |  '----- third byte  (0x03)
   |  '-------- second byte (0x02)
   '----------- first byte  (0x07)
Note that sizeof returns size_t and the correct format specifier is %zu, otherwise you have undefined behavior.
You also need to fix this line:
unsigned char *f = (char *) &e;
to:
unsigned char *f = (unsigned char *) &e;
^^^^^^^^
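Putting both fixes together, a corrected version of the snippet from the question might look like this (same logic, only the cast and the format specifiers changed):
#include <stdio.h>

int main(void)
{
    unsigned int e = 197127;
    unsigned char *f = (unsigned char *) &e;   /* matching pointer type */

    printf("%zu\n", sizeof(e));                /* %zu for size_t */
    for (size_t i = 0; i < sizeof(e); i++)
        printf("%d ", f[i]);                   /* 7 2 3 0 on a little-endian machine */
    printf("\n");
    return 0;
}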

Because e is an integer value (probably 4 bytes) and not a string (1 byte per character).
To get the result you expect, you should change the declaration and assignment of e to:
unsigned char *e = (unsigned char *) "197127";
unsigned char *f = e;
Or, convert the integer value to a string (using sprintf()) and have f point to that instead:
char s[1000];
sprintf(s, "%u", e);
unsigned char *f = (unsigned char *) s;
Or, use arithmetic to extract the individual digits from your integer and print those out.
Or, ...

Related

What happens when we do an AND operation of a 4-byte number with a 2-byte number

I am trying to write a piece of code to extract every single byte of data out of a 4-byte integer.
After a little bit of searching I actually found code that achieves this, but I am curious why it behaves the way the output shows. Below is the code:
#include <stdio.h>

int getByte(int x, int n);

int main(void)
{
    int x = 0xAABBCCDD;
    int n;
    for (n = 0; n <= 3; n++) {
        printf("byte %d of 0x%X is 0x%X\n", n, x, getByte(x, n));
    }
    return 0;
}

// extract byte n from word x
// bytes numbered from 0 (LSByte) to 3 (MSByte)
int getByte(int x, int n)
{
    return (x >> (n << 3)) & 0xFF;
}
Output:
byte 0 of 0xAABBCCDD is 0xDD
byte 1 of 0xAABBCCDD is 0xCC
byte 2 of 0xAABBCCDD is 0xBB
byte 3 of 0xAABBCCDD is 0xAA
The result is self-explanatory, but why didn't the compiler convert 0xFF into 0x000000FF? Since it has to perform the AND operation with a 4-byte number, why does the output consist of only 1 byte of data and not 4 bytes?
The compiler did convert 0xFF to 0x000000FF. Since the value is an integer constant, 0xFF gets converted internally to a 32-bit value (assuming that your platform has a 32-bit int).
Note that the values you get back, i.e. 0xDD, 0xCC, 0xBB, and 0xAA, are also ints, so they have leading zeros as well, so you actually get 0x000000DD, 0x000000CC, and so on. Leading zeros do not get printed automatically, though, so if you wish to see them, you'd need to change the format string to request leading zeros to be included:
for (n = 0; n <= 3; n++) {
    printf("byte %d of 0x%08X is 0x%08X\n", n, x, getByte(x, n));
}
The output is a 4-byte number, but printf("%X") doesn't print any leading zero digits.
If you do printf("foo 0x%X bar\n", 0x0000AA) you'll get foo 0xAA bar as output.
0xFF and 0x000000FF are exactly the same thing. By default the formatting drops the leading 0's; if you want them in your output, you just need to specify it:
printf("byte %d of 0x%X is 0x08%X\n",n,x,getByte(x,n));
But since you are printing bytes, I'm not quite sure why you would expect the leading 0's.
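As a side note, 0xAABBCCDD does not fit in a 32-bit signed int, so x ends up negative and right-shifting it is implementation-defined. A sketch (a variation, not the original code) that avoids the issue by working with unsigned values:
#include <stdio.h>

/* extract byte n (0 = least significant) from an unsigned 32-bit value;
   unsigned avoids implementation-defined right shifts of negative numbers */
static unsigned int getByteU(unsigned int x, int n)
{
    return (x >> (n * 8)) & 0xFFu;
}

int main(void)
{
    unsigned int x = 0xAABBCCDDu;
    for (int n = 0; n <= 3; n++)
        printf("byte %d of 0x%X is 0x%02X\n", n, x, getByteU(x, n));
    return 0;
}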

Output of the following C code

What will be the output of the following C code? Assume it runs on a little-endian machine, where short int takes 2 bytes and char takes 1 byte.
#include <stdio.h>

int main() {
    short int c[5];
    int i = 0;
    for (i = 0; i < 5; i++)
        c[i] = 400 + i;
    char *b = (char *)c;
    printf("%d", *(b + 8));
    return 0;
}
On my machine it gave
-108
I don't know if my machine is Little endian or big endian. I found somewhere that it should give
148
as the output, because the low-order 8 bits of 404 (i.e. of element c[4]) are 148. But I thought that, because of "%d", it should read 2 bytes from memory starting at the address of c[4].
The code gives different outputs on different computers because on some platforms the char type is signed by default and on others it's unsigned by default. That has nothing to do with endianness. Try this:
char *b = (char *)c;
printf("%d\n", (unsigned char)*(b+8)); // always prints 148
printf("%d\n", (signed char)*(b+8)); // always prints -108 (=-256 +148)
The default value is dependent on the platform and compiler settings. You can control the default behavior with GCC options -fsigned-char and -funsigned-char.
c[4] stores 404. In a two-byte little-endian representation, that means two bytes of 0x94 0x01, or (in decimal) 148 1.
b+8 addresses the memory of c[4]. b is a pointer to char, so the 8 means adding 8 bytes (which is 4 two-byte shorts). In other words, b+8 points to the first byte of c[4], which contains 148.
*(b+8) (which could also be written as b[8]) dereferences the pointer and thus gives you the value 148 as a char. What this does is implementation-defined: On many common platforms char is a signed type (with a range of -128 .. 127), so it can't actually be 148. But if it is an unsigned type (with a range of 0 .. 255), then 148 is fine.
The bit pattern for 148 in binary is 10010100. Interpreting this as a two's complement number gives you -108.
This char value (of either 148 or -108) is then automatically converted to int because it appears in the argument list of a variable-argument function (printf). This doesn't change the value.
Finally, "%d" tells printf to take the int argument and format it as a decimal number.
So, to recap: Assuming you have a machine where
a byte is 8 bits
negative numbers use two's complement
short int is 2 bytes
... then this program will output either -108 (if char is a signed type) or 148 (if char is an unsigned type).
To see what sizes the types have on your system (note that sizeof yields a size_t, so the right specifier is %zu):
printf("char      = %zu\n", sizeof(char));
printf("short     = %zu\n", sizeof(short));
printf("int       = %zu\n", sizeof(int));
printf("long      = %zu\n", sizeof(long));
printf("long long = %zu\n", sizeof(long long));
Change these lines in your program:
unsigned char *b = (unsigned char *)c;
printf("%d\n", *(b + 8));
And a simple test (I know this is not guaranteed, but all C compilers I know do it this way, and I do not care about old CDC or UNISYS machines which had different address sizes and pointer representations for different types of data):
printf("endianness test: %s\n", (*b + (unsigned)*(b + 1) * 0x100) == 400 ? "little" : "big");
Another remark: this test only works because in your program c[0] == 400.
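A variant that does not depend on any particular element value, sketched here with a freshly initialized short (assuming 2-byte shorts, as the question states):
#include <stdio.h>

int main(void)
{
    short int probe = 0x0102;                    /* known byte pattern */
    unsigned char *p = (unsigned char *) &probe;

    /* little-endian stores the low byte (0x02) first, big-endian the high byte (0x01) */
    printf("endianness test: %s\n", (*p == 0x02) ? "little" : "big");
    return 0;
}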

C programming why does the address of char array increment from 0012FF74 to 0012FF75?

Here's the code:
char chararray[] = {68, 97, 114, 105, 110};
/* 1 byte each*/
int i;
printf("chararray intarray\n");
printf("-------------------\n");
for (i = 0; i < 5; i++)
    printf("%p\n", (chararray + i));
Output:
chararray
---------
0012FF74
0012FF75
0012FF76
0012FF77
Now I'm trying to understand this in terms of hexadecimal, bits and bytes.
I understand that a char is 1 byte and that the pointer is supposed to increment by 1 byte, which is 8 bits.
But I don't understand why it only increases by 1 in hex. One hexadecimal digit only represents 4 bits, correct? So I'm kind of confused; it seems like it's only incrementing by 4 bits.
Any help clearing this up is greatly appreciated, thanks!
It's true that if you represent a byte in hexadecimal, it is made up of 2 hex digits, where each digit stands for 4 bits.
However, the addresses you are seeing are addresses of bytes, and not the content of them. Each byte receives its own address, and the addresses are sequential, just like if we gave each byte a number: byte 0, byte 1, byte 2, byte 3,....
The address in a pointer points to a byte, not to a bit. Your pointer is of type char *, so when it is incremented, the address increases by sizeof(char). If, however, you used a different type, such as int, your pointer would increase by sizeof(int) on each increment, even if it is pointing to a char [] array.
On my machine, sizeof(int)==4, for example.
I wrote this code:
#include <stdio.h>

int main()
{
    char str[] = "ACBDEFGHIJKLMNOPQRSTUVWXYZ";
    int *a = (int *) str;              /* cast added so this compiles cleanly */
    printf("Char\tAddr\n");
    while ((char *) a <= &str[25])
    {
        printf("%c\t%p\n", *a, (void *) a);
        a++;                           /* advances by sizeof(int) bytes */
    }
    return 0;
}
Output:
Char Addr
A 00D5F9BC
E 00D5F9C0
I 00D5F9C4
M 00D5F9C8
Q 00D5F9CC
U 00D5F9D0
Y 00D5F9D4
Every fourth character in the string is outputted.
First, pointer arithmetic like (chararray + i), where chararray points to a char (i.e. is of type char*), increases the value of the pointer chararray by i * sizeof(char). Note that sizeof(char) is 1 by definition.
Second, a pointer represents a memory address, which is represented by an integral value that indicates a position in an (absolutely or relatively) addressed memory block, e.g. on the heap, on the stack, on some other data segment, ... . Confer, for example, the following statement in this online C standard draft:
6.3.2.3 Pointers
(5) An integer may be converted to any pointer type. ...
(6) Any pointer type may be converted to an integer type. ...
So when viewing the value of a pointer, we can think of an integral value, just like 256 or 1024 (when viewed in decimal format), or 0x100 or 0x400 (when viewed in hexadecimal format). Note that 256 in decimal is equivalent to 100 in hexadecimal, and this has nothing to do with bits and bytes.
Adding 1 to an integral value of 256 (or 0x100) gives 257 (or 0x101), regardless of whether this value stands for a position in a memory block or for oranges sold in a department store. So it's all about printing integral values in hex format.
See the following code illustrating this:
int main()
{
    char chararray[] = {68, 97, 114, 105, 110};
    for (int i = 0; i < 5; i++) {
        char *ptr = (chararray + i);
        unsigned long ptrAsIntegralVal = (unsigned long) ptr;
        printf("ptr: %p; in decimal format: %lu\n", (void *) ptr, ptrAsIntegralVal);
    }
}
Output:
ptr: 0x7fff5fbff767; in decimal format: 140734799804263
ptr: 0x7fff5fbff768; in decimal format: 140734799804264
ptr: 0x7fff5fbff769; in decimal format: 140734799804265
ptr: 0x7fff5fbff76a; in decimal format: 140734799804266
ptr: 0x7fff5fbff76b; in decimal format: 140734799804267
Using hexadecimal numbers is just another way of representing a number; it has nothing to do with bits and bytes. One byte is 8 bits whether you write its address as a hexadecimal or as a decimal number. So the address simply increases by one = 1 byte = 8 bits.
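To see the scaling concretely, a small sketch (the buffer name buf is made up for the example) comparing how far a char pointer and an int pointer move per increment over the same memory:
#include <stdio.h>

int main(void)
{
    int buf[4] = {0, 1, 2, 3};
    char *cp = (char *) buf;
    int *ip = buf;

    /* cp + 1 advances by 1 byte; ip + 1 advances by sizeof(int) bytes */
    printf("char*: %p -> %p\n", (void *) cp, (void *) (cp + 1));
    printf("int*:  %p -> %p\n", (void *) ip, (void *) (ip + 1));
    return 0;
}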

declaring string using pointer to int

I am trying to initialize a string using pointer to int
#include <stdio.h>

int main()
{
    int *ptr = "AAAA";
    printf("%d\n", ptr[0]);
    return 0;
}
The result of this code is 1094795585.
Could anybody explain this behavior and why the code gives this answer?
I am trying to initialize a string using pointer to int
The string literal "AAAA" is of type char[5], that is array of five elements of type char.
When you assign:
int *ptr = "AAAA";
you actually must use explicit cast (as types don't match):
int *ptr = (int *) "AAAA";
But, still it's potentially invalid, as int and char objects may have different alignment requirements. In other words:
alignof(char) != alignof(int)
may hold. Also, in this line:
printf("%d\n", ptr[0]);
you are invoking undefined behavior (so it might print "Hello from Mars" if the compiler likes), as ptr[0] dereferences ptr, thus violating the strict aliasing rule.
Note that it is valid to convert an int * to a char * and read the object through the char *, but not the other way around.
the result of this code is 1094795585
The result makes sense, but to get it you need to rewrite your program in a valid form. It might look like this:
#include <stdio.h>
#include <string.h>

union StringInt {
    char s[sizeof("AAAA")];
    int n[1];
};

int main(void)
{
    union StringInt si;
    strcpy(si.s, "AAAA");
    printf("%d\n", si.n[0]);
    return 0;
}
To decipher it, you need to make some assumptions, depending on your implementation. For instance, if
int type takes four bytes (i.e. sizeof(int) == 4)
CPU has little-endian byte ordering (though it doesn't really matter here, since every letter is the same)
default character set is ASCII (the letter 'A' is represented as 0x41, that is 65 in decimal)
implementation uses two's complement representation of signed integers
then, you may deduce, that si.n[0] holds in memory:
0x41 0x41 0x41 0x41
that is in binary:
01000001 ...
The sign (most-significant) bit is unset, hence it is just equal to:
65 * 2^24 + 65 * 2^16 + 65 * 2^8 + 65 =
65 * (2^24 + 2^16 + 2^8 + 1) = 65 * 16843009 = 1094795585
1094795585 is correct.
'A' has the ASCII value 65, i.e. 0x41 in hexadecimal.
Four of them makes 0x41414141 which is equal to 1094795585 in decimal.
You got the value 65656565 by doing 65*100^0 + 65*100^1 + 65*100^2 + 65*100^3, but that's wrong, since a byte¹ can contain 256 different values, not 100.
So the correct calculation would be 65*256^0 + 65*256^1 + 65*256^2 + 65*256^3, which gives 1094795585.
It's easier to think of memory in hexadecimal because one hexadecimal digit directly corresponds to half a byte¹, so two hex digits are one full byte¹ (cf. 0x41). Whereas in decimal, 255 fits in a single byte¹, but 256 does not.
¹ assuming CHAR_BIT == 8
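A quick sketch that double-checks the arithmetic above (assuming a 32-bit unsigned int and ASCII, where 'A' is 65):
#include <stdio.h>

int main(void)
{
    unsigned int byHex    = 0x41414141u;
    unsigned int byPowers = 65u + 65u * 256u + 65u * 256u * 256u
                                + 65u * 256u * 256u * 256u;

    printf("%u %u\n", byHex, byPowers);   /* both print 1094795585 */
    return 0;
}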
65656565 would be a wrong representation of the value of "AAAA": you are representing each character separately, whereas "AAAA" is stored as an array of bytes. It converts to 1094795585 because the %d specifier prints the decimal value. Run this in gdb with the following commands:
x/8xb (pointer) //this will show you the memory hex value
x/d (pointer) //this will show you the converted decimal value
@zenith gave you the answer you expected, but your code invokes UB. Anyway, you could demonstrate the same thing in an almost correct way:
#include <stdio.h>

int main()
{
    int i, val;
    char *pt = (char *) &val;    /* cast from a pointer to any type to a pointer to char: valid */
    for (i = 0; i < sizeof(int); i++)
        pt[i] = 'A';             /* assigning the bytes of an int: UB in the general case */
    printf("%d 0x%x\n", val, val);
    return 0;
}
Assigning the bytes of an int is UB in the general case because the C standard says that [for] signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. A footnote adds: Some combinations of padding bits might generate trap representations, for example, if one padding bit is a parity bit.
But on common architectures there are no padding bits and every bit pattern corresponds to a valid number, so the operation is valid (though implementation-dependent) on all common systems. It remains implementation-dependent because neither the size of int nor the endianness is fixed by the standard.
So: on a 32-bit system using no padding bits, the above code will produce
1094795585 0x41414141
independently of endianness.

Signed/Unsigned int, short and char

I am trying to understand the output of the code given at : http://phrack.org/issues/60/10.html
Quoting it here for reference:
#include <stdio.h>

int main(void){
    int l;
    short s;
    char c;

    l = 0xdeadbeef;
    s = l;
    c = l;

    printf("l = 0x%x (%d bits)\n", l, sizeof(l) * 8);
    printf("s = 0x%x (%d bits)\n", s, sizeof(s) * 8);
    printf("c = 0x%x (%d bits)\n", c, sizeof(c) * 8);

    return 0;
}
The output I get on my machine is:
l = 0xdeadbeef (32 bits)
s = 0xffffbeef (16 bits)
c = 0xffffffef (8 bits)
Here is my understanding:
The assignments s = l and c = l result in s and c holding the last 16 bits (0xbeef) and the last 8 bits (0xef) of l respectively; they are then promoted to int when passed to printf.
printf tries to interpret each of the above values (l, s and c) as unsigned integers (as %x is passed as the format specifier). From the output I see that sign extension has taken place. My doubt is: since %x stands for an unsigned int, why has sign extension taken place while printing s and c? Shouldn't the output for s be 0x0000beef and for c be 0x000000ef?
why has the sign extension taken place while printing s and c
Let's see the following code:
unsigned char ucr8bit; /* range is 0 to 255 on my machine */
signed char cr8bit;    /* range is -128 to 127 on my machine */
int i32bit;

cr8bit = -100;         /* bit pattern 0x9C */
i32bit = cr8bit;       /* i32bit is -100, i.e. 0xFFFFFF9C */
As you can see, although the number -100 is the same, its representation in a wider type is not merely the narrow value with 0s prepended: the MSB, i.e. the sign bit of the signed type, is replicated, in both two's complement and one's complement systems.
In your example you are printing s and c as a wider type and hence get the sign-bit replication.
Also, your code contains several sources of undefined and unspecified behavior and thus may give different output on different compilers.
(For instance, you should use signed char instead of char, as char may behave like unsigned char on some implementations and like signed char on others.)
l = 0xdeadbeef; /* 0xdeadbeef does not fit in a 32-bit signed int, so the
                   conversion to l is implementation-defined */
s = l;          /* implicit conversion from a wider to a narrower type;
                   implementation-defined because the value does not fit */
printf("l = 0x%x (%d bits)\n", l, sizeof(l) * 8); /* %x used to print a signed
                                                     number, %d to print a size_t */
You are using a 32-bit signed integer. That means that only 31 bits can be used for positive numbers. 0xdeadbeef uses 32 bits. Therefore, assigning it to a 32-bit signed integer makes it a negative number.
When shown with an unsigned conversion specifier, %x, it looks like the negative number that it is (with the sign extension).
When copying it into a short or char, the property of it being a negative number is retained.
To further show this, try setting:
l = 0xef;
The output is now:
l = 0xef (32 bits)
s = 0xef (16 bits)
c = 0xffffffef (8 bits)
0xef uses 8 bits, and it is positive when placed into a 32-bit or 16-bit variable. When you place it into a signed 8-bit variable (char), the high bit is set, so you are creating a negative number.
To see the retention of the negative number, try the reverse:
c = 0xef;
s = c;
l = c;
The output is:
l = 0xffffffef (32 bits)
s = 0xffffffef (16 bits)
c = 0xffffffef (8 bits)
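If you want the zero-extended output (0xbeef and 0xef instead of the sign-extended values), one approach, sketched here, is to cast to the corresponding unsigned type before printing, so the promotion to int zero-fills the upper bits (add a width such as %04x or %08x if you also want the leading zeros):
#include <stdio.h>

int main(void)
{
    int l = (int) 0xdeadbeefu;   /* implementation-defined conversion with 32-bit int */
    short s = (short) l;         /* implementation-defined narrowing; gives the 0xbeef
                                    pattern on common two's complement systems */
    char c = (char) l;

    printf("s = 0x%x\n", (unsigned short) s);   /* 0xbeef */
    printf("c = 0x%x\n", (unsigned char) c);    /* 0xef   */
    return 0;
}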
