I am very confused. I have a small program where I am printing the value at different address locations.
#include <stdio.h>

int main ()
{
    // unsigned int x = 0x15711056;
    unsigned int x = 0x15b11056;
    char *c = (char*) &x;
    printf("*c is: 0x%x\n", *c);
    printf("size of %zu\n", sizeof(x));
    printf("Value at first address %x\n", *(c+0));
    printf("Value at second address %x\n", *(c+1));
    printf("Value at third address %x\n", *(c+2));
    printf("Value at fourth address %x\n", *(c+3));
    return 0;
}
For the commented-out unsigned int x, the printf values are as expected, i.e.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= 71
printf("Value at fourth address %x\n", *(c+3))= 15
But for the un-commented x, why am I getting the result below for *(c+2)? It should be b1, not ffffffb1. Please help me understand this. I am running this on an online IDE, https://www.onlinegdb.com/online_c_compiler. My PC has an Intel i7.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= ffffffb1
printf("Value at fourth address %x\n", *(c+3))= 15
The value is being sign-extended: 0xB1 is 1011 0001 in binary, so its high bit is set. You need to use an unsigned char pointer:
unsigned char *c = (unsigned char*) &x;
Your original code would work for any byte value up to 0x7F.
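For example, a minimal corrected sketch of the program (same value, only the pointer type changed; the outputs assume a little-endian machine like yours):
#include <stdio.h>

int main(void)
{
    unsigned int x = 0x15b11056;
    unsigned char *c = (unsigned char *) &x;  /* unsigned, so no sign extension on promotion */
    printf("Value at first address %x\n",  *(c+0));  /* 56 */
    printf("Value at second address %x\n", *(c+1));  /* 10 */
    printf("Value at third address %x\n",  *(c+2));  /* b1, not ffffffb1 */
    printf("Value at fourth address %x\n", *(c+3));  /* 15 */
    return 0;
}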
c is a pointer to (signed) char. 0xB1 is 1011 0001; you can see that
the most significant bit is 1, so as a signed char it is a negative number.
When you pass *(c+2) to printf, it gets promoted to an int which is
signed. Sign extension fills the rest of the bits with the same value as the
most significant bit from your char, which is 1. At this point printf
gets 1111 1111 1111 1111 1111 1111 1011 0001.
%x in printf prints it as an unsigned int, thus it prints 0xFFFFFFB1.
You have to declare your pointer as a pointer to unsigned char.
unsigned char *c = (unsigned char*) &x;
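To see the promotion in isolation, here is a small sketch (it assumes an 8-bit, two's-complement char, as on your system):
#include <stdio.h>

int main(void)
{
    signed char   s = 0xB1;   /* stored as -79 on a two's-complement implementation */
    unsigned char u = 0xB1;   /* stored as 177 */
    /* both arguments are promoted to int before printf sees them */
    printf("%x\n", s);        /* ffffffb1: sign extension fills the upper bits with 1s */
    printf("%x\n", u);        /* b1: zero extension, the upper bits stay 0 */
    return 0;
}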
unsigned int x = 0x15b11056; /* let's say the starting address of x is 0x100 */
char *c = (char*) &x;        /* c is a char pointer, i.e. it fetches 1 byte at a time, and it points to 0x100 */
In memory, x looks like this:
-------------------------------------------------
| 0001 0101 | 1011 0001 | 0001 0000 | 0101 0110 |
-------------------------------------------------
    0x103       0x102       0x101       0x100
                                          ^
                                  x and c start here
Next, when you do *(c+2), let's expand it:
*(c+2) = *(0x100 + 2*1)   /* pointer arithmetic advances in steps of 1 byte */
       = *(0x102)
       = 1011 0001 (in binary). Notice that the sign bit is 1,
         which means the sign bit is going to be copied into the remaining bytes.
Because you are printing with the %x format, which expects an unsigned int, while c points to a signed byte, the value is first promoted to int and the sign bit gets copied into the remaining bytes.
For *(c+2) the input looks like
0000 0000 | 0000 0000 | 0000 0000 | 1011 0001
                                    ^
The sign bit is 1, so it is copied into the remaining bytes, and the result looks like this:
1111 1111 | 1111 1111 | 1111 1111 | 1011 0001
   f    f      f    f      f    f      b    1
I have explained the particular part you had doubts about; I hope it helps.
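If you want to keep the plain char pointer, an alternative sketch (not from the answer above, just another option) is to mask off the sign-extended bits after the promotion:
printf("Value at third address %x\n", *(c+2) & 0xff);   /* prints b1 */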
Related
Why is the output of the code below -127? Why shouldn't it be -1?
#include <stdio.h>
int main(){
    int a = 129;
    char *ptr;
    ptr = (char *)&a;
    printf("%d ", *ptr);
    return 0;
}
If you output the variable a in hexadecimal, you will see that it is represented as 0x81.
Here is a demonstrative program.
#include <stdio.h>

int main(void)
{
    int a = 129;
    printf( "%#x\n", a );
    return 0;
}
Its output is
0x81
0x80 is the minimum value of an object of type char if char behaves like signed char. This value is equal to the decimal value -128.
Here is another demonstrative program.
#include <stdio.h>

int main(void)
{
    char c = -128;
    printf( "%#hhx\n", c );
    return 0;
}
Its output is
0x80
If you add 1 to it, you get the value -127.
The two's complement representation of the character value -1 looks like 0xff in hexadecimal. If you add 1 to it, you get 0.
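As a quick check (assuming char is a signed 8-bit two's-complement type, as in the question), here is a sketch that prints the same bit pattern both ways:
#include <stdio.h>

int main(void)
{
    char c = 0x81;           /* bit pattern 1000 0001, i.e. the value -127 */
    printf("%d\n", c);       /* -127: promoted to int with sign extension */
    printf("%#hhx\n", c);    /* 0x81: the same byte shown in hex */
    return 0;
}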
It can be understood as follows:
We all know that the range of a (signed) char is [-128, 127]. Now let's break this range down into binary format; the number of bits required is 8 (1 byte):
0 : 0000 0000
1 : 0000 0001
2 : 0000 0010
3 : 0000 0011
...
126 : 0111 1110
127 : 0111 1111 <-- max +ve number as after that we will overflow in sign bit
-128 : 1000 0000 <-- weird number as the number and its 2's Complement are same.
-127 : 1000 0001
-126 : 1000 0010
...
-3 : 1111 1101
-2 : 1111 1110
-1 : 1111 1111
So, coming back to the question: we had int a = 129;. Clearly 129, when stored in the char data type, is going to overflow, as the maximum positive permissible value is 127. But why did we get -127 and not something else?
Simple: the binary equivalent of 129 is 1000 0001, and in the char data type that lands here:
127 : 0111 1111
-128 : 1000 0000
-127 : 1000 0001<-- here!
-126 : 1000 0010
...
So, we get -127 when 129 is stored in a char.
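A one-line sketch that confirms the table above (again assuming a signed 8-bit char):
#include <stdio.h>

int main(void)
{
    printf("%d\n", (char)129);   /* prints -127, i.e. 129 - 256 */
    return 0;
}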
When you convert a signed char to an int, what fills the upper 24 bits?
The answer is not always '0'; it is the 'sign bit', i.e. the highest bit.
You may be interested in the following demo code:
#include <stdio.h>

int main(){
    int a = 129;
    char *ptr;
    ptr = (char *)&a;

    // outputs "0x81"; yes, 129 == 0x81
    printf("0x%hhx\n", *ptr);

    // outputs "0xffffff81"
    int b = *ptr; // convert a 'signed char' to 'int'
    // since the 'sign bit' of the char is 1, the upper bits are padded with 1s
    printf("0x%x\n", b);

    unsigned char *ptru = (unsigned char *)&a;
    b = *ptru;
    printf("0x%x\n", b); // still outputs "0x81"
    return 0;
}
#include <stdio.h>
int main()
{
    int a = -1;
    unsigned int b = 15;
    if (b == a)
        printf("b is equal to a");
    return 0;
}
The output is empty. Negative integers are stored as the two's complement of the corresponding positive number. When an integer is compared to an unsigned int, the integer is promoted to unsigned int by treating its two's complement pattern as an unsigned int, which is 15 here. But the output is empty, even though the two's complement of -1 is 15.
The variables a and b:
int a = -1;          /* it's a signed int, it takes 4 bytes */
unsigned int b = 15; /* it's an unsigned int */
And they look like below:
a = -1 => 1111 1111 | 1111 1111 | 1111 1111 | 1111 1111
b = 15 => 0000 0000 | 0000 0000 | 0000 0000 | 0000 1111
MSB LSB
Now when you compare a and b like
if(b==a) {
/* some code */
}
Here you are doing the == comparison between two different types (a is of signed type and b is of unsigned type). So the compiler will implicitly convert/promote the signed int to unsigned int and then perform the == comparison. See http://port70.net/~nsz/c/c11/n1570.html#6.3.1.8 for the arithmetic conversion rules.
Now, what is the unsigned equivalent of a, i.e. -1? It is all ones across the 4 bytes, i.e. 4294967295. So now it looks like
if(15==4294967295) { /* it's false */
/* some code */
}
Since the if condition is false, your code doesn't print anything.
If you change variable b to
unsigned int b = 4294967295;
(this assumes int is stored as 32 bits), the print statement will run. See https://stackoverflow.com/a/2084968/7529976
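A small sketch of the conversion described above (assuming a 32-bit unsigned int):
#include <stdio.h>

int main(void)
{
    int a = -1;
    unsigned int b = 15;
    printf("%u\n", (unsigned int)a);   /* 4294967295: what -1 becomes after conversion */
    printf("%d\n", b == a);            /* 0: the comparison is really 15 == 4294967295 */
    printf("%d\n", a == 4294967295u);  /* 1: -1 converts to 4294967295, so this is true */
    return 0;
}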
I was recently studying unions and ended up confused even after reading a lot about them.
#include <stdio.h>
union test
{
    int x;
    char arr[4];
    int y;
};

int main()
{
    union test t;
    t.x = 0;
    t.arr[1] = 'G';
    printf("%s\n", t.arr);
    printf("%d\n", t.x);
    return 0;
}
What I understood is:
Since x and arr share the same memory, when we set x = 0, all characters of arr are set to 0. 0 is the ASCII value of '\0'. When we do t.arr[1] = 'G', arr[] becomes "\0G\0\0". When we print a string using "%s", the printf function starts from the first character and keeps printing till it finds a '\0'. Since the first character itself is '\0', nothing is printed.
What I don't get is the second printf statement.
Now arr[] is "\0G\0\0", and the same location is shared with x and y.
So what I think x to be is the following
00000000 01000111 00000000 00000000 ("\0G\0\0")
so t.x should print 4653056.
But what it's printing is 18176.
Where am I going wrong?
Is this technically undefined, is it due to some silly mistake, or am I missing some concept?
All members of a union share the same memory. Assume the starting address of the union is 0x100.
When you write t.x = 0;, all 4 bytes get initialized to zero:
-------------------------------------------------
| 0000 0000 | 0000 0000 | 0000 0000 | 0000 0000 |
-------------------------------------------------
    0x103       0x102       0x101       0x100
                                      x, arr, y
When you write t.arr[1] = 'G';, arr[1] is overwritten with the ASCII value of 'G' (0x47), and it looks like this:
-------------------------------------------------
| 0000 0000 | 0000 0000 | 0100 0111 | 0000 0000 |
-------------------------------------------------
    0x103       0x102       0x101       0x100
Now calculate this value, with the byte at 0x100 as the least significant byte: 0x00004700, which is 18176.
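A small sketch (my addition, assuming a little-endian machine) that dumps the individual bytes so you can match them against the diagram:
#include <stdio.h>

union test
{
    int x;
    char arr[4];
    int y;
};

int main(void)
{
    union test t;
    t.x = 0;
    t.arr[1] = 'G';
    for (int i = 0; i < 4; i++)
        printf("arr[%d] = 0x%02x\n", i, (unsigned char)t.arr[i]);  /* 0x00 0x47 0x00 0x00 */
    printf("x = %d\n", t.x);  /* 0x00004700 = 18176 on a little-endian machine */
    return 0;
}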
tl;dr: endianity!
When the value of x is read back from the memory occupied by your union, the bytes are interpreted according to the endianness of your system, which here is little endian.
So, instead of the number you get by reading the array bytes in index order (0x00470000), it reads 0x00004700, which corresponds to 18176, as you observed.
Code example:
#include <stdio.h>
union test
{
    int x;
    char arr[4];
    int y;
};

int main()
{
    union test t;
    t.x = 0;
    t.arr[1] = 'G';
    printf("%s\n", t.arr);
    printf("%d\n", t.x);  // prints 18176
    t.x = 0;
    t.arr[2] = 'G';
    printf("%d\n", t.x);  // prints 4653056
    return 0;
}
Or in Python:
import struct
union_data = b"\x00G\x00\x00"
print(struct.unpack("<I", union_data)[0])  # This is little endian. Prints 18176
print(struct.unpack(">I", union_data)[0])  # This is big endian. Prints 4653056
Bonus! You can also use the function htonl to convert 32-bit integers from host byte order (little endian here) to network byte order (big endian). See more at the docs.
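A minimal sketch of the htonl idea (this assumes a POSIX system where htonl comes from <arpa/inet.h> and the host is little endian):
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    unsigned int host = 18176;        /* 0x00004700 in host (little-endian) order */
    unsigned int net  = htonl(host);  /* byte-swapped into network (big-endian) order */
    printf("%u\n", net);              /* prints 4653056 (0x00470000) on such a system */
    return 0;
}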
Can somebody explain to me how the hexadecimal printfs work in this program?
For example, why does the first line print 100 instead of 50?
#include <stdio.h>
typedef struct
{
    int age;
    char name[20];
} inf;
int main(int argc, char *argv[])
{
    char m[]="Kolumbia",*p=m;
    int x,y;
    char* ar[]= {"Kiro","Pijo","Penda"};
    inf st[] = {{22,"Simeon"}, {19,"Kopernik"}};
    x=0x80;y=2;x<<=1;y>>=1;printf("1:%x %x\n",x,y);
    x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3);
    x=0x3; y=1;printf("3:%x\n", x&(~y));
    printf("4: %c %c %s\n", *(m+1), *m+1, m+1);
    printf ("5: %c %s\n", p[3],p+3);
    printf ("6: %s %s \n", *(ar+1), *ar+1);
    printf("7: %c %c\n", **(ar+1), *(*ar+1));
    printf("8: %d %s \n",st[0].age, st[1].name+1);
    printf("9:%d %s\n",(st+1)->age,st->name+2);
    printf("10:%c %c %c\n",*(st->name),*((st+1)->name),*(st->name+1));
    return 0;
}
For example, why does the first line print 100 instead of 50?
It doesn't print 100 instead of 50; it prints 100 instead of 80. Your initial value of x, given in
x=0x80;(...)
// bits:
// 1000 0000
is hexadecimal 80, 0x80, which is 128 decimal.
Then you shift it by 1 to the left
(...)y=2;x<<=1;(...)
which is the same as multiply it by 2. So x becomes 256 which is hexadecimal 0x100:
// bits:
// 1 0000 0000
And you print it in hexadecimal format, which omits the 0x prefix and prints just 100 to the standard output.
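A tiny sketch showing the same value in decimal, in %x format, and with the prefix added back via %#x:
#include <stdio.h>

int main(void)
{
    int x = 0x80;                      /* 128 decimal */
    x <<= 1;                           /* shift left by 1, i.e. multiply by 2 -> 256 */
    printf("%d %x %#x\n", x, x, x);    /* prints: 256 100 0x100 */
    return 0;
}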
You asked:
What about the "OR" operator in the second line, x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3);? 9 -> 1001, 3 -> 0011, but how do we get B as a result? From what I see we need 11 -> 1011, which converted to hex gives B.
x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3);
Here:
x=0x9;
// 0000 1001
x|3:
// x: 0000 1001
// 3: 0000 0011
// |
// =: 0000 1011 = 0x0B , and here is your B
x&3
// x: 0000 1001
// 3: 0000 0011
// &
// =: 0000 0001 = 0x01
x^3
// x: 0000 1001
// 3: 0000 0011
// ^
// =: 0000 1010 = 0x0A
In this first line containing a printf
x=0x80;y=2;x<<=1;y>>=1;printf("1:%x %x\n",x,y);
x is shifted left once, giving it the (doubled) value of 0x100.
This is printed as 100 because the %x format does not prepend 0x.
As to why it does not print 50, I can only imagine that you think x=0x80 assigns a decimal value (which would be 50h), and that you did not notice the x<<=1.
I am making a program to communicate with a serial device. The device is giving me the data in hex format. The hex value I am getting is FFFFFF84, but I am only interested in extracting the last two hex digits, that is, 84. How can I extract them?
while(1)
{
    int i;
    char receivebuffer [1];
    read (fd, receivebuffer, sizeof receivebuffer);
    for ( i = 0; i < sizeof (receivebuffer); i++)
    {
        printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
    }
    return 0;
}
I am getting the data in receivebuffer. Please help, thanks.
You want to extract the last byte (the last two hex digits)? You need the '&' operator to extract it:
FFFFFF84 -> 1111 1111 1111 1111 1111 1111 1000 0100
000000FF -> 0000 0000 0000 0000 0000 0000 1111 1111
---------------------------------------------------
after & -> 0000 0000 0000 0000 0000 0000 1000 0100
So the answer is to do the assignment:
last2 = input & 0xFF
Hope this answer helps you understand bit operations.
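A short sketch of that masking (the variable names are mine, just for illustration):
#include <stdio.h>

int main(void)
{
    unsigned int input = 0xFFFFFF84;   /* the sign-extended value from the question */
    unsigned int last2 = input & 0xFF; /* keep only the low byte */
    printf("%X\n", last2);             /* prints 84 */
    return 0;
}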
You're just confused because printf is printing your data as a sign-extended int (this means that char on your system is treated as signed; note that this is implementation-defined).
Change your printf to:
printf("value of buffer is %#X\n\n", (unsigned char)receivebuffer[i]);
or just make the type of receivebuffer unsigned:
unsigned char receivebuffer[1];
// ...
printf("value of buffer is %#X\n\n", receivebuffer[i]);
The device is giving me the data in hex format.
This contradicts your code. It seems the device gives you the data in binary (raw) format and you convert it to hex for printing. That is a huge difference.
If you do
printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
the char (whose cast is unnecessary as it is already a char) gets converted to int. As your system has char signed, the resulting int is negative and thus the FFF... at the start.
You can do any of
printf("value of buffer is %X\n\n", receivebuffer[i] & 0xFF);
printf("value of buffer is %X\n\n", (unsigned char)receivebuffer[i]);
printf("value of buffer is %X\n\n", (uint8_t)receivebuffer[i]);
I know this is an old topic but I just want to add another option:
printf("value of buffer is %hhx\n\n", receivebuffer[i]);
%hhx uses the hh length modifier, i.e. a char-sized ("short short") value, or in other words, an 8-bit hex value.
The device just returns bytes. It is the printf which displays a byte in a certain format (decimal, hex, etc.). To show bytes in hex you should use the "0x%02x" format.
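For example, a minimal sketch of that format (the byte value is just the one from the question):
#include <stdio.h>

int main(void)
{
    unsigned char byte = 0x84;
    printf("value of buffer is 0x%02x\n", byte);   /* prints: value of buffer is 0x84 */
    return 0;
}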