Can somebody explain to me how the hexadecimal printfs work in this program?
For example, why does the first line print 100 instead of 50?
#include <stdio.h>
typedef struct
{
int age;
char name[20];
}inf;
int main(int argc, char *argv[])
{
char m[]="Kolumbia",*p=m;
int x,y;
char* ar[]= {"Kiro","Pijo","Penda"};
inf st[] = {{22,"Simeon"}, {19,"Kopernik"}};
x=0x80;y=2;x<<=1;y>>=1;printf("1:%x %x\n",x,y);
x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3);
x=0x3; y=1;printf("3:%x\n", x&(~y));
printf("4: %c %c %s\n", *(m+1), *m+1, m+1);
printf ("5: %c %s\n", p[3],p+3);
printf ("6: %s %s \n", *(ar+1), *ar+1);
printf("7: %c %c\n", **(ar+1), *(*ar+1));
printf("8: %d %s \n",st[0].age, st[1].name+1);
printf("9:%d %s\n",(st+1)->age,st->name+2);
printf("10:%c %c %c\n",*(st->name),*((st+1)->name),*(st->name+1));
return 0;
}
For example, why does the first line print 100 instead of 50?
It doesn't print 100 instead of 50; it prints 100 instead of 80. Your initial value of x, given in
x=0x80;(...)
// bits:
// 1000 0000
is hexadecimal 80, 0x80, which is 128 decimal.
Then you shift it by 1 to the left
(...)y=2;x<<=1;(...)
which is the same as multiplying it by 2. So x becomes 256, which is hexadecimal 0x100:
// bits:
// 1 0000 0000
And you print it in hexadecimal format, which omits the 0x prefix and writes just 100 to standard output.
You asked:
What about the "OR" operator in the second line? x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3); 9 -> 1001, 3 -> 0011, but how do we get B as a result? From what I see we need 11 -> 1011; convert to hex and get B.
x=0x9;printf("2:%x %x %x\n",x|3,x&3,x^3);
Here:
x=0x9;
// 0000 1001
x|3:
// x: 0000 1001
// 3: 0000 0011
// |
// =: 0000 1011 = 0x0B , and here is your B
x&3
// x: 0000 1001
// 3: 0000 0011
// &
// =: 0000 0001 = 0x01
x^3
// x: 0000 1001
// 3: 0000 0011
// ^
// =: 0000 1010 = 0x0A
In the first line containing a printf,
x=0x80;y=2;x<<=1;y>>=1;printf("1:%x %x\n",x,y);
x is shifted left once, giving it the (doubled) value of 0x100.
This is printed as 100 because the %x format does not prepend 0x.
As to why it does not print 50: I can only imagine you thought x=0x80 was assigning a decimal value (which would be 0x50 in hex), and also did not notice the x<<=1.
This question already has answers here:
Best practices for circular shift (rotate) operations in C++
(16 answers)
Closed 2 years ago.
I'm trying to rotate hexadecimal numbers in C. The problem I have is that, with each loop, more zeros occur in the number.
Here is my code:
#include <stdio.h>

int main (void) {
    int hex = 0x1234ABCD;
    for (int i = 0; i < 12; i++, hex <<= 4) {
        printf("0x%04x %d ", hex, hex);
        pattern(hex);   /* my helper that prints the bits */
    }
    return 0;
}
I saw some other code on this site which added & 0x0F to the shift, but it is not working for me.
Here are my results from the compiler:
0x1234abcd 305441741 0001 0010 0011 0100 1010 1011 1100 1101
0x234abcd0 592100560 0010 0011 0100 1010 1011 1100 1101 0000
0x34abcd00 883674368 0011 0100 1010 1011 1100 1101 0000 0000
0x4abcd000 1253888000 0100 1010 1011 1100 1101 0000 0000 0000
0xabcd0000 -1412628480 1010 1011 1100 1101 0000 0000 0000 0000
Thank you for your help.
There is no operator that does a rotation for you. You need to combine two shift operations. Also, you should use unsigned values when doing bit-shift operations.
#include <stdio.h>
#include <limits.h>   /* CHAR_BIT */

int main (void) {
    unsigned int hex = 0x1234ABCD;
    for (int i = 0; i < 12; i++) {
        printf("0x%04x %u ", hex, hex);   /* %u, since hex is now unsigned */
        pattern(hex);
        /* save the top 4 bits before they are shifted out */
        unsigned int upper = hex >> (sizeof(hex)*CHAR_BIT - 4);
        hex <<= 4;
        hex |= upper & 0x0F;   /* and put them back at the bottom */
    }
    return 0;
}
When you use the left-shift (<<) operator on a number, shifting by n bits, then the most significant n bits of that number are lost, and the least significant n bits are filled with zeros (as you have noticed).
So, in order to perform a bitwise rotation, you need to first 'save' those top 4 bits, then put them back (using the | operator after shifting them down to the bottom).
So, assuming a 32-bit int size, something like this:
#include <stdio.h>
int main(void)
{
int hex = 0x1234ABCD;
for (int i = 0; i < 12; i++) {
printf("0x%04x %d ", hex, hex);
pattern(hex);   /* the asker's bit-printing helper */
int save = (hex >> 28) & 0x0F; // Save TOP four bits of "hex" in BOTTOM 4 of "save"
hex <<= 4; // Now shift the bits to the left ...
hex |= save; // ... then put those saved bits in!
}
return 0;
}
Note: We mask the save value with 0x0F after the shift to make sure that all other bits are 0; if we don't do this then, with negative numbers, we are likely to get those other bits filled with 1s.
Why is the output of the code below -127? Why shouldn't it be -1?
#include<stdio.h>
int main(){
int a = 129;
char *ptr;
ptr = (char *)&a;
printf("%d ",*ptr);
return 0;
}
If you output the variable a in hexadecimal, you will see that it is represented as 0x81.
Here is a demonstration program.
#include <stdio.h>
int main(void)
{
int a = 129;
printf( "%#x\n", a );
return 0;
}
Its output is
0x81
0x80 is the minimum value of an object of type char if char behaves as signed char. This value is equal to the decimal value -128.
Here is another demonstrative program.
#include <stdio.h>
int main(void)
{
char c = -128;
printf( "%#hhx\n", c );
return 0;
}
Its output is
0x80
If you add 1, you get the value -127.
The two's complement representation of the character value -1 looks in hexadecimal like 0xff. Adding 1 to that gives 0.
It can be understood as follows:
We all know that the range of (signed) char is [-128, 127]. Now let's write this range in binary format; the bits required are 8 (1 byte):
0 : 0000 0000
1 : 0000 0001
2 : 0000 0010
3 : 0000 0011
...
126 : 0111 1110
127 : 0111 1111 <-- max positive number; past this we overflow into the sign bit
-128 : 1000 0000 <-- special case: the number and its two's complement are the same
-127 : 1000 0001
-126 : 1000 0010
...
-3 : 1111 1101
-2 : 1111 1110
-1 : 1111 1111
So, coming back to the question: we had int a = 129;. Clearly 129, when stored in the char data type, is going to overflow, as the maximum permissible positive value is 127. But why did we get -127 and not something else?
Simple: the binary equivalent of 129 is 1000 0001, and for the char data type that lands here:
127 : 0111 1111
-128 : 1000 0000
-127 : 1000 0001<-- here!
-126 : 1000 0010
...
So, we get -127 when 129 is stored in it.
When you convert a signed char to int, what fills the upper 24 bits?
The answer is not always 0, but the sign bit (the highest bit).
The following demo code may interest you:
#include<stdio.h>
int main(){
int a = 129;
char *ptr;
ptr = (char *)&a;
// output "0x81", yes 129 == 0x81
printf("0x%hhx\n",*ptr);
// output 0xffffff81
int b = *ptr; // convert a 'signed char' to 'int'
// since the sign bit of the char is set, the upper bits are padded with 1s
printf("0x%x\n",b);
unsigned char *ptru = (unsigned char *)&a;
b = *ptru;
printf("0x%x\n",b); // still, output "0x81"
return 0;
}
I am very much confused. I have a small program where I am printing the values at different address locations.
int main ()
{
// unsigned int x = 0x15711056;
unsigned int x = 0x15b11056;
char *c = (char*) &x;
printf ("*c is: 0x%x\n", *c);
printf("size of %d\n", sizeof(x));
printf("Value at first address %x\n", *(c+0));
printf("Value at second address %x\n", *(c+1));
printf("Value at third address %x\n", *(c+2));
printf("Value at fourth address %x\n", *(c+3));
return 0;
}
For the commented unsigned int x the printf values are as expected i.e.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= 71
printf("Value at fourth address %x\n", *(c+3))= 15
But for the un-commented int x, why am I getting the result below for *(c+2)? It should be b1, not ffffffb1. Please help me understand this. I am running this on an online IDE, https://www.onlinegdb.com/online_c_compiler; my PC is an Intel i7.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= ffffffb1
printf("Value at fourth address %x\n", *(c+3))= 15
The value is sign-extended because 0xB1 is 1011 0001 in binary: with a signed char, the top bit is the sign bit. You need to use an unsigned char pointer:
unsigned char *c = (unsigned char*) &x;
Your code would work for any bytes up to 0x7F.
c is a signed char; 0xB1 is 1011 0001, and you can see that the most significant bit is 1, so as a signed char it's a negative number.
When you pass *(c+2) to printf, it gets promoted to an int which is
signed. Sign extension fills the rest of the bits with the same value as the
most significant bit from your char, which is 1. At this point printf
gets 1111 1111 1111 1111 1111 1111 1011 0001.
%x in printf prints it as an unsigned int, thus it prints 0xFFFFFFB1.
You have to declare your pointer as an unsigned char.
unsigned char *c = (unsigned char*) &x;
unsigned int x = 0x15b11056; /*lets say starting address of x is 0x100 */
char *c = (char*) &x; /** c is char pointer i.e at a time it can fetch 1 byte and it points to 0x100 **/
x looks as below:
------------------------------------------------------
| 0001 0101 | 1011 0001 | 0001 0000 | 0101 0110 |
------------------------------------------------------
0x104       0x103       0x102       0x101       0x100
                                                  ^
                                      x starts here; c points here
Next, when you do *(c+2), let's expand it:
*(c+2) = *(0x100 + 2*1)   /* c + 2 advances by 2 bytes, as char is 1 byte */
       = *(0x102)
       = 1011 0001 (in binary). Notice here that the sign bit is 1,
         which means the sign bit is going to be copied into the remaining bytes.
As you are printing with the %x format, which expects an unsigned type, while c is a signed byte, the sign bit gets copied into the remaining bytes.
For *(c+2), the input will look like:
0000 0000 | 0000 0000 | 0000 0000 | 1011 0001
|
The sign bit is one, so this bit is copied into the remaining bytes; the result will look like below:
1111 1111 | 1111 1111 | 1111 1111 | 1011 0001
f f f f f f b 1
I explained the particular part you had doubts about; I hope it helps.
I am making a program to communicate with a serial device. The device gives me the data in hex format. The hex value I am getting is FFFFFF84, but I am only interested in extracting the last two hex digits, that is, 84. How can I extract them?
while(1)
{
int i;
char receivebuffer [1];
read (fd, receivebuffer, sizeof receivebuffer);
for ( i = 0; i < sizeof (receivebuffer); i++)
{
printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
}
return 0;
}
I am getting the data in receivebuffer. Please help thanks.
You want to extract the last byte (two hex digits)? You need the '&' operator to extract it:
FFFFFF84 -> 1111 1111 1111 1111 1111 1111 1000 0100
000000FF -> 0000 0000 0000 0000 0000 0000 1111 1111
---------------------------------------------------
after & -> 0000 0000 0000 0000 0000 0000 1000 0100
so the answer is this assignment:
last2 = input & 0xFF
Hope this answer helps you understand bit operations.
You're just confused because printf is printing your data as a sign-extended int (this means that char on your system is treated as signed; note that this is implementation-defined).
Change your printf to:
printf("value of buffer is %#X\n\n", (unsigned char)receivebuffer[i]);
or just make the type of receivebuffer unsigned:
unsigned char receivebuffer[1];
// ...
printf("value of buffer is %#X\n\n", receivebuffer[i]);
Device is giving me the data in hex format.
This contradicts your code. It seems the device gives you the data in binary (raw) format and you convert it to hex for printing. That is a huge difference.
If you do
printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
the char (whose cast is unnecessary, as it is already a char) gets converted to int. As your system has char signed, the resulting int is negative, and thus the FFF... at the start.
You can do any of
printf("value of buffer is %X\n\n", receivebuffer[i] & 0xFF);
printf("value of buffer is %X\n\n", (unsigned char)receivebuffer[i]);
printf("value of buffer is %X\n\n", (uint8_t)receivebuffer[i]);
I know this is an old topic but I just want to add another option:
printf("value of buffer is %hhx\n\n", receivebuffer[i]);
%hhx prints the value as an unsigned char (hh is the char-length modifier), in other words an 8-bit hex value.
The device just returns bytes. It is the printf which displays a byte in a certain format (decimal, hex, etc.). To show bytes in hex you should use "0x%02x" format.
I read a 17-byte hex string from the command line, "13:22:45:33:99:cd", and want to convert it into a binary value 6 bytes long.
For this, I read the string and converted it into binary digits as 0001 0011 0010 0010 0100 0101 0011 0011 1001 1001 1100 1101. However, since I am using a char array, it is 48 bytes long. I want the resulting binary to be 6 bytes (48 bits), not 48 bytes. Please suggest how to accomplish this.
Split the string on the ':' character, and each substring is now a string you can pass to e.g. strtoul and then put in a unsigned char array of size 6.
#include <stdio.h>

int main(void)
{
    const char *input = "13:22:45:33:99:cd";
    unsigned int output[6];   /* %x expects unsigned int* */
    unsigned char result[6];

    if (6 == sscanf(input, "%2x:%2x:%2x:%2x:%2x:%2x",
                    &output[0], &output[1], &output[2],
                    &output[3], &output[4], &output[5])) {
        for (int i = 0; i < 6; i++)
            result[i] = (unsigned char)output[i];
    } else {
        fprintf(stderr, "something's wrong with the input!\n");
    }
    return 0;
}
You pack them together, high nibble with low nibble: (hi<<4)|lo (assuming both are 4-bit values).
Although you may prefer to convert the input byte by byte, not digit by digit, in the first place.