Bit operation results not as expected - C

Here is this simple code:
char a = '10';
char b = '6';
printf("%d\n", a | b);
printf("%d\n", a ^ b);
printf("%d\n", a << 2);
and the output is
54
6
192
Now my question is: why these results? I worked it out on paper and what I got is
1110 for a | b = 14
1100 for a ^ b = 12
00101000 for a << 2 = 40
So why the different results?

You are declaring:
char a = '10';
char b = '6';
In this case b is 00110110 (0x36, i.e. 54), because you declared a character literal, not an integer.
char a = '10'; even compiles because '10' is a multi-character constant: single quotes normally enclose a single character, but the standard allows more than one, with an implementation-defined value (commonly ('1' << 8) | '0', which truncates to '0' = 48 when stored in a char).
The correct way should be:
char a = 10;
char b = 6;
printf("%d\n", a | b);
printf("%d\n", a ^ b);
printf("%d\n", a << 2);

You solved this on paper for the ints 10 and 6, not for the chars '10' (48 or 49, since the value of a multi-character constant is implementation-defined) and '6' (ASCII 54).
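To see what is actually stored, you can print the values directly (a minimal sketch; the comments assume a GCC-like compiler where '10' truncates to 48):
#include <stdio.h>

int main(void)
{
    char a = '10';   /* implementation-defined; often '0' = 48 */
    char b = '6';    /* ASCII 54 */
    printf("a = %d, b = %d\n", a, b);
    printf("%d\n", a | b);   /* 48 | 54 = 54  */
    printf("%d\n", a ^ b);   /* 48 ^ 54 = 6   */
    printf("%d\n", a << 2);  /* 48 << 2 = 192 */
    return 0;
}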

This is because you defined the variables as characters (char), but on paper
you calculated the result treating them as integers (int).
If you want the answer you expected, try this code and check:
int a = 10;
int b = 6;
printf("%d\n", a | b);
printf("%d\n", a ^ b);
printf("%d\n", a << 2);


Why is "0" "1" sometimes printed as a character and sometimes as ASCII 48/49?

I noticed this when I was writing code.
When I XOR the elements of a character array, why do some print as 0/1 and some as their ASCII codes? How do I get them all to behave like the numbers 0 and 1?
In the function XOR, I want to XOR the elements of two arrays and store the result in another array.
In main, I do some experiments.
And by the way, besides printing the results, I want to do 0/1 binary operations on them, such as encryption and decryption.
Here is a piece of C code.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int XOR(char *u, char *w, char *v)
{
    for (int i = 0; i < 16; i++)
    {
        u[i] = w[i] ^ v[i];
    }
    return 0;
}

int PrintList(char *list, int n)
{
    for (int i = 0; i < n; i++)
    {
        printf("%d", list[i]);
    }
    return 0;
}

int main()
{
    char u[17] = "";
    char w[17] = "0001001000110100";
    char v[17] = "0100001100100001";
    XOR(u, w, v);
    PrintList(u, 16);
    printf("\n");
    char w2[17] = "1000110011101111";
    XOR(u, w2, v);
    PrintList(u, 16);
    printf("\n");
    char v2[17] = "1111111011001000";
    XOR(u, w2, v2);
    PrintList(u, 16);
    printf("\n");
    char x[17] = "0101101001011010";
    XOR(u, x, u);
    PrintList(u, 16);
    printf("\n");
    memcpy(w, u, 16);
    XOR(u, w, v);
    PrintList(u, 16);
    printf("\n");
    return 0;
}
The result
0101000100010101
1100111111001110
0111001000100111
48484948494848484849494949494849
0110101101011100
Process returned 0 (0x0) execution time : 0.152 s
Press any key to continue.
Well, I changed my declarations from char to unsigned char, and the printed results did not change, presumably because of printf("%d", list[i]);. Changing to printf("%c", list[i]); prints:
0010100001111101
Process returned 0 (0x0) execution time : 0.041 s
Press any key to continue.
Character '0' is 00110000 in binary. '1' is 00110001.
'0' ^ '0' = 00000000 (0)
'0' ^ '1' = 00000001 (1)
'1' ^ '1' = 00000000 (0)
But then you reuse the u array, which by now holds the plain numbers 0 and 1 instead of ASCII digits:
'0' ^ 0 = 00110000 (48)
'0' ^ 1 = 00110001 (49)
'1' ^ 0 = 00110001 (49)
'1' ^ 1 = 00110000 (48)
These are strings so you initially have the ASCII codes 48 (0011 0000) and 49 (0011 0001). The ^ operator is bitwise XOR so the result of two operands with the values 48 and 49 can either be 0 or 1. When you print that result as integer, you get 0 or 1 as expected.
If you later use the result of that operation though, you no longer have an array of ASCII codes, but an array of integers with the value 0 or 1. If you XOR that one with an array that is still an ASCII code array, for example 0011 0000 ^ 0, you will get the result 0011 0000, not 0. And so printf gives you 48 etc.
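One minimal fix (a sketch, not the only option): mask the XOR result down to its low bit, so XOR always produces the numbers 0 and 1 no matter which representation comes in. Since the low bit of '0'/'1' is exactly 0/1, mixed ASCII and numeric inputs all give the same answer, printing with %d consistently shows 0 and 1, and reusing u needs no special handling.
int XOR(char *u, char *w, char *v)
{
    for (int i = 0; i < 16; i++)
    {
        /* (w ^ v) & 1 == (w & 1) ^ (v & 1), and the low bit of
           '0' (48) and '1' (49) is 0 and 1 respectively */
        u[i] = (w[i] ^ v[i]) & 1;
    }
    return 0;
}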

The right order of shifting bits in c

What is the difference, and why are there three different results?
signed char b;
b = 66 << 2 >> 8;
fprintf(stdout, "%d\n", b);
Output: "1"
signed char b;
b = 66 << 2;
b = b >> 8;
fprintf(stdout, "%d\n", b);
Output: "0"
signed char b;
b = 2 >> 8;
b = 66 << b;
fprintf(stdout, "%d\n", b);
Output: "66"
Thanks for the help!
signed char b = 66 << 2 >> 8;
Here, 66 << 2 is evaluated as a signed int (the whole expression is int arithmetic; the intermediate result 264 is never squeezed into the char), and 264 >> 8 gives 1, which is then stored in b.
signed char b = 66 << 2;
Here, the 264 (same as above) is squeezed into a signed char when it is assigned to b, truncating it to 8 (the conversion of an out-of-range value is implementation-defined). Applying >> 8 to 8 then yields 0.
And in your 3rd example, 2 >> 8 is obviously 0, so 66 << 0 leaves the 66 unchanged.
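The intermediate values can be made visible directly (a minimal sketch, assuming 32-bit int and the common wrap-around conversion to signed char):
#include <stdio.h>

int main(void)
{
    int full = 66 << 2;             /* arithmetic happens in int: 264 */
    signed char narrowed = full;    /* implementation-defined: typically 8 */
    printf("%d\n", full >> 8);      /* 264 >> 8 = 1 */
    printf("%d\n", narrowed >> 8);  /* 8 >> 8 = 0 */
    printf("%d\n", 66 << (2 >> 8)); /* 2 >> 8 = 0, so 66 << 0 = 66 */
    return 0;
}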

C buffer conversion to int

This snippet:
unsigned char len_byte[4+1];
...
for(i=0; i < 4; i++) {
printf("%02x ", len_byte[i]);
}
prints
8a 00 00 00
I now need to turn this into the integer value 168 (0x000000a8).
Can someone help me?
Thanks to all,
Riccardo
Edit, I tried:
uint32_t len_dec=0;
len_dec += (uint32_t)len_byte[0] | ((uint32_t)len_byte[1]<<8) | ((uint32_t)len_byte[2]<<16) | ((uint32_t)len_byte[3]<<24);
printf("%" PRIu32 "\n",len_dec);
--> 4522130
With this code, I got 168 as the answer:
#include <stdio.h>

int main(void)
{
    unsigned char len_byte[4] = {0x8a, 0, 0, 0};
    unsigned int len_dec = 0;
    int i;
    for (i = 3; i >= 0; --i)
    {
        len_dec |= ((len_byte[i] >> 4) << (8 * i)) | ((len_byte[i] & 0xF) << ((8 * i) + 4));
    }
    printf("%u\n", len_dec);  /* %u, not %lu: len_dec is an unsigned int */
    return 0;
}
The trick is to group each byte by 4 bits. 138 = 10001010 in binary. Grouping by 4 bits, you have 2 groups : 1000 and 1010. Now you swap both groups : 10101000 which gives 168. You do this action for each byte starting at the last element of the array.
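The same nibble swap can also be written with masks over the whole word (a sketch; swap_nibbles is a helper name introduced here, not from the question):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* swap the high and low nibble of every byte in parallel */
static uint32_t swap_nibbles(uint32_t x)
{
    return ((x >> 4) & 0x0F0F0F0Fu) | ((x & 0x0F0F0F0Fu) << 4);
}

int main(void)
{
    unsigned char len_byte[4] = {0x8a, 0, 0, 0};
    uint32_t le = (uint32_t)len_byte[0]
                | ((uint32_t)len_byte[1] << 8)
                | ((uint32_t)len_byte[2] << 16)
                | ((uint32_t)len_byte[3] << 24);  /* 0x0000008a */
    printf("%" PRIu32 "\n", swap_nibbles(le));    /* prints 168 */
    return 0;
}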

accessing bytes of an integer in c

I have values like 12 and 13 which I want to pack into a single integer, for example k.
I tried the following program, but I am not getting the expected results.
#include <stdio.h>

int main()
{
    int k = 0;
    printf("k address is %u\n", &k);  /* note: %p is the portable specifier for addresses */
    char *a = (char *)&k;  /* cast needed: &k is an int * */
    printf("%u\n", a);
    *(a) = 12;
    a++;
    printf("%u\n", a);
    *(a) = 13;
    printf("k is %d\n", k);
    return 0;
}
and the output is:
k address is 3213474664
3213474664
3213474665
k is 3340
On your system, ints are evidently stored in little-endian format, because 13*256 + 12 = 3340.
If you want to modify bytes in an integer in an endian-independent way, you should use shifts and bitwise operators.
For example, if you were trying to store an IP address of 1.2.3.4 into a 32-bit integer, you could do:
unsigned int addr = (1 << 24) | (2 << 16) | (3 << 8) | 4;
This guarantees that 1 is the most significant byte and so forth.
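Applied to the question's values 12 and 13, the same technique looks like this (a minimal sketch):
#include <stdio.h>

int main(void)
{
    unsigned int k = 0;
    k |= 12u << 0;  /* least significant byte */
    k |= 13u << 8;  /* next byte */
    printf("k is %u\n", k);  /* 3340, regardless of endianness */
    return 0;
}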

convert number to slash divided hex path

I need to generate a path string from a number (in C)
e.g.:
53431453 -> 0003/2F4/C9D
what I have so far is this:
char *id_to_path(long long int id, char *d)
{
    char t[MAX_PATH_LEN];
    sprintf(t, "%010llX", id);
    memcpy(d, t, 4);
    memcpy(d + 5, t + 4, 3);
    memcpy(d + 9, t + 7, 4);
    d[4] = d[8] = '/';
    return d;
}
I'm wondering if there's a better way, e.g to generate the final string in one step instead of doing sprintf and then moving the bytes around.
Thanks
Edit:
I benchmarked the given solutions
results in operations per second (higher is better):
(1) sprintf + memcpy : 3383005
(2) single sprintf : 2219253
(3) not using sprintf : 10917996
when compiling with -O3 the difference is even greater:
(1) 4422101
(2) 2207157
(3) 178756551
Since this function will be called a lot, I'll use the fastest solution even though the single sprintf is the shortest and most readable.
Thanks for your answers!
Not tested, but you can split the int into three parts and print them in one sprintf:
char *id_to_path(long long int id, char *d)
{
    sprintf(d, "%04llX/%03llX/%03llX",
            (id >> 24) & 0xffff, (id >> 12) & 0xfff, id & 0xfff);
    return d;
}
Since the string uses hex, it can be quite easily done using shift and bit operators.
Getting the 4 highest bits from the value can be done like this:
id >> 28
Converting this to a digit simply means adding the character '0' to it, like this:
'0' + (id >> 28)
However, since A, B, C, ... don't immediately follow the character 9, we have to perform an additional check, something like:
if (c > '9') c = c - '9' - 1 + 'A';
If we want the next 4 bits, we should only shift 24 bits, but then we still have the highest 4 bits left, so we should mask them out, like this:
(id >> 24) & 0xf
If we pour this into your function, we get this:
#include <stdio.h>

char convert(int value)
{
    char c = value + '0';
    if (c > '9') c = c - '9' - 1 + 'A';
    return c;
}

int main(void)
{
    long id = 53431453;
    char buffer[20];
    buffer[0] = convert(id >> 28);
    buffer[1] = convert((id >> 24) & 0xf);
    buffer[2] = convert((id >> 20) & 0xf);
    buffer[3] = convert((id >> 16) & 0xf);
    buffer[4] = convert((id >> 12) & 0xf);
    buffer[5] = convert((id >> 8) & 0xf);
    buffer[6] = convert((id >> 4) & 0xf);
    buffer[7] = convert((id >> 0) & 0xf);
    buffer[8] = '\0';
    printf("%s\n", buffer);  /* 032F4C9D */
    return 0;
}
Now adjust this to add the slashes in between, the extra zeroes in the beginning, ...
EDIT:
I know this is not one step, but it is more easily extensible if you later want to change the positions of the slashes, ... For instance, a possible completion is sketched below.
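A possible completion (a sketch reusing convert() from above; id_to_path2 is a name introduced here) that emits the question's 10-digit zero-padded format with the slashes in place:
void id_to_path2(long long int id, char *d)
{
    int pos = 0;
    /* 10 hex digits, most significant first; '/' after digits 4 and 7 */
    for (int i = 9; i >= 0; i--)
    {
        d[pos++] = convert((int)((id >> (4 * i)) & 0xf));
        if (i == 6 || i == 3)
            d[pos++] = '/';
    }
    d[pos] = '\0';  /* e.g. 53431453 -> "0003/2F4/C9D" */
}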
Did you try this option yet? (Note that the order of bit-fields within a unit is implementation-defined, so this relies on the compiler laying f7..f0 out from the low bits up.)
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned f7 : 4;
    unsigned f6 : 4;
    unsigned f5 : 4;
    unsigned f4 : 4;
    unsigned f3 : 4;
    unsigned f2 : 4;
    unsigned f1 : 4;
    unsigned f0 : 4;
} lubf;

#define convert(a) ((a) > 9 ? (a) + 'A' - 10 : (a) + '0')

int main()
{
    lubf bf;
    unsigned long a = 0xABCDE123;
    memcpy(&bf, &a, sizeof(bf));  /* sizeof(bf), not sizeof(a): long may be 8 bytes */
    char arr[9];
    arr[0] = convert(bf.f0);
    arr[1] = convert(bf.f1);
    arr[2] = convert(bf.f2);
    arr[3] = convert(bf.f3);
    arr[4] = convert(bf.f4);
    arr[5] = convert(bf.f5);
    arr[6] = convert(bf.f6);
    arr[7] = convert(bf.f7);
    arr[8] = '\0';
    printf("%lX : %s\n", a, arr);
    return 0;
}
