What does XORing 2 decimal numbers mean? [closed]

I know that XORing 2 decimal numbers means that their binary representations are XORed.
But what does it mean in a non-mathematical sense?
What significance does it have?

If you XOR the result with one of the original numbers, you get the other original number.
a ^ b = X
X ^ a = b
X ^ b = a
This is used commonly in cryptography and hashing algorithms.

In simple words, the XOR of two decimals means first converting the two decimals to binary, then performing the bit-wise XOR, and then converting the result back to decimal.
Let's take an example,
Suppose we want to find out what 3^7 results in (where ^ symbolises the XOR operation).
Step 1 : Converting the numbers from decimal to binary
3 => 0011 and 7 => 0111
Step 2 : Taking the bit-wise XOR of the two binary numbers
3 : 0011
7 : 0111
3^7 : 0100
(hint : 1^0 = 1 and 1^1 = 0^0 = 0)
Step 3 : Convert the answer to decimal
0100 => 4
Hence, 3^7 = 4.

If I understand correctly, you are searching for the physical meaning of XOR.
XOR is an odd-parity counter: if you XOR(A,B), you are counting the number of ones in each bit position, and if the count is odd, the XOR output for that bit will be one.
Ex:
3 = 0011
5 = 0101
3 ^ 5 = 0110
In the LSB there are (1 and 1), which is two ones; since two is even you put 0. In the next bit there are (1 and 0), so the count of ones is one, which is odd, therefore you put 1, and so on.
This is used a lot in telecommunications for parity checks, in error-correcting algorithms, and in the generation of orthogonal code sequences (as in CDMA).
A simple example of parity checking: assume we send 8 bits, 7 data bits plus an 8th parity bit.
We send:
0110011X: X here is the parity bit; it is 0 because we have 4 ones and 4 is even.
If there is a mistake in the transmitted data and we receive, for example, this:
11100110: the number of ones in the byte is 5, with a mistake in the MSB. Five ones should produce a parity bit of 1 because 5 is odd, but since the parity bit is 0, this means there is a mistake in the transmitted data. This is the simplest parity check used in serial data transfer; error-correcting codes build on it.


Long multiplication of a pair of uint64 values [duplicate]

Duplicate of: How many 64-bit multiplications are needed to calculate the low 128-bits of a 64-bit by 128-bit product?
How can I multiply a pair of uint64 values safely in order to get the result as a pair (LSB and MSB) of the same type?
typedef struct uint128 {
    uint64 lsb;
    uint64 msb;
} uint128;

uint128 mul(uint64 x, uint64 y)
{
    uint128 z = {0, 0};
    z.lsb = x * y;
    if (z.lsb / x != y)
    {
        z.msb = ?
    }
    return z;
}
Am I computing the LSB correctly?
How can I compute the MSB correctly?
As said in the comments, the best solution would probably be to use a library that does this for you. But I will explain how you can do it without a library, because I think you asked in order to learn something. It is probably not a very efficient way, but it works.
When we were in school and had to multiply two numbers without a calculator, we multiplied pairs of digits, got one- or two-digit results, wrote them down, and at the end added them all up. We split the multiplication up so that we only had to calculate a single-digit multiplication at a time. A similar thing is possible with bigger numbers on a CPU, but instead of decimal digits we use half the register size as a digit. That way we can multiply two digits and get a two-digit result that still fits in one register. In decimal, 13*42 can be calculated as:
3* 2 = 0 6
10* 2 = 2 0
3*40 = 1 2 0
10*40 = 0 4 0 0
--------
0 5 4 6
A similar thing can be done with integers. To keep it simple, I multiply two 8-bit numbers into a 16-bit number on an 8-bit CPU; for that I only multiply 4 bits by 4 bits at a time. Let's multiply 0x73 by 0x4F.
0x03*0x0F = 0x002D
0x70*0x0F = 0x0690
0x03*0x40 = 0x00C0
0x70*0x40 = 0x1C00
-------
0x237D
You basically create an array with 4 elements (in your case each element has the type uint32_t), and store or add the result of each single multiplication into the right element(s) of the array; if the result of a single multiplication is too large for one element, store the higher bits in the next-higher element. If an addition overflows, carry 1 into the next element. In the end you can combine pairs of elements of the array, in your case into two uint64_t values.

Make sense of a C #define macro?

I am new to c development.
I am trying to understand a snippet of code related to a midi application:
#define GETCMD(p) ((p.data.midi.h & 0x70) >> 4)
#define GETCH(p) ((p.data.midi.h & 0x0F) + 1)
I presume the above are 2 macros.
What is not really clear are the hex values 0x70 and 0x0F.
In the first line, from my understanding, it is a right shift by 4 on the h field?
The following makes less sense:
#define SETCMD_CH(p, c1, c2) p.data.midi.h = 0x80 | (c2 - 1) | ((c1 & 7) << 4)
Can please anyone let me understand these 3 defines?
Thanks in advance
GETCMD extracts the 3 command bits (bits 4..6) and returns them as a value in the range 0..7.
GETCH extracts the 4 channel bits (bits 0..3) and returns them as a value in the range 1..16.
SETCMD_CH sets the above command and channel bits, i.e. it's just the reverse operation of the two macros above combined.
These bitwise operations are just the required shifts and masks to get/set the appropriate bits within p.data.midi.h. You might want to read up on bitwise operations if it's not clear to you how they work.
Take a look at the structure of p.data.midi.h.
Which data type do you have, especially in the .h field?
I think it is a bitwise AND between the data you have in p.data.midi.h and 0x70 (DEC = 112; BIN = 0111 0000), and then a right shift of 4, as you guessed.
Suppose p.data.midi.h holds the binary value 0101 0000; after GETCMD you'll have 101.
In this way you have discovered which bits are set to 1 in your data (the second nibble).
GETCH works on the first nibble (0x0F = BIN 0000 1111), then adds 1 for some reason which I don't know.
SETCMD_CH seems to set some bits of p.data.midi.h via the c1, c2 params:
p.data.midi.h = 0x80 | (c2-1) | ((c1 & 7) << 4)
p.data.midi.h = 1000 0000 | (c2-1) | ((c1 & 0000 0111) << 4)
With c1 I'm quite sure you can set one of the "commands".
I think you must think in binary in this case to solve and understand it.
Sorry if my answer causes you even more confusion :).

Single floating point decimal to binary in C [closed]

I am trying to write a C program that initializes a variable of type float to 1000.565300 and extracts the relevant fields of the IEEE 754 representation. I should extract the sign bit (bit 31), the exponent field (bits 30 to 23) and the significand field (bits 22 to 0), using bit masking and shift operations. My program should keep the extracted fields in 32-bit unsigned integers and print their values in hexadecimal and binary formats. And here is my program. I do not know how to do the bit masking.
Well, one easy way to do all this is:
Interpret the float's bits as an unsigned: uint32_t num = *(uint32_t*)&value
It means: I want to treat the address of value as the address of a 32-bit unsigned, and then take the value stored at that address. (Strictly speaking this pointer cast violates C's aliasing rules; memcpy-ing the bytes into the integer is the well-defined way to do it.)
Sign: int sign = (~(~0u>>1) & num) ? -1 : 1 // checks if the first bit of the float is 1 or 0; if it's 1, the number is negative
Exponent part: uint32_t exp = num & 0x7F800000
Mantissa: uint32_t mant = num & 0x007FFFFF
If you don't know masks :
0x7F800000 : 0 11111111 00000000000000000000000
0x007FFFFF : 0 00000000 11111111111111111111111
As for printing bits , you can use this function:
void printbits(uint32_t num)
{
    for (uint32_t m = ~(~0u >> 1); m; m >>= 1) /* initial m = 100..0, then 0100..0 and so on */
        putchar(num & m ? '1' : '0');
    putchar('\n');
}

Explanation of Output of Bitwise operations [duplicate]

Duplicate of: What are bitwise operators?
#include <stdio.h>

int main(void)
{
    int x = 7;
    printf("%d", x & (x - 1));
    int y = 6;
    printf("%d", y & (y - 1));
    printf("%d", y >> 2);
    return 0;
}
When I put in an odd number I get the output n-1, where n is the odd number, but when I put in an even number for y I get 0. I am not able to understand this; please help.
My second question is that when I print y>>2, that is 6>>2, I get the output 1. Please explain this to me as well. I know these are bitwise operations but my concept is not clear. Thanks.
Let's break each line up:
x&(x-1) => 0b111 & 0b110 => 0b110 => 6
... and:
y&(y-1) => 0b110 & 0b101 => 0b100 => 4
... and finally:
y>>2 => 0b110 >> 2 => 0b001 => 1
Remark: It's probably a good idea to review your knowledge of bitwise operations.
Bitwise operations are exactly that: you take your number and AND each bit with the corresponding bit of the other number.
That means if both numbers have a 1 in a slot, you output 1; otherwise you output 0.
so, for your example of 7 you have
0111
0110
result:
0110 (6)
for your example of 6 you have
0110
0101
result:
0100 (4)
right shift (>>) just shifts all the bits to the right, so if you take 6
0110
and shift all the bits to the right twice, you end up with
0001
or 1
When I put in an odd number I get the output n-1, where n is the odd number, but when I put in an even number for y I get 0. I am not able to understand this; please help.
With binary storage the lowest bit is always 1 for odd numbers, so ANDing an odd n with n-1 simply returns n-1 (only bit 0 differs between the two). For even numbers the result is not always 0: 8 gives 0 (1000 & 0111 = 0), but 6 does not (0110 & 0101 = 0100 = 4).
My second question is that when I print y>>2, that is 6>>2, I get the output 1. Please explain this to me as well. I know these are bitwise operations but my concept is not clear. Thanks.
It is like dividing by 2 twice, so 6 -> 3 -> 1.5, but the fractional part is truncated, leaving 1. In binary this would be 0110 -> 0011 -> 001.1 = 1.5 (decimal), but with truncation you get 0001.

c: bit reversal logic

I was looking at the bit reversal code below and wondering how one comes up with these kinds of things. (source: http://www.cl.cam.ac.uk/~am21/hakmemc.html)
/* reverse 8 bits (Schroeppel) */
unsigned reverse_8bits(unsigned41 a) {
return ((a * 0x000202020202) /* 5 copies in 40 bits */
& 0x010884422010) /* where bits coincide with reverse repeated base 2^10 */
/* PDP-10: 041(6 bits):020420420020(35 bits) */
% 1023; /* casting out 2^10 - 1's */
}
Can someone explain what the comment "where bits coincide with reverse repeated base 2^10" means?
Also, how does % 1023 pull out the relevant bits? Is there a general idea behind this?
It is a very broad question you are asking.
Here is an explanation of what % 1023 might be about: you know how computing n % 9 is like summing the digits of the base-10 representation of n? For instance, 52 % 9 = 7 = 5 + 2.
The code in your question is doing the same thing with 1023 = 1024 - 1 instead of 9 = 10 - 1. It is using the operation % 1023 to gather multiple results that have been computed “independently” as 10-bit slices of a large number.
And this is the beginning of a clue as to how the constants 0x000202020202 and 0x010884422010 are chosen: they make wide integer operations operate as independent simpler operations on 10-bit slices of a large number.
Expanding on Pascal Cuoq's idea, here is an explanation.
The general idea is that, in any base, if a number is divided by (base - 1), the remainder is the sum of all the digits of the number (reduced modulo base - 1).
For example, 34 when divided by 9 leaves 7 as remainder. This is because 34 can be written as 3 * 10 + 4
i.e. 34 = 3 * 10 + 4
= 3 * (9 +1) + 4
= 3 * 9 + (3 +4)
Now, 9 divides 3 * 9, leaving remainder (3 + 4). This process extends to any base b, since (b^n - 1) is always divisible by (b - 1).
Now, coming to the problem, if a number is represented in base 1024, and if the number is divided by 1023, the remainder will be sum of its digits.
To convert a binary number to base 1024, we group the bits into chunks of 10 starting from the right; each chunk is one base-1024 digit.
For example, to convert binary number 0x010884422010(0b10000100010000100010000100010000000010000) to base 1024, we can group it into 10 bits number as follows
(1) (0000100010) (0001000100) (0010001000) (0000010000) =
(0b0000000001)*1024^4 + (0b0000100010)*1024^3 + (0b0001000100)*1024^2 + (0b0010001000)*1024^1 + (0b0000010000)*1024^0
So, when this number is divided by 1023, the remainder will sum of
0b0000000001
+ 0b0000100010
+ 0b0001000100
+ 0b0010001000
+ 0b0000010000
--------------------
0b0011111111
If you observe the digits above closely, the '1' bits in the different digits occupy complementary positions. So, when added together, they pull out all 8 bits of the original number.
So, in the above code, a * 0x000202020202 creates 5 copies of the byte a. When the result is ANDed with 0x010884422010, we selectively choose 8 bits across the 5 copies of a. When % 1023 is applied, we pull all 8 bits together.
So, how does it actually reverse the bits? That is a bit clever. The idea is that the '1' bit in the digit 0b0000000001 is aligned with the MSB of the original byte, so the AND picks the original MSB into the lowest digit of the result. Similarly, the digit 0b0000100010 is aligned with the second and sixth bits from the MSB, and so on.
So, when you add up all the 10-bit digits of the masked product, the resulting number is the reverse of the original byte.
