MIL-STD-1750A to Decimal Conversion Examples - c

I am looking at some examples in the 1750A format webpage and some of the examples do not really make sense. I have included the 1750A format specification at the bottom of this post in case anyone isn't familiar with it.
Take this example from Table 3 of the 1750A format webpage:
.625x2^4 = 5000 00 04
In binary 5000 00 04 is 0101 0000 0000 0000 0000 0000 0000 0100
If you convert this to decimal, it does not equal 10, which is .625x2^4. Maybe I am converting it incorrectly.
Take the mantissa, 101 0000 0000 0000 0000 0000, and subtract 1, giving 100 1111 1111 1111 1111 1111. Then flip the bits, giving 011 0000 0000 0000 0000 0000. Move the decimal point 4 places (since our exponent, 0100, is 4), giving us 0110.0000 0000 0000 0000 000. This equals 6.0, which is not .625x2^4.
I believe the actual value should be 0011 0000 0000 0000 0000 0000 0000 0100, or 30000004 in hex.
Can anyone else confirm my suspicions that this value is labeled incorrectly in Table 3 of the 1750A format page above?
Thank you

As explained previously, the sign+mantissa is interpreted as a 2's-complement value between -1 and +1.
In your case, it's 0.101000000... (base 2), which is 1/2 + 1/8 = 0.625 (base 10).

It all makes perfect sense.
Here:
0101 0000 0000 0000 0000 0000 0000 0100
you've got:
(0*2^0 + 1*2^-1 + 0*2^-2 + 1*2^-3 + 0*2^-4 + ... + 0*2^-23) * 2^4 = (0.5 + 0.125) * 16 = 0.625 * 16 = 10
Just do the math.
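If it helps, here is a minimal C sketch (my own, not from the 1750A page) that decodes a 32-bit word this way; decode_1750a is a made-up helper name, and it assumes the usual layout: a 24-bit 2's-complement fractional mantissa followed by an 8-bit 2's-complement exponent.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Sketch: decode a MIL-STD-1750A 32-bit float.
   Upper 24 bits: 2's-complement fractional mantissa in [-1, +1).
   Lower 8 bits:  2's-complement exponent. */
static double decode_1750a(uint32_t w)
{
    int32_t mantissa = (int32_t)(w >> 8);      /* top 24 bits as a non-negative value */
    if (mantissa & 0x800000)                   /* sign-extend the 24-bit mantissa     */
        mantissa -= 0x1000000;
    int exponent = (int8_t)(w & 0xFFu);        /* 8-bit 2's-complement exponent       */
    return ldexp((double)mantissa / 8388608.0, exponent);   /* (mantissa / 2^23) * 2^exp */
}

int main(void)
{
    printf("%g\n", decode_1750a(0x50000004u)); /* prints 10, i.e. 0.625 x 2^4 */
    return 0;
}

Running it on 5000 00 04 prints 10, which matches Table 3 rather than the 6.0 worked out above.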

Related

m1 = 001, m2=101 How can I calculate Bitwise XOR between m1 and m2?

Could you check whether my solution is correct or not? Please tell me your opinions.
m1 = 001 => in 8 bits (8 bytes?), anyway I can express it as 0000 0001
m2 = 101 => the same way as m1, I can express it as 0000 0101
0000 0001
0000 0101
Where the bits are the same, the result bit will be 0;
where they differ, the result bit will be 1.
So the answer is
100 (0000 0100)
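For what it's worth, here is a quick C check of the same XOR (my own sketch; ^ is C's bitwise XOR operator):

#include <stdio.h>

int main(void)
{
    unsigned m1 = 1;             /* 001 -> 0000 0001 */
    unsigned m2 = 5;             /* 101 -> 0000 0101 */
    printf("%u\n", m1 ^ m2);     /* prints 4, i.e. 0000 0100 */
    return 0;
}

It prints 4 (binary 100), which agrees with the hand calculation.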

Tsql & operator

I just came across this command that I have never seen used before. What is the & operator doing in this line of code? It seems like ( @MyVar & 64 ) is just another way of writing @MyVar = 64.
DECLARE @MyVar INT
SET @MyVar = 16 -- Prints yes
SET @MyVar = 64 -- Prints no
IF ( ( @MyVar & 64 ) = 0 )
BEGIN
    SELECT 'yes'
END
ELSE
BEGIN
    SELECT 'no'
END
This is bitwise AND. In fact, as written, the expression ( @MyVar & 64 ) can only ever evaluate to 0 or 64, never any other number.
This is verbatim from https://learn.microsoft.com/en-us/sql/t-sql/language-elements/bitwise-and-transact-sql :
The & bitwise operator performs a bitwise logical AND between the two
values, taking each corresponding bit for both expressions. The bits
in the result are set to 1 if and only if both bits (for the current
bit being resolved) in the input expressions have a value of 1;
otherwise, the bit in the result is set to 0.
Let's see what this does:
DECLARE @myint int = 16
SELECT @myint & 64 [myint 0] --0
/*
--This is the bitwise AND representation for 16 & 64:
0000 0000 0100 0000 --64
0000 0000 0001 0000 --@MyVar = 16
-------------------
0000 0000 0000 0000 -- = 0 = 'Yes'
*/
SET @myint = 64
SELECT @myint & 64 [myint 64] --64
/*
--This is the bitwise AND representation for 64 & 64:
0000 0000 0100 0000 --64
0000 0000 0100 0000 --@MyVar = 64
-------------------
0000 0000 0100 0000 -- = 64 = 'No'
*/
This applies for other numbers as well, try 127 and 128:
/*
0000 0000 0100 0000 --64
0000 0000 0111 1111 --@MyVar = 127
-------------------
0000 0000 0100 0000 --64 = 'No'
0000 0000 0100 0000 --64
0000 0000 1000 0000 --@MyVar = 128
-------------------
0000 0000 0000 0000 --0 = 'Yes'
*/
127 & 64 = 64.
128 & 64 = 0.
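The same test ports directly to C (or any language with a bitwise AND); here is a small sketch of mine running the four values above through it:

#include <stdio.h>

int main(void)
{
    int values[] = { 16, 64, 127, 128 };
    for (int i = 0; i < 4; i++) {
        int v = values[i];
        /* (v & 64) is non-zero exactly when bit 6 of v is set */
        printf("%3d & 64 = %2d -> %s\n", v, v & 64, (v & 64) == 0 ? "yes" : "no");
    }
    return 0;
}

It prints yes, no, no, yes for 16, 64, 127 and 128 respectively, matching the tables above.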

Bit masking confusion

I get this result when I bitwise & -4227072 and 0x7fffff:
0b1111111000000000000000
These are the bit representations for the two values I'm &'ing:
-0b10000001000000000000000
0b11111111111111111111111
Shouldn't &'ing them together instead give this?
0b10000001000000000000000
Thanks.
-4227072 == 0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
-4227072 & 0x7fffff should be
0xFFBF8000 == 1111 1111 1011 1111 1000 0000 0000 0000
& 0x7fffff == 0000 0000 0111 1111 1111 1111 1111 1111
-----------------------------------------------------
0x003F8000 == 0000 0000 0011 1111 1000 0000 0000 0000
The negative number is represented as its 2's complement inside the computer's memory. The binary representation you have posted is thus misleading. In 2's complement, the most significant digit of a k-bit number has weight −2^(k−1); the remaining digits have the positive weights you expect.
Assuming you are dealing with 32 bit signed integers, we have:
  1111 1111 1011 1111 1000 0000 0000 0000 = −4227072 (decimal)
& 0000 0000 0111 1111 1111 1111 1111 1111 = 0x7fffff
————————————————————————————————————————————————————————————
0000 0000 0011 1111 1000 0000 0000 0000
Which is what you got.
To verify the first line:
−1 × 2^31 = −2147483648
 1 × 2^30 =  1073741824
 1 × 2^29 =   536870912
 1 × 2^28 =   268435456
 1 × 2^27 =   134217728
 1 × 2^26 =    67108864
 1 × 2^25 =    33554432
 1 × 2^24 =    16777216
 1 × 2^23 =     8388608
 1 × 2^21 =     2097152
 1 × 2^20 =     1048576
 1 × 2^19 =      524288
 1 × 2^18 =      262144
 1 × 2^17 =      131072
 1 × 2^16 =       65536
 1 × 2^15 =       32768
——————————————————————————————
            −4227072 ✓
0b10000001000000000000000 would be correct if your integer encoding were sign-magnitude.
That is possible on some early or unusual machines. Another answer explains well how negative integers are typically represented as 2's-complement numbers, in which case the result is what you observed: 0b1111111000000000000000.
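If you want to confirm the 2's-complement behaviour in code, here is a small C sketch of mine (it assumes a 32-bit int):

#include <stdio.h>

int main(void)
{
    int n = -4227072;   /* stored as 0xFFBF8000 in 32-bit 2's complement */
    printf("n            = 0x%08X\n", (unsigned)n);               /* FFBF8000 */
    printf("n & 0x7fffff = 0x%08X\n", (unsigned)(n & 0x7FFFFF));   /* 003F8000 = 0b1111111000000000000000 */
    return 0;
}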

Why is ~b=-6 if b=5?

I can't get the 2's-complement calculation to work.
I know that in C, ~b inverts all the bits, giving -6 if b=5. But why?
int b = 101 (binary for 5); inverting all the bits gives 010, and then for 2's-complement notation I just add 1, but that gives 011, i.e. 3, which is the wrong answer.
How should I calculate with the bit-inversion operator ~?
Actually, here's how 5 is usually represented in memory (16-bit integer):
0000 0000 0000 0101
When you invert 5, you flip all the bits to get:
1111 1111 1111 1010
That is actually -6 in decimal form. I think in your question, you were simply flipping the last three bits only, when in fact you have to consider all the bits that comprise the integer.
The problem with b = 101 (5) is that you have chosen one too few binary digits.
binary | decimal
~101 = 010 | ~5 = 2
~101 + 1 = 011 | ~5 + 1 = 3
If you choose 4 bits, you'll get the expected result:
binary | decimal
~0101 = 1010 | ~5 = -6
~0101 + 1 = 1011 | ~5 + 1 = -5
With only 3 bits you can encode integers from -4 to +3 in 2's complement representation.
With 4 bits you can encode integers from -8 to +7 in 2's complement representation.
-6 was getting truncated to 2 and -5 was getting truncated to 3 in 3 bits. You needed at least 4 bits.
And as others have already pointed out, ~ simply inverts all bits in a value, so, ~~17 = 17.
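To see that truncation concretely, here is a small C sketch (my addition) that masks the results down to 3 bits with & 0x7, versus leaving them at full width:

#include <stdio.h>

int main(void)
{
    int b = 5;
    printf("%d\n", ~b & 0x7);          /* keep only 3 bits: prints 2  */
    printf("%d\n", (~b + 1) & 0x7);    /* keep only 3 bits: prints 3  */
    printf("%d\n", ~b);                /* full width:       prints -6 */
    printf("%d\n", ~b + 1);            /* full width:       prints -5 */
    return 0;
}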
~b is not a 2-complement operation. It is a bitwise NOT operation. It just inverts every bit in a number, therefore ~b is unequal to -b.
Examples:
b = 5
binary representation of b: 0000 0000 0000 0101
binary representation of ~b: 1111 1111 1111 1010
~b = -6
b = 17
binary representation of b: 0000 0000 0001 0001
binary representation of ~b: 1111 1111 1110 1110
~b = -18
binary representation of ~(~b): 0000 0000 0001 0001
~(~b) = 17
~ simply inverts all the bits of a number:
~(~a)=17 if a=17
~0...010001 = 1...101110 ( = -18 )
~1...101110 = 0...010001 ( = 17 )
You need to add 1 only in case you want to negate a number (to get its 2's complement), i.e. to get -17 out of 17.
~b + 1 = -b
So:
~(~b) equals ~(-b - 1) equals -(-b - 1) -1 equals b
In fact, ~ reverses all the bits, and if you apply ~ again, they are reversed back.
I can't get the 2's-complement calculation to work.
I know that in C, ~b inverts all the bits, giving -6 if b=5. But why?
Because you are using two's complement. Do you know what two's complement is?
Let's say that we have a byte variable (signed char). Such a variable can hold the values 0 to 127 and -128 to -1.
Binary, it works like this:
0000 0000 // 0
...
0111 1111 // 127
1000 0000 // -128
1000 0001 // -127
...
1111 1111 // -1
Signed numbers are often described with a circle: incrementing past the largest positive value wraps around to the most negative one.
If you understand the above, then you understand why ~1 equals -2 and so on.
Had you used one's complement, then ~1 would have been -1, because one's complement uses a signed zero. For a byte, described with one's complement, values would go from 0 to 127 to -127 to -0 back to 0.
You declared b as an int. That means the value of b is stored in (typically) 32 bits, and the complement (~) operates on the whole 32-bit word, not just the last 3 bits as you are doing. (Only the low 16 bits are shown below.)
int b=5 // b in binary: 0000 0000 0000 0101
~b // ~b in binary: 1111 1111 1111 1010 = -6 in decimal
The most significant bit stores the sign of the integer (1: negative, 0: positive), so 1111 1111 1111 1010 is -6 in decimal.
Similarly:
b=17 // 17 in binary 0000 0000 0001 0001
~b // = 1111 1111 1110 1110 = -18
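A quick way to check all of this is to let the compiler do the work; here is a tiny C sketch (not from any of the answers above):

#include <stdio.h>

int main(void)
{
    int b = 5;
    printf("~5     = %d\n", ~b);        /* -6  */
    printf("~17    = %d\n", ~17);       /* -18 */
    printf("~~17   = %d\n", ~~17);      /* 17  */
    printf("~5 + 1 = %d\n", ~b + 1);    /* -5, the 2's-complement negation of 5 */
    return 0;
}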

n & (n-1) what does this expression do? [duplicate]

Possible Duplicates:
Query about working out whether number is a power of 2
What does this function do?
n & (n-1) - where can this expression be used?
It's figuring out if n is either 0 or an exact power of two.
It works because a binary power of two is of the form 1000...000 and subtracting one will give you 111...111. Then, when you AND those together, you get zero, such as with:
  1000 0000 0000 0000
& 0111 1111 1111 1111
  ===================
= 0000 0000 0000 0000
Any non-power-of-two input value (other than zero) will not give you zero when you perform that operation.
For example, let's try all the 4-bit combinations:
        <----- binary ---->
 n      n    n-1   n&(n-1)
--    ----  ----   -------
 0    0000  1111    0000 *
 1    0001  0000    0000 *
 2    0010  0001    0000 *
 3    0011  0010    0010
 4    0100  0011    0000 *
 5    0101  0100    0100
 6    0110  0101    0100
 7    0111  0110    0110
 8    1000  0111    0000 *
 9    1001  1000    1000
10    1010  1001    1000
11    1011  1010    1010
12    1100  1011    1000
13    1101  1100    1100
14    1110  1101    1100
15    1111  1110    1110
You can see that only 0 and the powers of two (1, 2, 4 and 8) result in a 0000/false bit pattern, all others are non-zero or true.
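If you want to regenerate that table yourself, here is a short C sketch (my own addition; the bin4 helper is just for printing) that wraps n-1 to 4 bits and prints the same rows:

#include <stdio.h>

/* Format the low 4 bits of v as a binary string (printing helper). */
static const char *bin4(unsigned v, char buf[5])
{
    for (int i = 0; i < 4; i++)
        buf[i] = (v & (8u >> i)) ? '1' : '0';
    buf[4] = '\0';
    return buf;
}

int main(void)
{
    char a[5], b[5], c[5];
    for (unsigned n = 0; n < 16; n++) {
        unsigned prev = (n - 1) & 0xFu;   /* wrap to 4 bits, so 0 - 1 becomes 1111 */
        unsigned both = n & prev;
        printf("%2u  %s  %s  %s%s\n", n, bin4(n, a), bin4(prev, b), bin4(both, c),
               both == 0 ? " *" : "");
    }
    return 0;
}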
It returns 0 if n is a power of 2 (NB: only works for n > 0). So you can test for a power of 2 like this:
#include <stdbool.h>   // for bool in C (not needed in C++)

bool isPowerOfTwo(int n)
{
    // n & (n - 1) is 0 only when n is 0 or an exact power of two
    return (n > 0) && ((n & (n - 1)) == 0);
}
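A quick usage check (my sketch; it repeats the function so the snippet compiles on its own):

#include <stdio.h>
#include <stdbool.h>

bool isPowerOfTwo(int n)
{
    return (n > 0) && ((n & (n - 1)) == 0);
}

int main(void)
{
    for (int n = 0; n <= 9; n++)
        printf("%d: %s\n", n, isPowerOfTwo(n) ? "power of two" : "not a power of two");
    return 0;
}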
It checks if n is a power of 2: What does the bitwise code "$n & ($n - 1)" do?
It's a bitwise AND between a number and the number one below it. The only way this expression can evaluate to zero (false) is if n is 0 or an exact power of 2, so testing whether it is non-zero essentially verifies that n isn't a power of 2.
