C: The Math Behind Negatives and Remainder

This seems to be the #1 thing that is asked when dealing with Remainder/Mod, and I'm kind of hitting a wall with it. I'm teaching myself to program with a textbook and a chunk of C code.
Seeing as I don't really have an instructor to say, "No, no. It actually works like this", I thought I'd try my hand here. I haven't found a conclusive answer to the mathematical part of this, though.
So... I'm under the impression that this is a pretty rare occurrence, but I'd still like to know what it is that happens underneath the shiny compiling. Plus, this textbook would like for me to supply all values that are possible when using negative remainders, per the C89 Standard. Would it be much to ask if someone could check to see if this math is sound?
1) 9%4
9 - (2) * 4 = 1 //this is a value based on x - (x/y) * y
(2) * 4 + (1) = 9 //this is a check based on (x/y) * y + (x%y) = x
2) -9%4
9 - (2) * 4 = 1; 9 - (3) * 4 = -3 //these are the possible values
(2) * 4 + (1) = 9; (3) * 4 + (-3) = 9 //these are the checks
3) 9%-4
Same values as #2??
I tried computing with negatives in the expressions, and came up with ridiculous things such as 17 and -33. Are they 1 and -3 for #3 as well??
4) -9%-4
Same as #1??
In algebraic division, negative signs "cancel". Do they do the same here, or is there something else going on?
I think the thing that gets me confused the most is the negatives. The way I learned algebra in school (5-6 years ago), they are "attached" to their numbers. In programming, since they are unary operators, is that not so? Example: When filling in the value for x on #2, x = 9 instead of x = -9.
I sincerely appreciate any help.

Here you need the mathematical definition on remainder.
Given two integer numbers m, d, we say that r is the remainder of the division of m by d if r satisfies two conditions:
There exists another integer k such that m == k * d + r , and
0 <= r < abs(d).
For positive numbers, in C, we have m % d == r and m / d == k, just by following the definition above.
From the definition, it follows that 3 % 2 == 1 and 3 / 2 == 1.
Other examples:
4 / 3 == 1 and 5 / 3 == 1, even though 5.0/3.0 == 1.6666... (which would round to 2.0).
4 % 3 == 1 and 5 % 3 == 2.
You can also rely on the formula r = m - k * d, which in C is written as:
m % d == m - (m / d) * d
However, in standard C, integer division rounds toward 0 (truncation).
Thus, with negative operands, C offers different results from the mathematical ones.
We would have:
(-4) / 3 == -1, (-4) % 3 == -1 (in C), but in plain maths: (-4) / 3 = -2, (-4) % 3 = 2.
In plain maths, the remainder is always nonnegative and less than abs(d).
In standard C, the remainder always has the sign of the first operand.
+-----+-----+-----+-----+
|  m  |  d  |  /  |  %  |
+-----+-----+-----+-----+
|  4  |  3  |  1  |  1  |
+-----+-----+-----+-----+
| -4  |  3  | -1  | -1  |
+-----+-----+-----+-----+
|  4  | -3  | -1  |  1  |
+-----+-----+-----+-----+
| -4  | -3  |  1  | -1  |
+-----+-----+-----+-----+
Remark: This description (in the negative case) is for standard C99/C11 only; in C89 the rounding direction for negative operands is implementation-defined. You must be careful with your compiler version, and do some tests.
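If you want to check this on your own compiler, a minimal test program (a sketch assuming a hosted environment with printf) reproduces the table:

#include <stdio.h>

int main(void) {
    /* Print quotient and remainder for every sign combination.
       On a C99/C11 compiler this reproduces the table above. */
    int ms[] = { 4, -4, 4, -4 };
    int ds[] = { 3, 3, -3, -3 };
    int i;

    for (i = 0; i < 4; i++) {
        int m = ms[i], d = ds[i];
        printf("%3d / %3d = %3d, %3d %% %3d = %3d\n",
               m, d, m / d, m, d, m % d);
        /* The identity (m/d)*d + (m%d) == m always holds. */
    }
    return 0;
}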

Like Barmar's linked answer says, modulus in a mathematical sense means that numbers are in the same equivalence class for a ring (my algebra theory is a bit rusty, so sorry if the terms are used a bit loosely).
So modulo 5 means that you have a ring of size 5, i.e. 0, 1, 2, 3, 4; when you add 1 to 4 you are back at zero. So -9, -4, 1, 6, 11, 16 are all the same modulo 5 because they are all equivalent. This is actually very important for various algebra theorems, but for normal programmers it's pretty much useless.
Basically the old standard left this implementation-defined, so the modulus returned for negative numbers just has to be one of those equivalent numbers in the class; it's not necessarily the mathematical remainder. Your best bet in situations like this is to operate on absolute values when using the modulo operator if you want basic integer division. If you are using more advanced techniques (like public key encryption) you'll probably need to brush up on your math a little more.
For now I'd say stick with positive ints in this case and have fun programming something interesting.
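If what you actually want is the mathematical, always-nonnegative remainder on a C99 compiler, a common sketch is a small wrapper (mod_nonneg is just an illustrative name, not a standard function):

#include <stdlib.h>  /* abs */

/* Always-nonnegative remainder, in [0, abs(d)). Assumes d != 0. */
int mod_nonneg(int m, int d) {
    int r = m % d;        /* C99 guarantees r has the sign of m (or is 0) */
    if (r < 0)
        r += abs(d);
    return r;
}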

Related

Is there any possibility to recover A in "A & B = C" with given B and C?

I would like to ask: where A, B, and C are binary numbers, after getting C = A & B (& is the AND operator), is there any possibility to recover A from B and C?
I know that the information of A will be lost through the operation. Can we form a function like B <...> C = A, and how complex would it be?
For example:
A = 0011
B = 1010
C = A & B = 0010
The 2nd bit of C is 1, i.e. 2nd bit of A and B must be 1. However, the other bits lack information to be recovered.
Thank you in advance.
No, it's not possible. You can see this from the truth table for AND:
A B | C = A & B
0 0 |     0
0 1 |     0
1 0 |     0
1 1 |     1
Suppose you know that B is 0 and C is 0. A could be either 1 or 0, so it cannot be deduced from B and C.
You can recover only bits of A that have 1s in the corresponding bits of B. For bits of B that have zeros it does not matter what A has in the corresponding position, because the bit in C would be zero anyway:
A = 1xx0x011x0
B = 1001011101
----------
C = 1000001100
Positions of A marked with x can be zeros or ones; the information in them is going to be lost either way.
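To make that concrete, here is a small brute-force sketch (using the 4-bit values from the question) that prints every A consistent with the given B and C:

#include <stdio.h>

int main(void) {
    unsigned B = 0xA;  /* 1010, from the example in the question */
    unsigned C = 0x2;  /* 0010 */
    unsigned A;

    /* Every A whose bits agree with C wherever B has a 1 satisfies A & B == C;
       the bits where B is 0 are free, so the solution is not unique. */
    for (A = 0; A < 16; A++)
        if ((A & B) == C)
            printf("A = %u (binary %u%u%u%u)\n", A,
                   (A >> 3) & 1, (A >> 2) & 1, (A >> 1) & 1, A & 1);
    return 0;
}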
Assuming you are just talking binary logic not C variables, then no.
Consider:
a=0111, b=1010 therefore c=0010
So you have b=1010, c=0010 so now how can you find a?
The leftmost bit in c is a 0, and in b it is 1, so we know that in a it must be 0.
The second bit in c is 0, in b it is 0 so you can't tell what it was in a (either 1 or 0 leads to a 0 in c)
At this point we've proven you can't do it.
No, because there isn't a unique solution. Any value of A that has the same bits set as B would satisfy the equation, regardless of the other bits.
This is a question about equations. It is not possible as the degree of freedom is not zero. It is the same as asking a+b = 10 -- what is a and what is b?
You can't recover A, but you can write A = (X & ~B) ^ C. Here, X can be anything (and it gives all the A's).
Of course this will work only for B and C such that C & ~B == 0.
This is a parametrized solution. Example in python
>>> A = 32776466
>>> B = 89773888
>>> C = A & B
>>> C
22020352
>>> X = 1234567890 # arbitrary value
>>> U = (X & ~B) ^ C
>>> U
1238761874
>>> U & B # same result as A & B
22020352

C Bit Operation Logic (bitAnd)

Background:
I stumbled across bitwise operators in C here, and now I am trying to learn more about them. I searched around for exercises and came across this.
However, I'm having trouble understanding the first one "bitAnd."
The code reads:
/*
 * bitAnd - x&y using only ~ and |
 * Example: bitAnd(6, 5) = 4
 * Legal ops: ~ |
 * Max ops: 8
 * Rating: 1
 */
int bitAnd(int x, int y) {
    /* NOR equivalent of AND */
    return ~(~x | ~y);
}
Question:
Now, I thought that the pipe ( | ) means "or." So, why do we need ~x and ~y? Can't we just say something like:
int bitAnd(int x, int y) {
    int or = x | y;  // something is one number or the other
    int and = ~or;   // not or is the same as and
    return and;
}
I wrote and ran the second code sample for myself (I have a main function to run it). I get -8 as the answer, but with values 6 and 5, you should get 4.
If you have something for "or" (the pipe operator) and "and" is just the opposite or, why do we need to use "~" on each value before we calculate ~and?
Some extra information/thoughts:
I understand that "~" flips all the bits in the value. "Or" copies the bit from either value into the other if it exists (I learned that from here). So, if I have:
6 = 00000110
and
5 = 00000101
I should get 00000111.
I only mention that to show what knowledge I have of some of the operations in case my understanding of those are wrong as well.
This is typical logic-gates knowledge: an AND gate is equivalent to a NOR gate fed with inverted inputs, i.e. a AND b = (NOT a) NOR (NOT b).
Let's see what happens. Suppose you have values as such:
a = 00111 => 7
b = 01001 => 9
a AND b = 00001 => 1
This is what we expect. Let's run it through your shared method, the first one:
~a = 11000 => 24
~b = 10110 => 22
~a | ~b = 11110 => 30
~(~a | ~b) = 00001 => 1 as we expect.
Now, let's run your second proposed method.
or = 01111 => 15
and = ~or = 10000 => 16.
Now you have a problem. Logically, what you do is this:
~(a | b) = ~a AND ~b.
Is it really true though?
~a = 11000 => 24
~b = 10110 => 22
~a AND ~b = 10000 => 16.
It agrees with what I said above; however, it's wrong, as you can see: we want 1, not 16. The bitwise inverse "~" follows De Morgan's laws: when you push it inside an expression it inverts the operands and swaps the operations as well, so an "or" becomes an "and" and an "and" becomes an "or". I hope that clears it up.
The solution you provided uses De Morgan's laws, which say that not (A and B) = not A or not B. Since you need to compute A and B, you negate everything once more and you get: A and B = not (not (A and B)) = not (not A or not B).
Now, you can also think in terms of truth tables to see why this is true and why your claim isn't. I won't show everything in detail, but in your version, not (A or B) gives 1 when both A and B are 0, which is not consistent with the and operation.
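To see the difference concretely, here is a small test program (a sketch assuming a hosted compiler with printf) comparing the two versions on the 6-and-5 example:

#include <stdio.h>

int bitAnd(int x, int y) {
    return ~(~x | ~y);   /* De Morgan: equal to x & y */
}

int bitNor(int x, int y) {
    return ~(x | y);     /* this is really ~x & ~y, not x & y */
}

int main(void) {
    printf("6 & 5      = %d\n", 6 & 5);             /* 4 */
    printf("~(~6 | ~5) = %d\n", bitAnd(6, 5));      /* 4 */
    printf("~(6 | 5)   = %d\n", bitNor(6, 5));      /* -8 on two's complement */
    return 0;
}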

Logic Proof of Associative Property for XOR

I came across a common programming interview problem: given a list of unsigned integers, find the one integer which occurs an odd number of times in the list. For example, if given the list:
{2,3,5,2,5,5,3}
the solution would be the integer 5 since it occurs 3 times in the list while the other integers occur even number of times.
My original solution involved setting up a sorted array, then iterating through the array: For each odd element I would add the integer, while for each even element I would subtract; the end sum was the solution as the other integers would cancel out.
However, I discovered that a more efficient solution existed by simply performing an XOR on each element -- you don't even need a sorted array! That is to say:
2^3^5^2^5^5^3 = 5
I recall from a Discrete Structures class I took that the Associative Property is applicable to the XOR operation, and that's why this solution works:
a^a = 0
and:
a^a^a = a
Though I remember that the Associative Property works for XOR, I'm having trouble finding a logical proof for this property specific to XOR (most logic proofs on the Internet seem more focused on the AND and OR operations). Does anyone know why the Associative Property applies to the XOR operation?
I suspect it involves an XOR identity containing AND and/or OR.
The associative property says that (a^b)^c = a^(b^c). Since XOR is bitwise (the bits in the numbers are treated in parallel), we merely need to consider XOR for a single bit. Then the proof can be done by examining all possibilities:
abc | a^b | (a^b)^c | b^c | a^(b^c)
000 |  0  |    0    |  0  |    0
001 |  0  |    1    |  1  |    1
010 |  1  |    1    |  1  |    1
011 |  1  |    0    |  0  |    0
100 |  1  |    1    |  0  |    1
101 |  1  |    0    |  1  |    0
110 |  0  |    0    |  1  |    0
111 |  0  |    1    |  0  |    1
Since the third column, (a^b)^c, is identical to the fifth column, a^(b^c), the associative property holds.
Given that a ^ b == (~a & b) | (a & ~b), you can prove that:
(a ^ b) ^ c = (~((~a & b) | (a & ~b)) & c) | (((~a & b) | (a & ~b)) & ~c)
and
a ^ (b ^ c) = (a & ~((~b & c) | (b & ~c))) | (~a & ((~b & c) | (b & ~c)))
are equal.
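If you prefer a machine check to a truth table, a small brute-force test (a sketch, not part of the original answers) verifies associativity for all 8-bit values and runs the odd-occurrence trick from the question:

#include <stdio.h>

int main(void) {
    unsigned a, b, c;

    /* Exhaustive check of (a^b)^c == a^(b^c) for all 8-bit values.
       XOR works bit-by-bit, so this also covers wider integer types. */
    for (a = 0; a < 256; a++)
        for (b = 0; b < 256; b++)
            for (c = 0; c < 256; c++)
                if (((a ^ b) ^ c) != (a ^ (b ^ c)))
                    printf("counterexample: %u %u %u\n", a, b, c);

    /* The odd-occurrence trick from the question: XOR everything together. */
    {
        unsigned list[] = { 2, 3, 5, 2, 5, 5, 3 };
        unsigned acc = 0;
        size_t i;
        for (i = 0; i < sizeof list / sizeof list[0]; i++)
            acc ^= list[i];
        printf("odd-occurrence element: %u\n", acc);  /* prints 5 */
    }
    return 0;
}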

mathematical equation for AND bitwise operation?

In a shift left operation, for example,
5 << 1 = 10
10 << 1 = 20
so a mathematical equation can be made: n << 1 = n * 2.
If there is an equation for a shift left operation, is it possible that there is also a mathematical equation for an AND operation, or any other bitwise operators?
There is no straightforward single operation that maps to every bitwise operation. However, they can all be simulated through iterative means (or one really long formula).
(a & b)
can be done with:
(((a/1 % 2) * (b/1 % 2)) * 1) +
(((a/2 % 2) * (b/2 % 2)) * 2) +
(((a/4 % 2) * (b/4 % 2)) * 4) +
...
(((a/n % 2) * (b/n % 2)) * n)
where n is 2 raised to one less than the number of bits in a and b. This assumes integer division (the remainder is discarded).
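A direct translation of that sum into C might look like this (a sketch for unsigned values; the loop stops once n exceeds both operands):

#include <stdio.h>

/* AND computed with only division, remainder, multiplication and addition,
   following the per-bit formula above. */
unsigned and_arith(unsigned a, unsigned b) {
    unsigned result = 0;
    unsigned n;
    for (n = 1; n != 0 && (n <= a || n <= b); n *= 2)
        result += ((a / n % 2) * (b / n % 2)) * n;
    return result;
}

int main(void) {
    printf("%u\n", and_arith(6, 5));    /* 4 */
    printf("%u\n", and_arith(12, 10));  /* 8 */
    return 0;
}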
That depends on what you mean by "mathematical equation". There is no easy arithmetic one.
If you look at it from a formal number-theoretic standpoint you can describe bitwise "and" (and "or" and "xor") using only addition, multiplication and -- and this is a rather big "and" from the lay perspective -- first-order predicate logic. But that is most certainly not what you meant, not least because these tools are enough to describe anything a computer can do at all.
Except for specific circumstances, it is not possible to describe bitwise operations in other mathematical operations.
An AND operation with 2^n - 1 is the same as a modulus operation with 2^n. An AND with the bitwise complement of 2^n - 1 can be seen as a division by 2^n, a truncation, and a multiplication by 2^n.
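For unsigned values you can check both identities quickly (a small sketch with n = 4, i.e. 2^n = 16):

#include <stdio.h>

int main(void) {
    unsigned x = 1234;
    unsigned pow2 = 16;          /* 2^4 */
    unsigned mask = pow2 - 1;    /* 2^4 - 1 = 15 */

    /* x & (2^n - 1) is the same as x % 2^n */
    printf("%u %u\n", x & mask, x % pow2);           /* 2 2 */

    /* x & ~(2^n - 1) is the same as (x / 2^n) * 2^n */
    printf("%u %u\n", x & ~mask, (x / pow2) * pow2); /* 1232 1232 */
    return 0;
}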
It depends on what you mean by “mathematical”. If you are looking for simple school algebra, then answer is no. But mathematics is not sacred — mathematicians define new operations and concepts all the time.
For example, you can represent 32-bit numbers as vectors of 32 booleans, and then define “AND” operation on them which does standard boolean “and” between their corresponding elements.
Yes, they are sums. Consider a binary word of length n. It can be written as the following:
A = a0*2^0 + a1*2^1 + a2*2^2 + ... + an*2^n, where each ai is an element of {0,1}.
Therefore, if an is a bit in A and bn is a bit in B, then:
A and B = a0*b0*2^0 + a1*b1*2^1 + ... + an*bn*2^n
similarly
A xor B = ((a0+b0) mod 2)*2^0 + ((a1+b1) mod 2)*2^1 + ... + ((an+bn) mod 2)*2^n
Consider now the identity (applied bit by bit):
A xor 1 = not A
We now have the three operators we need (Bitwise AND,Bitwise XOR and Bitwise NOT)
From these two we can make anything we want.
For example, bitwise OR
not[(not A) and (not B)] = not[not(A or B)] = A or B
It's not guaranteed to be pretty though.
In response to the comment that mod-2 arithmetic is not very basic: that's true in a sense. However, while it's common nowadays because of the prevalence of computers, the entire subject we are touching on here is not particularly "basic". The OP has grasped something fundamental. There are finite algebraic structures studied in the mathematical field known as "abstract algebra", such as addition and multiplication modulo n (where n is some number such as 2, 8 or 2^32). There are other structures using binary operations (addition is a binary operation: it takes two operands and produces a result; so are multiplication and xor) such as xor, and, bit shifts, etc., that are "isomorphic" to addition and multiplication over the integers mod n. That means they act the same way: they are associative, distributive, and so on (although they may or may not be commutative; think of matrix multiplication). It's hard to tell someone where to start looking for more information. I guess the best way would be to start with a book on formal mathematics (mathematical proofs); you need that to understand any advanced mathematics text. Then a text on abstract algebra. If you're a computer science major you will get a lot of this in your classes. If you're a mathematics major, you will study these things in depth all in good time. If you're a history major, I'm not knocking history (I'm a history channel junkie), but you should switch majors, because you're wasting your talents!
Here is a proof that, for 2-bit values (arithmetic mod 4), you cannot describe & with
just +, - and * (check this; I just came up with it now, so, who knows):
The question is, can we find a polynomial
x & y == P(x, y)
where
P(x, y) = a0_0 + a1_0*x + a0_1*y + a2_0*x^2 + ...
Here's what it would have to look like:
     0  1  2  3
   -------------
0 |  0  0  0  0
1 |  0  1  0  1
2 |  0  0  2  2
3 |  0  1  2  3
First, clearly a0_0 == 0. Next you can see that if P is
rewritten:
P(x, y) = xy*R(x, y) + Q(x, y),  where Q(x, y) = a1_0*x + a0_1*y + ...
(Q collects all the terms that involve only x or only y.)
And y is held 0, while x varies over 0, 1, 2, 3; then Q(x, y) must be 0 for
each of those values. Likewise if x is held 0 and y varied. So Q(x, y)
may be set to 0 without loss of generality.
But now, since P(2, 2) would have to be 2, yet 2 * 2 == 0 (mod 4), the polynomial P cannot
exist.
And, I think this would generalize to more bits, too.
So the answer is, if you're looking for just +, * and -, no you can't do
it.

Fast Hypotenuse Algorithm for Embedded Processor?

Is there a clever/efficient algorithm for determining the hypotenuse of a right triangle (i.e. sqrt(a² + b²)), using fixed point math on an embedded processor without hardware multiply?
If the result doesn't have to be particularly accurate, you can get a crude
approximation quite simply:
Take absolute values of a and b, and swap if necessary so that you have a <= b. Then:
h = ((sqrt(2) - 1) * a) + b
To see intuitively how this works, consider the way that a shallow angled line is plotted on a pixel display (e.g. using Bresenham's algorithm). It looks something like this:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| | | | | | | | | | | | | | | | |*|*|*| ^
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | | | | | | | | | |*|*|*|*| | | | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | | | | | |*|*|*|*| | | | | | | | a pixels
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
| | | | |*|*|*|*| | | | | | | | | | | | |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|*|*|*|*| | | | | | | | | | | | | | | | v
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
<-------------- b pixels ----------->
For each step in the b direction, the next pixel to be plotted is either immediately to the right, or one pixel up and to the right.
The ideal line from one end to the other can be approximated by the path which joins the centre of each pixel to the centre of the adjacent one. This is a series of a segments of length sqrt(2), and b-a segments of length 1 (taking a pixel to be the unit of measurement). Hence the above formula.
This clearly gives an accurate answer for a == 0 and a == b; but gives an over-estimate for values in between.
The error depends on the ratio b/a; the maximum error occurs when b = (1 + sqrt(2)) * a and turns out to be 2/sqrt(2+sqrt(2)), or about 8.24% over the true value. That's not great, but if it's good enough for your application, this method has the advantage of being simple and fast. (The multiplication by a constant can be written as a sequence of shifts and adds.)
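On a processor without hardware multiply, the constant multiplication can indeed be reduced to shifts and adds. A minimal fixed-point sketch (the shift amounts approximate sqrt(2) - 1 ≈ 0.4142 as 0.4140625, which is plenty for a method with ~8% worst-case error):

#include <stdlib.h>   /* abs */

/* Crude hypotenuse: h ~= (sqrt(2) - 1) * a + b, with a <= b.
   (a >> 2) + (a >> 3) + (a >> 5) + (a >> 7) ~= 0.4140625 * a. */
unsigned approx_hypot(int x, int y) {
    unsigned a = abs(x), b = abs(y), t;
    if (a > b) { t = a; a = b; b = t; }      /* ensure a <= b */
    return b + (a >> 2) + (a >> 3) + (a >> 5) + (a >> 7);
}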
For the record, here are a few more approximations, listed in roughly
increasing order of complexity and accuracy. All these assume 0 ≤ a ≤ b.
h = b + 0.337 * a // max error ≈ 5.5 %
h = max(b, 0.918 * (b + (a>>1))) // max error ≈ 2.6 %
h = b + 0.428 * a * a / b // max error ≈ 1.04 %
Edit: to answer Ecir Hana's question, here is how I derived these
approximations.
First step. Approximating a function of two variables can be a
complex problem. Thus I first transformed this into the problem of
approximating a function of one variable. This can be done by choosing
the longest side as a “scale” factor, as follows:
h = √(b² + a²)
  = b √(1 + (a/b)²)
  = b f(a/b)    where f(x) = √(1 + x²)
Adding the constraint 0 ≤ a ≤ b means we are only concerned with
approximating f(x) in the interval [0, 1].
Below is the plot of f(x) in the relevant interval, together with the
approximation given by Matthew Slattery (namely (√2−1)x + 1).
Second step. Next step is to stare at this plot, while asking
yourself the question “how can I approximate this function cheaply?”.
Since the curve looks roughly parabolic, my first idea was to use a
quadratic function (third approximation). But since this is still
relatively expensive, I also looked at linear and piecewise linear
approximations; these are the three solutions listed above.
The numerical constants (0.337, 0.918 and 0.428) were initially free
parameters. The particular values were chosen in order to minimize the
maximum absolute error of the approximations. The minimization could
certainly be done by some algorithm, but I just did it “by hand”,
plotting the absolute error and tuning the constant until it is
minimized. In practice this works quite fast. Writing the code to
automate this would have taken longer.
Third step is to come back to the initial problem of approximating a
function of two variables:
h ≈ b (1 + 0.337 (a/b)) = b + 0.337 a
h ≈ b max(1, 0.918 (1 + (a/b)/2)) = max(b, 0.918 (b + a/2))
h ≈ b (1 + 0.428 (a/b)²) = b + 0.428 a²/b
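For reference, the three approximations written out as C functions (floating point here for readability; on the target they would use fixed point and shift/add constants; all assume 0 ≤ a ≤ b, and the quadratic one needs b > 0):

/* Constants are the hand-tuned values above. */

double hypot_linear(double a, double b) {      /* max error ~5.5 % */
    return b + 0.337 * a;
}

double hypot_piecewise(double a, double b) {   /* max error ~2.6 % */
    double h = 0.918 * (b + a / 2);
    return h > b ? h : b;
}

double hypot_quadratic(double a, double b) {   /* max error ~1.04 % */
    return b + 0.428 * a * a / b;
}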
Consider using CORDIC methods. Dr. Dobb's has an article and associated library source here. Square-root, multiply and divide are dealt with at the end of the article.
One possibility looks like this:
#include <math.h>

/* Iterations    Accuracy
 *  2            6.5 digits
 *  3            20 digits
 *  4            62 digits
 * assuming a numeric type able to maintain that degree of accuracy in
 * the individual operations.
 */
#define ITER 3

double dist(double P, double Q) {
    /* A reasonably robust method of calculating `sqrt(P*P + Q*Q)'
     *
     * Transliterated from _More Programming Pearls, Confessions of a Coder_
     * by Jon Bentley, pg. 156.
     */
    double R;
    int i;

    P = fabs(P);
    Q = fabs(Q);
    if (P < Q) {
        R = P;
        P = Q;
        Q = R;
    }
    /* The book has this as:
     *   if P = 0.0 return Q;   # in AWK
     * However, this makes no sense to me - we've just ensured that P>=Q, so
     * P==0 only if Q==0; OTOH, if Q==0, then distance == P...
     */
    if (Q == 0.0)
        return P;
    for (i = 0; i < ITER; i++) {
        R = Q / P;
        R = R * R;
        R = R / (4.0 + R);
        P = P + 2.0 * R * P;
        Q = Q * R;
    }
    return P;
}
This still does a couple of divides and four multiplies per iteration, but you rarely need more than three iterations (and two is often adequate) per input. At least with most processors I've seen, that'll generally be faster than the sqrt would be on its own.
For the moment it's written for doubles, but assuming you've implemented the basic operations, converting it to work with fixed point shouldn't be terribly difficult.
Some doubts have been raised by the comment about "reasonably robust". At least as originally written, this was basically a rather backhanded way of saying that "it may not be perfect, but it's still at least quite a bit better than a direct implementation of the Pythagorean theorem."
In particular, when you square each input, you need roughly twice as many bits to represent the squared result as you did to represent the input value. After you add (which needs only one extra bit) you take the square root, which gets you back to needing roughly the same number of bits as the inputs. Unless you have a type with substantially greater precision than the inputs, it's easy for this to produce really poor results.
This algorithm doesn't square either input directly. It is still possible for an intermediate result to underflow, but it's designed so that when it does so, the result still comes out as well as the format in use supports. Basically, the situation in which it happens is that you have an extremely acute triangle (e.g., something like 90 degrees, 0.000001 degrees, and 89.99999 degrees). If it's close enough to 90, 0, 90, we may not be able to represent the difference between the two longer sides, so it'll compute the hypotenuse as being the same length as the other long side.
By contrast, when the Pythagorean theorem fails, the result will often be a NaN (i.e., tells us nothing) or, depending on the floating point format in use, quite possibly something that looks like a reasonable answer, but is actually wildly incorrect.
You can start by reevaluating if you need the sqrt at all. Many times you are calculating the hypotenuse just to compare it to another value - if you square the value you're comparing against you can eliminate the square root altogether.
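For example, a range check can be done entirely on squared values (a sketch; be careful that the squares fit in the chosen integer type):

/* True if the point (a, b) lies within `limit' of the origin, with no sqrt.
   Assumes a*a + b*b and limit*limit do not overflow a long. */
int within_range(long a, long b, long limit) {
    return a * a + b * b <= limit * limit;
}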
Unless you're doing this at >1kHz, multiply even on a MCU without hardware MUL isn't terrible. What's much worse is the sqrt. I would try to modify my application so it doesn't need to calculate it at all.
Standard libraries would probably be best if you actually need it, but you could look at using Newton's method as a possible alternative. It would require several multiply/divide cycles to perform, however.
AVR resources
Atmel App note AVR200: Multiply and Divide Routines (pdf)
This sqrt function on AVR Freaks forum
Another AVR Freaks post
Maybe you could use some of Elm Chan's Assembler Libraries and adapt the ihypot function to your ATtiny. You would need to replace the MUL and maybe (I haven't checked) some other instructions.
