What are the parts of a XOR called? - xor

What are the parts of a XOR called?
A xor B = C
What are A and B called?
For Division it is:
A / B = C
A = dividend, B = divisor, C = quotient
For sums (and XOR is symmetric, as the sum is) it is
A + B = C
summand for A and B, and sum for C
But I am missing a term for XOR. What is it called, and is there even one?
Sure, you could go with operand or parameter or input or [...], but that is very generic; I would like to have a non-generic term.

As Soonts mentioned, the general name for thing operation thing would be an operand.
XOR being a logical or boolean operation, you could call it a logical / boolean operand. Often they are also called inputs. Depending on usage, set or statement also works.

(As a friend of mine suggested)
You could call it XORant, so the full version would be
Xorant1 ExclusiveOred Xorant2 ResultsIn TheXORed
But I have no idea if this is commonly used.


Why does a=(b++) have the same behavior as a=b++?

I am writing a small test app in C with GCC 4.8.4 pre-installed on my Ubuntu 14.04, and I was confused by the fact that the expression a=(b++); behaves the same way as a=b++;. The following simple code is used:
#include <stdint.h>
#include <stdio.h>
int main(int argc, char* argv[])
{
    uint8_t a1, a2, b1 = 10, b2 = 10;
    a1 = (b1++);
    a2 = b2++;
    printf("a1=%u, a2=%u, b1=%u, b2=%u.\n", a1, a2, b1, b2);
    return 0;
}
The result after gcc compilation is a1=a2=10, while b1=b2=11. However, I expected the parentheses to have b1 incremented before its value is assigned to a1.
Namely, a1 should be 11 while a2 equals 10.
Does anyone have an idea about this issue?
However, I expected the parentheses to have b1 incremented before its value is assigned to a1
You should not have expected that: placing parentheses around an increment expression does not alter the application of its side effects.
Side effects (in this case, it means writing 11 into b1) get applied some time after retrieving the current value of b1. This could happen before or after the full assignment expression is evaluated completely. That is why a post-increment will remain a post-increment, with or without parentheses around it. If you wanted a pre-increment, place ++ before the variable:
a1 = ++b1;
Quoting from C99 §6.5.2.4:
The result of the postfix ++ operator is the value of the operand. After the result is obtained, the value of the operand is incremented. (That is, the value 1 of the appropriate type is added to it.) See the discussions of additive operators and compound assignment for information on constraints, types, and conversions and the effects of operations on pointers. The side effect of updating the stored value of the operand shall occur between the previous and the next sequence point.
You can look up C99 Annex C to see what the valid sequence points are.
In your expression, merely adding parentheses doesn't change the sequence points; only the ; character does that.
In other words, you can view it as if a temporary copy of b is made, and the side effect is the original b being incremented. Until a sequence point is reached, all evaluation is done on the temporary copy of b. The temporary copy is then discarded, and the side effect (the increment operation) is committed to storage, when a sequence point is reached.
Parentheses can be tricky to think about. But they do not mean, "make sure that everything inside happens first".
Suppose we have
a = b + c * d;
The higher precedence of multiplication over addition tells us that the compiler will arrange to multiply c by d, and then add the result to b. If we want the other interpretation, we can use parentheses:
a = (b + c) * d;
But suppose that we have some function calls thrown into the mix. That is, suppose we write
a = x() + y() * z();
Now, while it's clear that the return value of y() will be multiplied by the return value of z(), can we say anything about the order that x(), y(), and z() will be called in? The answer is, no, we absolutely cannot! If you're at all unsure, I invite you to try it, using x, y, and z functions like this:
int x() { printf("this is x()\n"); return 2; }
int y() { printf("this is y()\n"); return 3; }
int z() { printf("this is z()\n"); return 4; }
The first time I tried this, using the compiler in front of me, I discovered that function x() was called first, even though its result is needed last. When I changed the calling code to
a = (x() + y()) * z();
the order of the calls to x, y, and z stayed exactly the same, the compiler just arranged to combine their results differently.
Finally, it's important to realize that expressions like i++ do two things: they take i's value and add 1 to it, and then they store the new value back into i. But the store back into i doesn't necessarily happen right away, it can happen later. And the question of "when exactly does the store back into i happen?" is sort of like the question of "when does function x get called?". You can't really tell, it's up to the compiler, it usually doesn't matter, it will differ from compiler to compiler, if you really care, you're going to have to do something else to force the order.
And in any case, remember that the definition of i++ is that it gives the old value of i out to the surrounding expression. That's a pretty absolute rule, and it can not be changed just by adding some parentheses! That's not what parentheses do.
Let's go back to the previous example involving functions x, y, and z. I noticed that function x was called first. Suppose I didn't want that, suppose I wanted functions y and z to be called first. Could I achieve that by writing
a = x() + ((y() * z()))?
I could write that, but it doesn't change anything. Remember, the parentheses don't mean "do everything inside first". They do cause the multiplication to happen before the addition, but the compiler was already going to do it that way anyway, based on the higher precedence of multiplication over addition.
Up above I said, "if you really care, you're going to have to do something else to force the order". What you generally have to do is use some temporary variables and some extra statements. (The technical term is "insert some sequence points.") For example, to cause y and z to get called first, I could write
c = y();
d = z();
b = x();
a = b + c * d;
In your case, if you wanted to make sure that the new value of b got assigned to a, you could write
c = b++;
a = b;
But of course that's silly -- if all you want to do is increment b and have its new value assigned to a, that's what prefix ++ is for:
a = ++b;
Your expectations are completely unfounded.
Parentheses have no direct effect on the order of execution. They don't introduce sequence points into the expression and thus they don't force any side-effects to materialize earlier than they would've materialized without parentheses.
Moreover, by definition, the post-increment expression b++ evaluates to the original value of b. This requirement remains in place regardless of how many pairs of parentheses you add around b++. Even if parentheses somehow "forced" an instant increment, the language would still require (((b++))) to evaluate to the old value of b, meaning that a would still be guaranteed to receive the non-incremented value of b.
Parentheses only affect the syntactic grouping between operators and their operands. For example, in your original expression a = b++ one might ask whether the ++ applies to b alone or to the result of a = b. By adding the parentheses you simply explicitly forced the ++ operator to apply to (to group with) the b operand. However, according to the language syntax (and the operator precedence and associativity derived from it), ++ already applies to b, i.e. unary ++ has higher precedence than binary =. Your parentheses did not change anything; they only reiterated the grouping that was already there implicitly. Hence no change in the behavior.
Parentheses are entirely syntactic. They just group expressions and they are useful if you want to override the precedence or associativity of operators. For example, if you use parentheses here:
a = 2*(b+1);
you mean that the result of b+1 should be doubled, whereas if you omit the parentheses:
a = 2*b+1;
you mean that just b should be doubled and then the result should be incremented. The two syntax trees for these assignments are:
         =                      =
        / \                    / \
       a   *                  a   +
          / \                    / \
         2   +                  *   1
            / \                / \
           b   1              2   b

     a = 2*(b+1);          a = 2*b+1;
By using parentheses, you can therefore change the syntax tree that corresponds to your program and (of course) different syntax may correspond to different semantics.
On the other hand, in your program:
a1 = (b1++);
a2 = b2++;
parentheses are redundant because the assignment operator has lower precedence than the postfix increment (++). The two assignments are equivalent; in both cases, the corresponding syntax tree is the following:
      =
     / \
    a   ++ (postfix)
        |
        b
Now that we're done with the syntax, let's go to semantics. This statement means: evaluate b++ and assign the result to a. Evaluating b++ returns the current value of b (which is 10 in your program) and, as a side effect, increments b (which now becomes 11). The returned value (that is, 10) is assigned to a. This is what you observe, and this is the correct behaviour.
However, I expected the parentheses to have b1 incremented before its value is assigned to a1.
You aren't assigning b1 to a1: you're assigning the result of the postincrement expression.
Consider the following program, which prints the value of b when executing assignment:
#include <iostream>
using namespace std;

int b;

struct verbose
{
    int x;
    void operator=(int y) {
        cout << "b is " << b << " when operator= is executed" << endl;
        x = y;
    }
};

int main() {
    verbose a;
    b = 10;
    a = b++;
    cout << "a is " << a.x << endl;
    return 0;
}
I suspect this is undefined behavior, but nonetheless when using ideone.com I get the output shown below
b is 11 when operator= is executed
a is 10
OK, in a nutshell: b++ is a unary expression, and parentheses around it will never influence the precedence of the arithmetic, because the postfix ++ operator already has one of the highest (if not the highest) precedences in C. In a * (b + c), on the other hand, the (b + c) is a binary expression (not to be confused with the binary numbering system!), consisting of the variable b and its addend c. So it can be remembered like this: parentheses put around binary, ternary, quaternary... expressions will almost always influence precedence(*); parentheses around unary ones never will, because those operators bind "strongly enough" to "withstand" grouping by parentheses.
(*) As usual, there are some exceptions to the rule, if only a handful: e.g. -> (to access members of a structure through a pointer) binds very strongly despite being a binary operator. However, C beginners are likely to take quite a while before they write -> in their code, as it requires an understanding of both pointers and structures.
The parentheses will not change the post-increment behaviour itself.
a1=(b1++); //b1=10
It is equivalent to:
uint8_t mid_value = b1++; //10
a1 = (mid_value); //10
Placing ++ after a variable (known as post-increment) means that the increment is done after the variable's value is used in the expression.
Even enclosing the variable in parentheses doesn't change the fact that it is incremented after its value has been taken.
From learn.geekinterview.com:
In the postfix form, the increment or decrement takes place after the value is used in expression evaluation.
In prefix increment or decrement operation the increment or decrement takes place before the value is used in expression evaluation.
That's why a = (b++) and a = b++ are the same in terms of behavior.
In your case, if you want to increment b first, you should use pre-increment, ++b instead of b++ or (b++).
Change
a1 = (b1++);
to
a1 = ++b1; // b will be incremented before it is assigned to a.
To make it short:
with b++, b is incremented after the statement is done.
But even then, the result of b++ is the old value of b, and that is what is put into a.
Because of that, parentheses do not change the value here.

Fortran to C translation - mysterious division by one

I have encountered following code in FORTRAN77
(http://www-thphys.physics.ox.ac.uk/people/SubirSarkar/bbn/fastbbn.f):
      Update = 1.
      do k=1,12
         Update = Update + alpha(i,k,x,effN)*(R(k)-1.)/1.
      enddo
      Y = Y * Update
I am wondering about the division by 1.! What's the reason?
I have translated to C as follows:
double Update = 1.;
for ( int k = 0; k < 12; ++k )
Update += alpha(i+1, k+1, x, effN) * (R[k]-1.) /*/ 1.*/; // CHECK!
Y *= Update;
Is that correct?
remark: due to the different array indexing in C, there is a shift of +1 or -1 in the array index compared to the original code (I wanted to keep the same values as in the original code for the index definitions, and so for the indices passed as arguments to functions)
Thank you for your help!
Alain
The division by 1. has no effect that I can discern. Any type promotions that it might otherwise require are already required by the -1. in the dividend.
It is conceivable that on some specific platforms the division triggers some kind of desired behavior when the dividend has an exceptional value (i.e. an infinity or NaN), but that would be highly platform-specific.
It is also conceivable that the division is a holdover from some earlier version of the code where it actually had some effect.
Either way, your translation appears to be equivalent to the Fortran version, EXCEPT that nothing in what you presented justifies changing function alpha()'s first argument from i to i+1.

What is '^' operator used in C other than to check if two numbers are equal?

What are the purposes of the ^ operator in C other than to check if two numbers are equal? Also, why is it used for equality instead of == in the first place?
The ^ operator is the bitwise XOR operator, although I have never seen it used for checking equality.
x ^ y will evaluate to 0 exactly when x == y.
The XOR operator is used in cryptography (encrypting and decrypting text with a pseudo-random bit stream), in random number generators (like the Mersenne Twister), and in the XOR swap and other bit twiddling hacks:
int a = ...;
int b = ...;
// swap a and b
a ^= b;
b ^= a;
a ^= b;
(useful if you don't have space for another variable like on CPUs with few registers).
^ is the Bitwise XOR.
A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, primitive action directly supported by the processor, and is used to manipulate values for comparisons and calculations. (source: Bitwise Operation)
The XOR operation takes two operands and, for each bit position, yields 1 if exactly one of the corresponding operand bits is 1.
So a bitwise XOR of two numbers is the result of these bit-by-bit operations.
For example:
00000110 // A = 6
00001010 // B = 10
00001100 // A ^ B = 12
^ is the bitwise XOR operator in C. It can be used for toggling bits and to swap two numbers:
x ^= y, y ^= x, x ^= y;
and it can be used to find the max of two numbers:
int max(int x, int y)
{
return x ^ ((x ^ y) & -(x < y));
}
It can be used to selectively flip bits, e.g. to toggle the value of bit #3 in an integer you can say x = x ^ (1<<3) or, more compactly, x = x ^ 0x08 or even x ^= 8. (Although now that I look at it, the last form looks like some sort of obscene emoticon and should probably be avoided.)
It should never be used in a test for equality (in C), except in tricky code meant to test undergrads' understanding of the ^ operator. (In assembly, there may be speed advantages on some architectures.)
It's the exclusive or operator. It will do a bitwise exclusive or on the two arguments. If the numbers are equal, this will result in 0, while if they're not equal, the bits that differed between the two arguments will be set.
You generally wouldn't use it instead of ==; you would use it only when you need to know which bits are different.
Two real usage examples from an embedded system I worked on:
In a status-message generating function, one of the words was supposed to be a passthrough of an external device's status word. There was a disconnect between the device behavior and the message spec: one said bit0 meant 'error' while the other said it meant 'OK'.
statuswords[3] = devicestatus ^ 1; //invert B0
The 16-bit target processor was terribly slow to branch, so in an inner loop if (sign(A) != sign(B)) B = 0; was coded as:
B*=~(A^B)>>15;
which took 4 cycles rather than 8, and does the same thing: sets B to 0 iff the sign bits are different.
In many cases we might use '^' as a replacement for '==', but that doesn't directly give an equal/not-equal result. Instead, it compares the given variables bit by bit, produces a result bit for each position individually, and yields all of those result bits combined into one value.

L-Value required? C programming bit pattern

void InsertA(SET *A, int elem)
{
    if (isMember(*A, elem) == false)
    {
        *A = *A || 1 << elem; /* it says it's in this row */
    }
}

/* Error: Lvalue required in Function InsertA
   any thoughts on this guys? noob here */
In this statement :
*A = *A || 1<<elem; /* it says it's in this row */
We have these operators: *, =, ||, <<
Now look at the precedence table:

Precedence   Operator   Operation                   Associativity
---------    --------   -------------------------   -------------
     3          *       Indirection (dereference)   R to L
     7          <<      Bitwise left shift          L to R
    14          ||      Logical OR                  L to R
    16          =       Direct assignment           R to L
So let's see what happens:
1) The indirections are performed first. There are two of them, and it is important to understand that the two occurrences of *A are treated differently later, when the = operator is encountered.
2) A bitwise left shift is performed on 1.
3) A logical OR is performed between *A and the result of the bitwise shift; it evaluates to 0 or 1.
4) This 0-or-1 value is assigned to *A. Here *A is treated as an lvalue in the context of the = operator. We often think of a dereference like *A as an rvalue (a value to be used), but it is actually a valid lvalue that is implicitly converted to an rvalue when needed (that is when the value stored at the address pointed to by A is loaded). Otherwise *A is simply a container in memory, open to receiving values.
So your expression is well-formed, but it makes little sense to store a logical (0-or-1) value into *A. It will make more sense if you use bitwise OR instead of logical OR.
Lets do that:
We have a new entry in our precedence table
Precedence   Operator   Operation    Associativity
---------    --------   ----------   -------------
    12          |       Bitwise OR   L to R
Only change will occur in step 3 when a bitwise OR will be performed.
Let's have an example.
Say elem = 3 and A points to the array {1,2,3,3,4}.
1) The *A's are evaluated: they just compute the offsets needed for the processor's load and store instructions.
2) We get a constant bit pattern: 1 << 3 = 1000 (binary).
3) Now for | we need rvalues as both operands, so a load instruction fetches the value stored in memory. The first element is 1, so we get 0001 | 1000 = 1001.
4) A store instruction puts this bit pattern back into memory, so the array now looks like {9,2,3,3,4}.
A word on the verbosity: I think this can help future users who are trying to dissect a complicated expression using the language rules.
As noted in the comments, the code should compile.
But it looks like you want to set a bit in an int, so I suspect, you really want | instead of ||.
So you should do
*A |= 1<<elem;
|| is a logical operation, not a bitwise one. Have you tried changing it to |?
Whenever you do A = you have the potential to create a temporary A, same with *A. Be careful about using the = operator and look up how to disable copy constructors.
You may use the |= operator. A |= (1 << whatever)
EDIT: make sure you're not compiling your C code with a C++ compiler in C++ mode. GCC has a switch for C; it depends on your build environment.

Are If Thens faster than multiplication and assignment?

I have a quick question: suppose I have the following code, and it's repeated in a similar way 10 times, for example.
if blah then
number = number + 2^n
end if
Would it be faster to evaluate:
number = number + blah*2^n?
Which also brings up the question: can you multiply a boolean value by an integer? (Although I am not sure of the type returned by 2^n: is it an integer or unsigned, etc.) (I'm working in Ada, but let's try to generalize this, maybe?)
EDIT: Sorry, I should clarify: I am looking at 2 to the power of n. I put C in there because I was curious, for my own future learning, in case I ever run into this problem in C, and I think there are more C programmers than Ada programmers on these boards (I'm assuming, and you know what that means). However, my current problem is in the Ada language, and the question should be fairly language-independent (I hope).
There is no general answer to such a question; this depends a lot on your compiler and CPU. Modern CPUs have conditional move instructions, so anything is possible.
The only ways to know here are to inspect the assembler that is produced (usually -S as compiler option) and to measure.
if we are talking about C and blah is not within your control, then just do this:
if(blah) number += (1<<n);
There is really no boolean type in C, and there does not need to be: false is zero and true is non-zero. So you cannot assume that "non-zero" is 1, which is what your solution would need, nor can you assume that any particular bit in blah is set. For example:
number += (blah&1)<<n;
would not necessarily work either, because 0x2 or 0x4 or anything non-zero with bit zero clear is still considered true. Typically you will find 0xFFF...FFF (minus one, all ones) used as true, but you cannot rely on "typically".
Now, if you are in complete control over the value in blah, and keep it strictly to a 0 for false and 1 for true then you could do what you were asking about:
number += blah<<n;
And avoid the potential for a branch, extra cache line fill, etc.
Back to the generic case though, taking this generic solution:
unsigned int fun ( int blah, unsigned int n, unsigned int number )
{
    if(blah) number += (1<<n);
    return(number);
}
And compiling for the two most popular/used platforms. First, x86:
testl %edi, %edi
movl %edx, %eax
je .L2
movl $1, %edx
movl %esi, %ecx
sall %cl, %edx
addl %edx, %eax
.L2:
The above x86 code uses a conditional branch.
The ARM code below uses conditional execution: no branch, no pipeline flush, deterministic.
cmp r0,#0
movne r3,#1
addne r2,r2,r3,asl r1
mov r0,r2
bx lr
We could have saved the mov r0,r2 instruction by re-arranging the arguments in the function call, but that is academic; you wouldn't burn a function call on this normally.
EDIT:
As suggested:
unsigned int fun ( int blah, unsigned int n, unsigned int number )
{
    number += ((blah!=0)&1)<<n;
    return(number);
}
subs r0, r0, #0
movne r0, #1
add r0, r2, r0, asl r1
bx lr
Certainly cheaper, and the code looks good, but I wouldn't make assumptions that the result of blah != 0, which is zero or whatever the compiler has defined as true, always has the lsbit set. The compiler doesn't have to set that bit to generate working code. Perhaps the standard dictates the specific value for true. By re-arranging the function parameters, the if(blah) number += ... version will also compile to three single-clock instructions, and it makes no assumptions.
EDIT2:
Looking at what I understand to be the C99 standard:
The == (equal to) and != (not equal to) operators are analogous to the relational operators except for their lower precedence. Each of the operators yields 1 if the specified relation is true and 0 if it is false.
Which explains why the above edit works and why you get the movne r0,#1 and not some other random number.
The poster was asking the question with regard to C but also noted that Ada was the current language. From a language-independent perspective you should not assume "features" like the C feature above, and should use if(blah) number = number + (1<<n). But this was asked with a C tag, so the generically (processor-independent) fastest result for C is, I think, number += (blah!=0)<<n;. So Steven Wright's comment had it right, and he should get credit for it.
The poster's assumption is also basically correct: if you can get blah into 0-or-1 form, then using it in the math is faster in the sense that there is no branch. Getting it into that form without that being more expensive than a branch is the trick.
In Ada...
The original formulation:
if Blah then
    Number := Number + (2 ** N);
end if;
The alternative general formulation, assuming Blah is of type Boolean and Number and N are of suitable types:
Number := Number + (Boolean'pos(Blah) * (2 ** N));
(For N and Number of user-defined integer or floating point types, suitable definitions and type conversions may be required, the key point here is the Boolean'pos() construct, which Ada guarantees will give you a 0 or 1 for the predefined Boolean type.)
As for whether this is faster or not, I concur with @Cthutu:
I would keep it with the conditional. You shouldn't worry about low-level optimisation details at this point. Write the code that describes your algorithm best and trust your compiler.
I would keep it with the conditional. You shouldn't worry about low-level optimisation details at this point. Write the code that describes your algorithm best and trust your compiler. On some CPUs the multiplication is slower (e.g. ARM processors that have conditionals on each instruction). You could also use the ?: expression which optimises better under some compilers. For example:
number += (blah ? 2^n : 0);
If for some reason this little calculation is the bottleneck of your application after profiling then worry about low-level optimisation.
In C, regarding blah*2^n: Do you have any reason to believe that blah takes the values 0 and 1? The language only promises that 0 <-> FALSE and (everything else) <-> TRUE. C allows you to multiply a "boolean" temporary with another number, but the result is not defined except insofar as result=0 <=> the bool was false or the number was zero.
In Ada, regarding blah*2^n: The language does not define a multiplication operator on type Boolean. Thus blah cannot be a bool and be multiplied.
If your language allows multiplication between a boolean and a number, then yes, that is faster than a conditional. Conditionals require branching, which can invalidate the CPU's pipeline. Also if the branch is big enough, it can even cause a cache miss in the instructions, though that's unlikely in your small example.
Generally, and particularly when working with Ada, you should not worry about micro-optimization issues like this. Write your code so that it is clear to a reader, and only worry about performance when you have a performance problem and have tracked it down to that portion of the code.
Different CPUs have different needs, and they can be insanely complex. For example, in this case which is faster depends a lot on your CPU's pipeline setup, what's in cache at the time, and how its branch prediction unit works. Part of your compiler's job is to be an expert in those things, and it will do a better job than all but the very best assembly programmers. Certainly better than you (or me).
So you just worry about writing good code, and let the compiler worry about making efficient machine code out of it.
For the problem stated, there are indeed simple expressions in C that may produce efficient code.
The nth power of 2 can be computed with the << operator as 1 << n, provided n is less than the number of value bits in an int.
If blah is a boolean, namely an int with a value of 0 or 1, your code fragment can be written:
number += blah << n;
If blah is any scalar type that can be tested for its truth value as if (blah), the expression is slightly more elaborate:
number += !!blah << n;
which is equivalent to number += (blah != 0) << n;
The test is still present but, for modern architectures, the generated code will not have any jumps, as can be verified online using Godbolt's compiler explorer.
In either case, you can't avoid a branch (internally), so don't try!
In
number = number + blah*2^n
the full expression will always have to be evaluated, unless the compiler is smart enough to stop when blah is 0. If it is, you'll get a branch if blah is 0. If it's not, you always get an expensive multiply. In case blah is false, you'll also get the unnecessary add and assignment.
In the "if then" statement, the statement will only do the add and assignment when blah is true.
In short, the answer to your question in this case is "yes".
This code shows they perform similarly, but multiplication is usually slightly faster.
@Test
public void manual_time_trial()
{
    Date beforeIfElse = new Date();
    if_else_test();
    Date afterIfElse = new Date();
    long ifElseDifference = afterIfElse.getTime() - beforeIfElse.getTime();
    System.out.println("If-Else Diff: " + ifElseDifference);

    Date beforeMultiplication = new Date();
    multiplication_test();
    Date afterMultiplication = new Date();
    long multiplicationDifference = afterMultiplication.getTime() - beforeMultiplication.getTime();
    System.out.println("Mult Diff   : " + multiplicationDifference);
}

private static long loopFor = 100000000000L;
private static short x = 200;
private static short y = 195;
private static int z;

private static void if_else_test()
{
    short diff = (short) (y - x);
    for (long i = 0; i < loopFor; i++)
    {
        if (diff < 0)
        {
            z = -diff;
        }
        else
        {
            z = diff;
        }
    }
}

private static void multiplication_test()
{
    for (long i = 0; i < loopFor; i++)
    {
        short diff = (short) (y - x);
        z = diff * diff;
    }
}
