I have encountered the following code in FORTRAN 77
(http://www-thphys.physics.ox.ac.uk/people/SubirSarkar/bbn/fastbbn.f):
Update = 1.
do k=1,12
Update = Update + alpha(i,k,x,effN)*(R(k)-1.)/1.
enddo
Y = Y * Update
I am wondering about the division by 1. at the end of the line. What's the reason for it?
I have translated it to C as follows:
double Update = 1.;
for ( int k = 0; k < 12; ++k )
Update += alpha(i+1, k+1, x, effN) * (R[k]-1.) /*/ 1.*/; // CHECK!
Y *= Update;
Is that correct?
Remark: because C arrays are indexed from zero, there is a shift of +1 or -1 in the array indices compared to the original code (I wanted the index values passed as arguments to the function to stay the same as in the original code).
Thank you for your help!
Alain
The division by 1. has no effect that I can discern. Any type promotions that it might otherwise require are already required by the -1. in the dividend.
It is conceivable that on some specific platforms the division triggers some kind of desired behavior when the dividend has an exceptional value (i.e. an infinity or NaN), but that would be highly platform-specific.
It is also conceivable that the division is a holdover from some earlier version of the code where it actually had some effect.
Either way, your translation appears to be equivalent to the Fortran version, EXCEPT that nothing in what you presented justifies changing function alpha()'s first argument from i to i+1.
What are the parts of an XOR called?
A xor B = C
What are A and B called?
For Division it is:
A / B = C
A = dividend, B = divisor, C = quotient
For sums (and XOR is symmetric as the sum is) it is
A + B = C
summand for A and B, and sum for C
But I am missing a term specific to XOR. What is it called, and is there even one?
Sure, you could go with operand, parameter, input, or [...], but those are very generic; I would like a non-generic term.
As Soonts mentioned, the general name for thing operation thing would be an operand.
XOR being a logical or boolean operation, you could call it a logical / boolean operand. Often, they are also called input. Depending on usage, set or statement also works.
(As a friend of mine suggested)
You could call it XORant so the full version would be
Xorant1 ExclusiveOred Xorant2 ResultsIn TheXORed
But I have no idea whether this is commonly used.
In the following code, x and y are int32_t variables. In this simplified example, they always differ by 1. When they span the int32_t overflow boundary (0x7FFFFFFF, the maximum two's-complement 32-bit positive number, to 0x80000000, the largest-magnitude negative number), subtracting them seems to give different results when done inside the conditional of the if statement (Method 1) than when the result is first stored in a temporary variable (Method 2). Why don't they give the same result?
I would think that subtracting two int32_t variables would yield a result of type int32_t, so using a temporary of that type shouldn't change anything. I tried explicitly typecasting inside the if statement conditional; that didn't change anything. FWIW, Method 2 gives the result I would expect.
The code:
int32_t x = (0x80000000 - 3);
int i;
for( i = 0; i < 5; ++i )
{
int32_t y = x + 1; // this may cause rollover from 0x7fffffff (positive) to 0x80000000 (negative)
UARTprintf("\n" "x = 0x%08X, y = 0x%08X", x, y );
if( ( y - x ) >= 1 ) // Method 1
UARTprintf(" - true ");
else
UARTprintf(" - FALSE");
int32_t z = ( y - x ); // Method 2
if( ( z ) >= 1 )
UARTprintf(" - true ");
else
UARTprintf(" - false");
++x;
}
Output:
x = 0x7ffffffd, y = 0x7ffffffe - true - true
x = 0x7ffffffe, y = 0x7fffffff - true - true
x = 0x7fffffff, y = 0x80000000 - FALSE - true
x = 0x80000000, y = 0x80000001 - true - true
x = 0x80000001, y = 0x80000002 - true - true
In my actual application (not this simplified example), y is incremented by a hardware timer and x is a record of when some code was last executed. The test is intended to make some code run at intervals. Considering that y represents time and the application may run for a very long time before it is restarted, just not letting it overflow isn't an option.
Noting, as several of you did, that the standard does not define the behavior when signed integer overflow occurs tells me that I don't have a right to complain that I can't count on it working the way I want it to, but it doesn't give me a solution I can count on. Even using a temporary variable, which seems to work with my current compiler version and settings, might quit working when one of those things changes. Do I have any trustworthy options short of resorting to assembly code?
Given that signed integer overflow leads to undefined behaviour, you had better not try to explain it.
That is because your assumptions are based on "common sense", not the standard.
Otherwise, check the assembly and try to debug it; but again, the outcome would not generalize: you won't be able to apply the new knowledge to some other case (though it would no doubt be fun to do).
The question I didn't know enough to ask originally is, "How can I avoid undefined behavior when doing subtraction on integers that might have overflowed?" Please correct me if I am wrong, but it appears that the answer to that question would be "use unsigned rather than signed integers" because the results are well defined (per C11 6.2.5/9) "a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
In this case, that is enough to come up with a working solution, because the elapsed time will always be zero or a small positive number, so the result of the subtraction will always be non-negative. The result of the subtraction can therefore be kept as an unsigned number and compared (">= 1"), or converted back to a signed int for comparison (C11 6.3.1.3: "When a value with integer type is converted to another integer type ... if the value can be represented by the new type, it is unchanged."). This code works, and I believe does not rely on any undefined behavior: if( (int32_t)((uint32_t)y - (uint32_t)x) >= 1 )
In the more general case, however, converting to unsigned to do the subtraction, then converting back to a signed result (which might be negative) is not well defined. C11 6.3.1.3 says regarding converting to another integer type that if "the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised." So I can still imagine a scenario in which assembly code would be needed to achieve well-defined results; this just isn't one of them.
Here is some example code
int test = 1234;
int modValue, andValue;
modValue = test % -8;
andValue = test & 7;
printf("Mod Value = %d And Value = %d\n", modValue, andValue);
int counter = 0;
for(counter = 0; counter < 10000; counter++) {
modValue = counter % -8;
andValue = counter & 7;
if(modValue != andValue) {
printf("diff found at %d\n", counter);
}
}
Ideone link: http://ideone.com/g79yQm
Negative numbers give different results, but apart from that, do they both always function exactly the same for all positive values?
Even for negative numbers, the two results seem to be offset from each other by just one cyclic round.
For those wondering: this is similar to the question "Why is modulo operator necessary?", but I don't subtract 1.
That question uses negative values with magnitudes greater than the modulus, and yes, the equivalence only holds for positive values.
I found this because the IDA Pro Hex-Rays decompiler seems to generate sometimes a modulus (%) and other times an AND (&) operator for what I believe is identical source code in different functions. I guess that comes from the optimizer.
Since the project I decompiled shouldn't even use negative values, I wonder what the original source code was; I doubt anyone uses modulus with negative values, as it seems weird.
Also, cyclic operations, as far as I know, always use a modulus, yet in this case the author must have used Val & 7, since Val % 7 gives a completely different result.
I forgot to say: the original code most likely used abs(Val) & 7, since anything involving modulus with negative values seems wrong, and I don't think anyone would deliberately use modulus with negative values; it looks unappealing to the eyes. So I guess that's the best reconstruction.
The optimization of x % N to x & (N-1) only works if N is a power of two.
You also need to know that x is positive, otherwise there is a little difference between the bitmask operation and the remainder operation. The bitmask operation produces the remainder for the Euclidean division, which is always positive, whereas % produces the remainder for C's division /, which rounds towards zero and produces a remainder that is sometimes negative.
In C89, the sign of the result of % was implementation-defined for negative operands; since C99, / truncates toward zero, so a % b takes the sign of the dividend a. Overflow in signed / and % remains undefined behavior.
Beginner here.
Why is this an endless loop ?
for (p = 0; p < 5; p += 0.5)
{
printf("p=%2.2f\n",p);
}
You see an endless loop because your p is of an integral type (e.g. an int). No matter how many times you add 0.5 to an int, it would remain 0, because int truncates double/fp values assigned to it. In other words, it is equivalent to a loop where you add zero on each step.
If you make p a float or a double, your problem would go away.
EDIT (Suggested by Oli Charlesworth's comment)
It is worth noting that using floats and doubles to control loops is discouraged, because the results are not always as clean as in your example. Changing the step from 0.5 (which is exactly representable as a power of 2) to 0.1 (which has no exact binary representation) would change the results that you see in a rather unexpected way.
If you need to iterate by a non-integer step, you should consider using this simple pattern:
// Loop is controlled by an integer counter
for (int i = 0 ; i != 10 ; i++) {
// FP value is calculated by multiplying the counter by the intended step:
double p = i * 0.5;
// p is between 0 and 4.5, inclusive
}
I think it depends on how p is declared. If it's an integer type, p will always be 0 (because the result of 0 + 0.5 will be truncated to 0 every time) so the for will never stop.
It is a type-conversion problem: a float/double value loses its fractional part when assigned to an integer type.
P.S. It is a really bad idea to use float/double in a loop-condition test; not all decimal values can be represented exactly as floating-point numbers.
If p is a float or a double, there's nothing wrong with the code, and the loop will terminate.
If p is integer, the behaviour of the code is undefined since the format specifier in printf() is wrong.
When you add a double constant to an integer variable, the sum is computed in double (0 + 0.5 = 0.5), but assigning the result back to the int truncates the fractional part. So p effectively gains 0 on every iteration.
In an if statement I want to include a range, e.g.:
if(10 < a < 0)
but by doing so, I get a warning "Pointless comparison". However, this works fine without any warning:
if(a<10 && a>0)
Is the first case possible to implement in C?
Note that the original version if(10 < a < 0) is perfectly legal. It just doesn't do what you might (reasonably) think it does. You're fortunate that the compiler recognized it as a probable mistake and warned you about it.
The < operator associates left-to-right, just like the + operator. So just as a + b + c really means (a + b) + c, a < b < c really means (a < b) < c. The < operator yields an int value of 0 if the condition is false, 1 if it's true. So you're either testing whether 0 is less than c, or whether 1 is less than c.
In the unlikely case that that's really what you want to do, adding parentheses will probably silence the warning. It will also reassure anyone reading your code later that you know what you're doing, so they don't "fix" it. (Again, this applies only in the unlikely event that you really want (a < b) < c.)
The way to check whether a is less than b and b is less than c is:
a < b && b < c
(There are languages, including Python, where a < b < c means a<b && b<c, as it commonly does in mathematics. C just doesn't happen to be one of those languages.)
It's not possible, you have to split the check as you did in case 2.
No it is not possible.
You have to use the second way by splitting the two conditional checks.
The first does one comparison, then compares the result of the first to the second value. In this case, the operators group left to right, so it's equivalent to (10<a) < 0. The warning it's giving you is really because < will always yield 0 or 1. The warning is telling you that the result of the first comparison can never be less than 0, so the second comparison will always yield false.
Even though the compiler won't complain about it, the second form isn't much of an improvement if you keep the same bounds: how can a number be simultaneously less than 0 but greater than 10? Ideally, the compiler would warn you that such a condition is always false. Presumably you want 0 < a < 10, i.e., a > 0 && a < 10.
You can get the effect of the second using only a single comparison: if ((unsigned)a < 10) will be true only if the number is in the range 0..9. A range comparison can normally be reduced to a single comparison with code like:
if ((unsigned)(x-range_start)<(range_end-range_start))
// in range
else
// out of range.
At one time this was a staple of decent assembly language programming. I doubt many people do it any more though (I certainly don't as a rule).
As stated above, you have to split the check. Think about it from the compiler's point of view, which evaluates one operator at a time: 10 < a yields true or false (1 or 0), and then it evaluates that true/false result < 0, which doesn't make sense as a range test.
No, this is not a valid way to express a range in an if statement. The condition must be a single valid expression, possibly combining comparisons with logical operators, and the body is executed only when the expression in the parentheses evaluates to true (any non-zero value).