After a few unsuccessful searches, I still don't know whether there's a way to subtract two (or more) unsigned ints and detect whether the result of the subtraction is negative or not.
I've tried things like:
if(((int)x - (int)y) < 0)
But I don't think it's the best way.
Realize that what you intend by
unsigned int x;
unsigned int y;
if (x - y < 0)
is mathematically equivalent to:
unsigned int x;
unsigned int y;
if (y > x)
EDIT
There aren't many questions for which I can assert a definitive proof, but I can for this one. It's basic inequality algebra:
x - y < 0
add y to both sides:
x < y, which is the same as y > x.
You can do similarly with more variables, if you need:
x - y - z < 0 is the same as x < y + z, i.e. y + z > x
see chux's comment to his own answer, though, for a valid warning about integer overflow when dealing with multiple values.
Simply compare.
unsigned x, y, diff;
diff = x - y;
if (x < y) {
printf("Difference is negative and not representable as an unsigned.\n");
}
[Edit] OP changed from "2 unsigned int" to "multiple unsigned int".
I'm confident N*(N-1)/2 compares would be needed if a wider integer type is not available for subtracting N unsigned values.
With N > 2, simplest, if available, to use wider integers. Such as
long long diff;
// or
#include <stdint.h>
intmax_t diff;
Depending on your platform, these types may or may not be wider than unsigned; they are certainly not narrower.
Note: this issue similarly applies to multiple signed int too, though different comparisons are used there. But that is another question.
This is what I've found so far online,
int main(void)
{
long a = 12345;
int b = 10;
int remain = a - (a / b) * b;
printf("%i\n", remain);
}
First, I wonder how the formula works. Maybe I can't do math, but the priority of operations here seems a bit odd. If I run this code, the expected answer of 5 is printed. But I don't get how (a / b) * b doesn't cancel out to a, leading to a - a = 0.
Now, this only works for int and long; as soon as doubles are involved it doesn't work anymore. Can anyone tell me why? Is there an alternative to modulo that works for double?
Also, I'm not sure I understand how large a long can get. I found online that the upper limit is 2147483647, but when I input bigger numbers such as the one in 'a', the code runs without any issue, up to a certain point...
Thanks for your help I'm new to coding and trying to learn!
Given two finite double numbers x and y, with y not equal to zero, fmod(x, y) produces the remainder of x when divided by y. Specifically, it returns x − ny, where n is chosen so that x − ny has the same sign as x and is smaller in magnitude than y. (So, if x is positive, 0 ≤ fmod(x, y) < |y|, and, if x is negative, −|y| < fmod(x, y) ≤ 0.)
fmod is declared in <math.h>.
A properly implemented fmod returns an exact result; there is no floating-point error, since the specified result is always representable.
The C standard also specifies remquo, which returns the remainder and some low bits (at least three) of the quotient n, using a variation on the definition of the remainder. It also specifies variants of these functions for float and long double.
A naive implementation: limited range, and it adds additional floating-point imprecision (as it does some arithmetic).
#include <stdio.h>
double naivemod(double x)
{
return x - (long long)x;
}
int main(void)
{
printf("%.50f\n", naivemod(345345.567567756));
printf("%.50f\n", naivemod(.0));
printf("%.50f\n", naivemod(10.5));
printf("%.50f\n", naivemod(-10.0/3));
}
I'm looking for an alternative to the ceil() and floor() functions in C, because I'm not allowed to use them in a project.
What I have built so far is a tricky back-and-forth using the cast operator: I convert the floating-point value (in my case a double) into an int, and then, since I need the closest integers above and below the given floating-point value to also be double values, back to double:
#include <stdio.h>
int main(void) {
double original = 124.576;
double floorint;
double ceilint;
int f;
int c;
f = (int)original; //Truncation to closest floor integer value
c = f + 1;
floorint = (double)f;
ceilint = (double)c;
printf("Original Value: %lf, Floor Int: %lf , Ceil Int: %lf\n", original, floorint, ceilint);
}
Output:
Original Value: 124.576000, Floor Int: 124.000000 , Ceil Int: 125.000000
For this example normally I would not need the ceil and floor integer values of c and f to be converted back to double but I need them in double in my real program. Consider that as a requirement for the task.
Although the output gives the desired values and seems right so far, I'm still concerned whether this method is really appropriate or, to put it more clearly, whether it brings any bad behavior or issues into the program, or loses performance compared to other alternatives, if there are any.
Do you know a better alternative? And if so, why this one should be better?
Thank you very much.
Do you know a better alternative? And if so, why this one should be better?
OP's code fails when:
original is already a whole number.
original is a negative like -1.5. Truncation is not floor there.
original is just outside int range.
original is not-a-number.
Alternative construction
double my_ceil(double x)
Using the cast-to-some-integer-type trick is a problem when x is outside the integer range. So check first whether x is inside the range of a wide enough integer (one whose precision meets or exceeds double's). x values outside that range are already whole numbers. I recommend going for the widest integer type, (u)intmax_t.
Remember that a cast to an integer rounds toward 0; it is not a floor. Different handling is needed for negative versus positive x when the code is ceil() or floor(). OP's code missed this.
I'd avoid if (x >= INTMAX_MAX) {, as that involves (double)INTMAX_MAX, whose rounding, and hence precise value, is "chosen in an implementation-defined manner". Instead, I'd compare against INTMAX_MAX_P1: some_integer_MAX is a Mersenne number, while with 2's complement, ..._MIN is a negated power of 2.
#include <inttypes.h>
#define INTMAX_MAX_P1 ((INTMAX_MAX/2 + 1)*2.0)
double my_ceil(double x) {
if (x >= INTMAX_MAX_P1) {
return x;
}
if (x < INTMAX_MIN) {
return x;
}
intmax_t i = (intmax_t) x; // this rounds towards 0
if (i < 0 || x == i) return i; // negative x is already rounded up.
return i + 1.0;
}
As x may be a not-a-number, it is more useful to reverse the comparison, since any relational compare involving a NaN is false.
double my_ceil(double x) {
if (x >= INTMAX_MIN && x < INTMAX_MAX_P1) {
intmax_t i = (intmax_t) x; // this rounds towards 0
if (i < 0 || x == i) return i; // negative x is already rounded up.
return i + 1.0;
}
return x;
}
double my_floor(double x) {
if (x >= INTMAX_MIN && x < INTMAX_MAX_P1) {
intmax_t i = (intmax_t) x; // this rounds towards 0
if (i > 0 || x == i) return i; // positive x is already rounded down.
return i - 1.0;
}
return x;
}
You're missing an important step: you need to check whether the number is already integral. So for ceil, assuming non-negative numbers (generalization is trivial), use something like
#include <limits.h>
double ceil(double f){
if (f >= LLONG_MAX){
// f will be integral unless you have a really funky platform
return f;
} else {
long long i = f;
return 0.0 + i + (f != i); // to obviate potential long long overflow
}
}
Another missing piece of the puzzle, which is covered by my enclosing if, is to check whether f is within the bounds of a long long. On common platforms, if f were outside the bounds of a long long, it would be integral anyway.
Note that floor is trivial due to the fact that truncation to long long is always towards zero.
For example:
size_t x;
...
__builtin_uaddll_overflow(x,1,&x);
Would the above code correctly guard against integer overflow regardless of compiler implementation?
What I know so far:
This reference states that size_t is an unsigned type.
According to this discussion, typedef unsigned long size_t; may be used to define size_t.
Is there a function listed at this reference that will always be correct? Or will it necessarily depend on the specific implementation? If so, how could I programmatically choose the correct function?
size_t x;
...
__builtin_uaddll_overflow(x,1,&x);
Would the above code correctly guard against integer overflow regardless of compiler implementation?
No. __builtin_uaddll_overflow() is neither a C operator nor a C standard library function; it restricts code to select compilers, and its functionality is not specified by C.
Instead, simply compare against SIZE_MAX for a portable implementation. It is portable regardless of whether size_t is the same as unsigned, unsigned long, or some other unsigned type, even one wider or narrower than unsigned.
size_t x;
size_t y;
if (SIZE_MAX - x < y) Overflow();
else { size_t sum = x + y; /* use sum */ }
This works for the various unsigned types too - even narrow ones. Use the same some_unsigned_type throughout.
some_unsigned_type x;
some_unsigned_type y;
if (some_unsigned_type_MAX - x < y) Overflow();
else { some_unsigned_type sum = x + y; /* use sum */ }
When working with unsigned types at least as wide as unsigned, code could use the following.
some_at_least_unsigned_type x;
some_at_least_unsigned_type y;
some_at_least_unsigned_type sum = x + y; // overflow behavior well defined
if (sum < x) {
Overflow();
}
else {
// continue with non-overflowed sum
}
In the case of size_t, size_t is very commonly as wide as or wider than unsigned, although it is not specified to be so.
The above usually works with unsigned types narrower than unsigned too. Yet then x + y is done with int math, and unicorn platforms could overflow int math. 1u*x + y or 0u + x + y forces the math to always be done as unsigned, regardless of whether some_unsigned_type is narrower or wider than unsigned. A good compiler will emit optimized code that does not perform an actual multiplication by 1u.
some_unsigned_type x;
some_unsigned_type y;
some_unsigned_type sum = 1u*x + y; // overflow behavior well defined
if (sum < x) {
Overflow();
}
else {
// continue with non-overflowed sum
}
Or, in OP's + 1 example, to ensure unsigned math addition:
size_t x;
size_t sum = x + 1u; // overflow behavior well defined
if (sum == 0) {
Overflow();
}
else {
// continue with non-overflowed sum
}
This question already has answers here:
What is the most effective way for float and double comparison?
(34 answers)
Closed 6 years ago.
Please, before you assume I'm asking the same question for the umpteenth time, read it first and pay attention to it.
I'm working on a project with several functions that return double, and it may happen that some of them return the same value, which is a good thing in my project; if that's true, then I need a double comparison to see whether they are equal.
I know that doing an equality comparison if( x == y ) is not a smart thing, and we don't need to discuss why, but we can check with < or >, which is the point of this question.
Does the language (standard) guarantee that the comparisons < and > are 100% reliable?
If yes then, the following program can be used:
#include <stdio.h>
int main(void){
double x = 3.14;
double y = 3.14;
if( x < y || x > y){
/* Are not Equal */
}else{
/* Are Equal, foo() will be called here\n" */
foo(x, y);
}
}
Does foo(x, y); get executed? Because x and y should be equal here.
EDIT:
This question doesn't seek a way to compare two doubles; it only asks whether I should or shouldn't use < and > instead of ==.
I know that doing an equality comparison if( x == y ) is not a smart thing
This is simply not true. It may be the right thing to do or the wrong thing to do, depending on the particular problem.
if (x < y || x > y)
This has guaranteed exactly the same effect1 as
if (x != y)
and the opposite effect of
if (x == y)
When one is wrong, the other is wrong too. When one is right, the other is right as well. Writing an equality condition with < and > symbols instead of == or != doesn't suddenly make it smarter.
[1] Except maybe when one of the operands is a NaN.
some of them are the same ... and if is true then I need a double comparison to see if are equal.
OP is questioning two different ways of testing FP equality and wondering whether they are functionally alike.
Aside from maybe NaN, which is not well defined by C (but well defined by IEEE 754), both comparisons are alike, yet both fall short of canonical equivalence testing.
Consider this double code:
if (a==b) {
double ai = 1.0/a;
double bi = 1.0/b;
printf("%d\n", ai == bi);
} else {
printf("%d\n", 1);
}
Is the result always "1"? Below is an exception:
Consider a=0.0; b=-0.0. Both are equal to each other, but their inverses typically are not the same. One being positive infinity, the other: negative infinity.
The question comes down to how equal do you need? Are NaNs important? Using memcmp(&a, &b, sizeof a) is certainly a strong test, and maybe too strong, for on select systems FP numbers can have the same non-zero value yet different encodings. Whether these differences are important, or maybe just the exceptional case above, is for OP to decide.
For testing whether 2 different codes/functions produce the same binary64 result, consider rating their unit-in-the-last-place (ULP) difference. Something like the following: compare unsigned long long ULP_diff() against 0, 1 or 2, depending on your error tolerance.
// not highly portable
#include <assert.h>
unsigned long long ULP(double x) {
union {
double d;
unsigned long long ull;
} u;
assert(sizeof(double) == sizeof(unsigned long long));
u.d = x;
if (u.ull & 0x8000000000000000) {
u.ull ^= 0x8000000000000000;
return 0x8000000000000000 - u.ull;
}
return u.ull + 0x8000000000000000;
}
unsigned long long ULP_diff(double x, double y) {
unsigned long long ullx = ULP(x);
unsigned long long ully = ULP(y);
if (x > y) return ullx - ully;
return ully - ullx;
}
If you want fractional number equality, you either have to use an epsilon comparison (ie. check if the numbers are close enough to one another within a specific threshold), or use some fixed-point arithmetic very carefully to avoid rounding errors.
And yes, this same question has been asked more times than necessary:
Most effective way for float and double comparison
Floating point equality and tolerances
You need to do more reading into how comparisons work and, specifically, why floating-point equality doesn't work. It's not an issue with the equals operator itself, as you appear to think: for arithmetic types, when no special values like NaN are involved, !(x > y || y > x) will always be the same as x == y, and in fact most compilers will optimize x < y || x > y into x != y. Rather, it's because rounding error is a basic part of floating-point operation in the first place. x == y does indeed work for floating-point types, and you can use it freely. It becomes an issue after you do any arithmetic operation and then want to compare the results, because it's unpredictable what the rounding error will do.
So essentially, yes: compare equality all you want, as long as you aren't actually doing arithmetic with the doubles. If you are just using them as an index or something of the like, there shouldn't be a problem, as long as you know you assigned them the same value. Using Boolean identities won't save you from the basic behavior of floating-point numbers.
First of all, your conditional is a little off. To check for non-equality you want
( x < y || y < x)
not
(x < y || y > x )
which just checks the same thing twice, since y > x is the same condition as x < y.
Ignoring that small issue:
Yes, > and < should be 100% reliable, in that the combination is almost always the same as ==; the only difference is the behavior with NaN. But it doesn't fix your problem.
Here is a really contrived example.
#include <stdio.h>
void foo(double x, double y){
printf( "OK a:%f, b:%f\n",x,y);
}
void bar(double x, double y){
printf( "BAD a:%f, b:%f\n",x,y);
}
int main(void){
double x = 3.14;
double y = 3.14;
if( x < y || y < x){
/* Are not Equal */
bar(x, y);
}else{
/* Are Equal, foo() will be called here\n" */
foo(x, y);
}
for( int i = 0; i < 1000; i++) {
y = y + 0.1;
}
x = x + 100;
if( x < y || y < x){
bar(x, y);
}else{
/* Are Equal, foo() will be called here\n" */
foo(x, y);
}
}
Here is your output (hint: it's BAD):
$ ./a.exe
OK a:3.140000, b:3.140000
BAD a:103.140000, b:103.140000
The best practice I know for double equality is to check their closeness within some epsilon (note fabs from <math.h>, not the integer abs):
double eps = 0.00000000001;
if( fabs( x - y ) < eps ) {
printf("EQUAL!");
}
#include <stdio.h>
int main(void){
double x = 3.14;
double y = 3.14;
if( x < y || x > y){
/* Are not Equal */
}else{
/* Are Equal, foo() will be called here\n" */
printf("yes");
}
}
prints yes
Maybe it seems a slightly unusual question, but I would like to find a function able to transform a double (C number) into a long (C number). It's not necessary to preserve the double's information. The most important thing is:
double a,b;
long c,d;
c = f(a);
d = f(b);
This must hold:
if (a < b) then c < d, for all doubles a, b and the corresponding longs c, d
Thank you to all of you.
Your requirement is feasible if the following two conditions hold:
The compiler defines sizeof(double) the same as sizeof(long)
The hardware uses IEEE 754 double-precision binary floating-point format
While the 2nd condition holds on every widely-used platform, the 1st condition does not.
If both conditions do hold on your platform, then you can implement the function as follows:
long f(double x)
{
if (x > 0)
return double_to_long(x);
if (x < 0)
return -double_to_long(-x);
return 0;
}
You have several different ways to implement the conversion function:
#include <string.h>  /* memcpy */
long double_to_long(double x)
{
long y;
memcpy(&y,&x,sizeof(x));
return y;
}
long double_to_long(double x)
{
long y;
y = *(long*)&x;
return y;
}
long double_to_long(double x)
{
union
{
double x;
long y;
}
u;
u.x = x;
return u.y;
}
Please note that the second option is not recommended, because it breaks the strict-aliasing rule.
There are four basic transformations from floating-point to integer types:
floor - Rounds towards negative infinity, i.e. next lowest integer.
ceil[ing] - Rounds towards positive infinity, i.e. next highest integer.
trunc[ate] - Rounds towards zero, i.e. strips the floating-point portion and leaves the integer.
round - Rounds towards the nearest integer.
None of these transformations will give the behaviour you specify, but floor will permit the slightly weaker condition (a < b) implies (c <= d).
If a double value uses more space to represent than a long, then there is no mapping that can meet your initial constraint, thanks to the pigeonhole principle. Basically, since the double type can represent many more distinct values than a long type, there is no way to preserve the strict partial order of the < relationship, as multiple double values would be forced to map to the same long value.
See also:
Difference between Math.Floor() and Math.Truncate() (Stack Overflow)
Pigeonhole principle (Wikipedia)
Use frexp() to get most of the way there. It splits the number into exponent and significand (fraction).
Assume long is at least the same size as double; otherwise this is pointless (pigeonhole principle).
#include <assert.h>
#include <math.h>
long f(double x) {
assert(sizeof(long) >= sizeof(double));
#define EXPOWIDTH 11
#define FRACWIDTH 52
int ipart;
double fraction = frexp(fabs(x), &ipart);
long lg = ipart;
lg += (1L << EXPOWIDTH)/2;
if (lg < 0) lg = 0; // clamp exponents below the representable range
if (lg >= (1L << EXPOWIDTH)) lg = (1L << EXPOWIDTH) - 1;
lg <<= FRACWIDTH;
lg += (long) (fraction * (1L << FRACWIDTH));
if (x < 0) {
lg = -lg;
}
return lg;
}
Notes:
The proper value for EXPOWIDTH depends on DBL_MAX_EXP and DBL_MIN_EXP and the particulars of the double type.
This solution maps some distinct double values near the extremes of double to the same long. I will look and test more later.
Otherwise as commented above: overlay the two types.
As long is often 2's complement while double is laid out in a sign-magnitude fashion, extra work is needed when the double is negative. Also watch out for -0.0.
#include <assert.h>
#include <limits.h>

long f(double x) {
assert(sizeof x == sizeof (long));
union {
double d;
long lg;
} u = { x + 0.0 }; // + 0.0 folds -0.0 into +0.0 (multiplying by 1.0 would not)
// If 2's complement - which is the common situation
if (u.lg < 0) {
u.lg = LONG_MIN - u.lg; // LONG_MIN - lg stays in range, avoiding signed overflow
}
return u.lg;
}