Alright, I have the Euclidean division written like this: a = b * q + r
I know that to get r, I can use the modulo operator: a % b
But how do I get q? Plain / doesn't seem to work.
Using Euclidean division
If a = 7 and b = 3, then q = 2 and r = 1, since 7 = 3 × 2 + 1.
If a = 7 and b = −3, then q = −2 and r = 1, since 7 = −3 × (−2) + 1.
If a = −7 and b = 3, then q = −3 and r = 2, since −7 = 3 × (−3) + 2.
If a = −7 and b = −3, then q = 3 and r = 2, since −7 = −3 × 3 + 2.
Likely a simpler solution is available.
#include <stdio.h>
#include <stdlib.h>

int Ediv(int a, int b) {
  printf("a:%2d / b:%2d = ", a, b);
  int r = a % b;
  if (r < 0) r += abs(b);
  printf("r:%2d ", r);
  return (a - r) / b;
}
void Etest() {
  printf("q:%2d\n", Ediv(7, 3));
  printf("q:%2d\n", Ediv(7, -3));
  printf("q:%2d\n", Ediv(-7, 3));
  printf("q:%2d\n", Ediv(-7, -3));
}
a: 7 / b: 3 = r: 1 q: 2
a: 7 / b:-3 = r: 1 q:-2
a:-7 / b: 3 = r: 2 q:-3
a:-7 / b:-3 = r: 2 q: 3
OP asserts "I know that to get r, I can do the modulo: a % b". This fails when a is negative.
Further, % is the "remainder operator". In C, the Euclidean remainder and the result of % differ when a is negative; see the Wikipedia section "Remainder calculation for the modulo operation".
If a and b are integers, just use integer division, /.
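A small check (my addition) of what plain / and % actually give in C: both truncate toward zero, so they only coincide with the Euclidean q and r when the remainder comes out non-negative.

#include <stdio.h>

int main(void) {
    printf("%d %d\n",  7 / 3,  7 % 3);   /* prints "2 1"  : matches Euclidean q=2, r=1   */
    printf("%d %d\n", -7 / 3, -7 % 3);   /* prints "-2 -1": Euclidean would be q=-3, r=2 */
    return 0;
}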
chux's answer is a good one, as it's the one that correctly understands the question (unlike the accepted one). Daan Leijen's Division and Modulus for Computer Scientists also provides an algorithm with proof:
/* Euclidean quotient */
long quoE(long numer, long denom) {
    /* The C99 and C++11 languages define both of these as truncating. */
    long q = numer / denom;
    long r = numer % denom;
    if (r < 0) {
        if (denom > 0)
            q = q - 1; // r = r + denom;
        else {
            q = q + 1; // r = r - denom;
        }
    }
    return q;
}
This one is trivially equivalent to chux's algorithm. It looks a bit more verbose, but it might take one fewer div instruction on x86, as the instruction performs a "divmod" -- returning q and r at the same time.
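For a quick sanity check (my addition; it assumes the quoE above is in scope), the four examples from the question give the expected Euclidean quotients:

#include <stdio.h>

int main(void) {
    /* Expected Euclidean quotients: 2, -2, -3, 3 */
    printf("%ld %ld %ld %ld\n",
           quoE(7, 3), quoE(7, -3), quoE(-7, 3), quoE(-7, -3));
    return 0;
}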
Take an example:
7 = 3*2 + 1
2 (q) can be obtained by 7/3, i.e., a/b = q.
The formula for generating the decryption key for the RSA algorithm is ed ≡ 1 (mod T), where T is computed as (p-1)(q-1) and p and q are two distinct prime numbers. e is the encryption key. So, as per the formula, if I want to implement ed ≡ 1 (mod T) in a C program, the code block should be
d = (1 % T) / e;
However, I found that most coding websites use d * e = 1 + k * T to generate the decryption key.
I cannot understand where they get k from.
The k is just the quotient that appears when the congruence is written as an equation: d·e ≡ 1 (mod T) means d·e − 1 is a multiple of T, i.e., d·e = 1 + k·T for some integer k. You cannot obtain d by ordinary division; d is the modular multiplicative inverse of e, which can be found with the extended Euclidean algorithm.
Here is a simple demonstration implementation. Cryptographic implementations generally need extended-precision arithmetic.
#include <stdio.h>
#include <stdlib.h>
// Return the multiplicative inverse of e modulo t.
static int invert(int e, int t)
{
/* We will work with equations in the form:
x = d*e + s*t
where e and t are fixed, and x, d, and s change as we go along.
Initially, we know these two equations:
t = 0*e + 1*t.
e = 1*e + 0*t.
We will use the Euclidean algorithm to reduce the left side to the
greatest common divisor of t and e. If they are relatively prime,
this eventually produces an equation with 1 on the left side, giving
us:
1 = d*e + s*t
and then d is the multiplicative inverse of e since d*e is congruent to
1 modulo t.
*/
// Now we start with our first values of x, d, and s:
int x0 = t, d0 = 0, s0 = 1;
int x1 = e, d1 = 1, s1 = 0;
// Then we iteratively reduce the equations.
while (1)
{
/* Find the largest q such that we can subtract q*x1 from x0 without
getting a negative result.
*/
int q = x0/x1;
/* Subtract the equation x1 = d1*e + s1*t from the equation
x0 = d0*e + s0*t.
*/
int x2 = x0 - q*x1;
int d2 = d0 - q*d1;
int s2 = s0 - q*s1;
/* If we found the inverse, return it.
We could return d2 directly; it is mathematically correct.
However, if it is negative, the positive equivalent might be
preferred, so we return that.
*/
if (x2 == 1)
return d2 < 0 ? d2+t : d2;
if (x2 == 0)
{
fprintf(stderr, "Error, %d is not relatively prime to %d.\n", e, t);
exit(EXIT_FAILURE);
}
/* Slide equation 1 to equation 0 and equation 2 to equation 1 so we
can work on a new one.
*/
x0 = x1; x1 = x2;
d0 = d1; d1 = d2;
s0 = s1; s1 = s2;
}
}
int main(void)
{
int e = 3, t = 3127, d = invert(e, t);
printf("The multiplicative inverse of %d modulo %d is %d.\n", e, t, d);
}
Output:
The multiplicative inverse of 3 modulo 3127 is 2085.
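As a quick check of the result (my own verification): 3 × 2085 = 6255 = 2 × 3127 + 1, so d·e ≡ 1 (mod T) holds, and in the d·e = 1 + k·T form from the question, k is simply the quotient (here k = 2).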
static inline bool is_divisible(uint32_t n, uint64_t M) {
return n * M <= M - 1;
}
static uint64_t M3 = UINT64_C(0xFFFFFFFFFFFFFFFF) / 3 + 1;
....
uint8_t div3 = is_divisible(17, M3);
As mentioned in the title, this function can determine whether n is divisible by 3.
The only thing I've figured out is that M is the same as ceil((1<<64)/d), where d is 3.
Can anyone explain why is_divisible works? Thanks!
Divide n by 3 to find quotient q and remainder r, allowing us to represent n as n = 3q + r, where 0 ≤ r < 3.
Intuitively, multiplying 3q + r by (2⁶⁴−1)/3 + 1 causes the q portion to vanish because it wraps modulo 2⁶⁴, leaving the remainder portion in one of three segments of [0, 2⁶⁴) depending on the value of r. The comparison with M3 determines whether it is in the first segment, meaning r is zero. A proof follows.
Note that M3 is (2⁶⁴−1)/3 + 1.
Then n•M3 = (3q + r)•((2⁶⁴−1)/3 + 1) = q•(2⁶⁴−1) + 3q + r•((2⁶⁴−1)/3 + 1) = q•2⁶⁴ + 2q + r•((2⁶⁴−1)/3 + 1).
When this is evaluated with uint64_t arithmetic, the q•2⁶⁴ term vanishes due to wrapping modulo 2⁶⁴, so the computed result is
2q + r•((2⁶⁴−1)/3 + 1).
Suppose n is a multiple of 3. Then r = 0. Since n is a uint32_t value, q < 2³², so 2q + r•((2⁶⁴−1)/3 + 1) = 2q < 2³³ < M3.
Suppose n is not a multiple of 3. Then r = 1 or r = 2. If r = 1, then r•((2⁶⁴−1)/3 + 1) = (2⁶⁴−1)/3 + 1 > M3−1. And, of course, if r = 2, r•((2⁶⁴−1)/3 + 1) is even greater and also exceeds M3−1. However, we need to be concerned about wrapping in uint64_t arithmetic. Again, since q < 2³², we have, for r = 2, 2q + r•((2⁶⁴−1)/3 + 1) < 2•2³² + 2•((2⁶⁴−1)/3 + 1) = 2³³ + (2/3)•2⁶⁴ − 2/3 + 2 = 2⁶⁴ − (1/3)•2⁶⁴ + 2³³ + 4/3, which is clearly less than 2⁶⁴, so no wrapping occurs.
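To see the argument in action, here is a small test harness (my own sketch; it simply repeats the snippet from the question and compares it against the obvious n % 3 check):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static inline bool is_divisible(uint32_t n, uint64_t M) {
    return n * M <= M - 1;
}

static uint64_t M3 = UINT64_C(0xFFFFFFFFFFFFFFFF) / 3 + 1;

int main(void) {
    for (uint32_t n = 0; n < 20; n++)
        printf("%2u: trick=%d, n%%3==0 -> %d\n",
               (unsigned)n, (int)is_divisible(n, M3), n % 3 == 0);
    return 0;
}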
So, I have to do an assignment for college, and it consists of creating an algorithm.
The algorithm must find couples of numbers which satisfy a certain condition, which is: the sum from 1 to n (exclusive) gives the same result as the sum from n+1 to m (inclusive).
In the end, the algorithm must give at least 15 couples.
The first couple is 6 and 8, because from 1 to n = 6 (exclusive) the sum is 1+2+3+4+5 = 15, and from n+1 to m = 8 the sum is 7+8 = 15.
The algorithm I created is the following one:
#include <stdio.h>

int main() {
int count = 0;
unsigned int before = 0;
unsigned int after = 0;
unsigned int n = 1;
unsigned int m = 0;
do {
before += n - 1;
after = n + 1;
for (m = after + 1; after < before; m++) {
after += m;
}
if (before == after) {
printf("%d\t%d\n", n, (m - 1));
count++;
}
n++;
} while (count < 15);
}
This actually works, but some of the output is not correct, and it's also crap in terms of complexity; since I am studying Complexity of Algorithms, it would be good to find an algorithm better than this one.
I also tried doing it in Java, but using int is not good for this problem, and using long it takes hours and hours to compute.
The numbers I have found so far:
6 and 8
35 and 49
204 and 288
1189 and 1681
6930 and 9800
40391 and 57121
The following ones may be incorrect:
100469 and 107694
115619 and 134705
121501 and 144689
740802 and 745928
1250970 and 1251592
2096128 and 2097152
2100223 and 2101246
4196352 and 8388608
18912301 and 18912497
Your results are incorrect beyond the first 6: the range of type unsigned int is insufficient to store the sums. You should use type unsigned long long for before and after.
Furthermore, your algorithm becomes very slow for large values because you recompute after from scratch for each new value of before, with a time complexity of O(N²). You can keep 2 running sums in parallel and reduce the complexity to quasi-linear.
Last but not least, there are only 12 solutions below UINT32_MAX, so type unsigned long long, which is guaranteed to have at least 64 value bits, is required for n and m as well. To avoid incorrect results, overflow should be tested when updating after.
Further tests show that the sums after and before exceed 64 bits for values of m around 8589934591. A solution is to subtract 2⁶² from both before and after when they reach 2⁶³. With this modification, the program can keep searching for larger values of n and m well beyond 32 bits.
Here is an improved version:
#include <stdio.h>
int main() {
int count = 0;
unsigned long long n = 1;
unsigned long long m = 2;
unsigned long long before = 0;
unsigned long long after = 2;
for (;;) {
if (before < after) {
before += n;
n++;
after -= n;
} else {
m++;
/* reduce values to prevent overflow */
if (after > 0x8000000000000000) {
after -= 0x4000000000000000;
before -= 0x4000000000000000;
}
after += m;
while (before > after) {
after += n;
n--;
before -= n;
}
}
if (before == after) {
printf("%llu\t%llu\n", n, m);
count++;
if (count == 15)
break;
}
}
printf("%d solutions up to %llu\n", count, m);
return 0;
}
Output (running time 30 minutes):
6 8
35 49
204 288
1189 1681
6930 9800
40391 57121
235416 332928
1372105 1940449
7997214 11309768
46611179 65918161
271669860 384199200
1583407981 2239277041
9228778026 13051463048
53789260175 76069501249
313506783024 443365544448
15 solutions up to 443365544448
Your initial brute force program as posted above generates plenty of data for you to analyze. The people in the question's comments recommended the "sum of an arithmetic series" formula instead of your repeated addition, but the fact is that it would still run slowly. It's surely an improvement, but it's still not good enough if you want something usable.
Believe it or not, there are some patterns to the values of n and m, which will require some math to explain. I'll be using the functions n(i), m(i), and d(i) = m(i) - n(i) to represent the values of n, m, and the difference between them, respectively, during iteration i.
You found the first six couples:
i n(i) m(i) d(i)
== ====== ====== ======
1 6 8 2
2 35 49 14
3 204 288 84
4 1189 1681 492
5 6930 9800 2870
6 40391 57121 16730
Notice that 6+8 = 14, 35+49 = 84, 204+288 = 492, etc. It so happens that, in the general case, d(i+1) = m(i) + n(i) (e.g. d(2) = m(1) + n(1) = 6 + 8 = 14).
So now we know the following:
d(7)
= n(6) + m(6)
= 40391 + 57121
= 97512
# m(i) = n(i) + d(i)
m(7) = n(7) + 97512
Another way of looking at it since m(i) = n(i) + d(i) is d(i+1) = d(i) + 2n(i):
d(7)
= n(6) + d(6) + n(6)
= d(6) + 2n(6)
= 16730 + 2(40391)
= 97512
d(i) also happens to be useful for computing n(i+1):
n(i+1) = 2d(i+1) + n(i) + 1
n(7) = 2d(7) + n(6) + 1
= 2(97512) + 40391 + 1
= 235416
From there, it's easy to determine things:
 i    n(i)    m(i)    d(i)
==  ======  ======  ======
 1       6       8       2
 2      35      49      14
 3     204     288      84
 4    1189    1681     492
 5    6930    9800    2870
 6   40391   57121   16730
 7  235416  332928   97512
But what about a starting condition? We need a way to find 6 in the first place, and that starting case can be computed by working backward and using substitution:
n(1) = 2d(1) + n(0) + 1
6 = 2(2) + n(0) + 1
5 = 4 + n(0)
1 = n(0)
d(1) = d(0) + 2n(0)
2 = d(0) + 2(1)
2 = d(0) + 2
0 = d(0)
m(0) = n(0) + d(0)
= 1 + 0
= 1
Note that n(0) = m(0) (1 = 1), but it is not a couple. For a pair of numbers to be a couple, the numbers must not be the same.
All that's left is to compute the sum. Since the integers from 1 to n-1 (i.e. 1 to n, excluding n) form an arithmetic series and the series starts at 1, you can use the formula
n(n - 1)
S(n) = --------
2
Below is a program that uses all of this information. You'll notice I'm using a multiplication function mul in place of the multiplication operator. The function's result is used to end the loop prematurely when an unsigned overflow (i.e. wraparound) is encountered. There are probably better ways to detect the wraparound behavior, and the algorithm could be better designed, but it works.
#include <errno.h>
#include <limits.h>
#include <stdio.h>
typedef unsigned long long uval_t;
/*
* Uses a version of the "FOIL method" to multiply two numbers.
* If overflow occurs, 0 is returned, and errno is ERANGE.
* Otherwise, no overflow occurs, and the product m*n is returned.
*/
uval_t mul(uval_t m, uval_t n)
{
/*
* Shift amount is half the number of bits in uval_t.
* This allows us to work with the upper and lower halves.
* If the upper half of F is not zero, overflow occurs and zero is returned.
* If the upper half of (O+I << half_shift) + L is not zero,
* overflow occurs and zero is returned.
* Otherwise, the returned value is the mathematically accurate result of m*n.
*/
#define half_shift ((sizeof (uval_t) * CHAR_BIT) >> 1)
#define rsh(v) ((v) >> half_shift)
#define lsh(v) ((v) << half_shift)
uval_t a[2], b[2];
uval_t f, o, i, l;
a[0] = rsh(m);
a[1] = m & ~lsh(a[0]);
b[0] = rsh(n);
b[1] = n & ~lsh(b[0]);
f = a[0] * b[0];
if (f != 0)
{
errno = ERANGE;
return 0;
}
o = a[0] * b[1];
i = a[1] * b[0];
l = a[1] * b[1];
if (rsh(o+i + rsh(l)) != 0)
{
errno = ERANGE;
return 0;
}
return lsh(o+i) + l;
}
int main(void)
{
int i;
uval_t n = 1, d = 0;
uval_t sum = 0;
#define MAX 15
for (i = 1; i <= MAX; i++)
{
d += n * 2;
n += d * 2 + 1;
sum = mul(n, n - 1) / 2;
if (sum == 0)
break;
printf("%2d\t%20llu\t%20llu\t%20llu\n", i, n, n+d, sum);
}
return 0;
}
This yields 12 lines of output, the last being this one:
12 1583407981 2239277041 1253590416355544190
Of course, if you don't care about the sums, then you can just avoid computing them entirely, and you can find all 15 couples just fine without even needing to check for overflow of a 64-bit type.
To go further with the sums, you have a few options, in order of most to least recommended:
use a "bignum" library such as GNU MP, which is similar to Java's java.math.BigInteger class and which has its own printf-like function for displaying values; if you're on Linux, it may already be available
use your compiler's 128-bit type, assuming it has one available, and create your own printing function for it if necessary
create your own "big integer" type and the associated necessary addition, subtraction, multiplication, division, etc. printing functions for it; a way that allows for easy printing is that it could just be two unsigned long long values glued together with one representing the lower 19 decimal digits (i.e. the max value for it would be 999 9999 9999 9999 9999), and the other representing the upper 19 digits for a total of 38 digits, which is 1038-1 or 127 bits
The fact that the full 15 sums required don't fit in 64 bits, however, makes me concerned that the question was perhaps worded badly and wanted something different from what you wrote.
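For the second option above (a compiler-provided 128-bit type), here is a minimal sketch of what that could look like. It assumes a GCC/Clang-style compiler with unsigned __int128; print_u128 is a helper I'm adding purely for illustration, since printf has no conversion specifier for 128-bit integers. With 128 bits, all 15 sums fit and the overflow check from mul is no longer needed.

#include <stdio.h>

/* Hypothetical helper: print an unsigned __int128 in decimal. */
static void print_u128(unsigned __int128 v)
{
    char buf[40];                 /* 39 digits max for 128 bits, plus NUL */
    int i = (int)sizeof buf;
    buf[--i] = '\0';
    do {
        buf[--i] = (char)('0' + (int)(v % 10));
        v /= 10;
    } while (v != 0);
    fputs(&buf[i], stdout);
}

int main(void)
{
    unsigned long long n = 1, d = 0;
    for (int i = 1; i <= 15; i++) {
        /* same recurrences as in the program above */
        d += n * 2;
        n += d * 2 + 1;
        unsigned __int128 sum = (unsigned __int128)n * (n - 1) / 2;
        printf("%2d  %20llu  %20llu  ", i, n, n + d);
        print_u128(sum);
        putchar('\n');
    }
    return 0;
}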
Edit
To prove this works, we must first establish some rules:
For any values n and m, 0 ≤ n < m must be true, meaning n == m is forbidden (else we don't have a couple, a.k.a. "ordered pair").
n and m must both be integers.
With that out of the way, consider an algorithm for computing the sum of an arithmetic series starting at a and ending at, and including, b with a difference of +1 between each successive term:
(b - a + 1)(b + a)
S(a, b) = ------------------
2
b² - a² + b + a
= ---------------
2
b(1 + b) + a(1 - a)
= -------------------
2
If such a series begins at a=1, you can derive a simpler formula:
b(b + 1)
S(b) = --------
2
Applying this to your problem, you want to know how to find values such that the following is true:
S(n-1) = S(n+1, m)
After applying the arguments, the result looks like this:
(n-1)n m(1 + m) + (n+1)[1 - (n+1)]
------ = ---------------------------
2 2
(n-1)n = m(1 + m) + (n+1)(1 - n - 1)
n² - n = m² + m + (n+1)(-n)
n² - n = m² + m - n² - n
2n² = m² + m
While not important for my purposes, it's perhaps worth noting that m² + m can be rewritten as m(m+1), and the 2n² on the left means one of m and m+1 must be even. Moreover, since m and m+1 are coprime, one of them must be a perfect square and the other twice a perfect square; in other words, 2n² = m(m+1) = 2x²y². You can find another equally valid solution using x and y to generate the values of n and m, but I won't demonstrate that here.
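As a quick sanity check of 2n² = m² + m (my addition): the second couple (n, m) = (35, 49) gives 2·35² = 2450 and 49² + 49 = 2401 + 49 = 2450, so the relation holds.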
Given the equations for n(i+1), m(i+1), and d(i+1):
d(i+1) = d(i) + 2n(i)
= m(i) + n(i)
n(i+1) = 2d(i+1) + n(i) + 1
= 2m(i) + 3n(i) + 1
m(i+1) = d(i+1) + n(i+1)
= 3m(i) + 4n(i) + 1
And the starting conditions:
n(0) = 1
d(0) = 0
m(0) = 1
We can determine whether they actually work by substituting i+2 in place of i in all cases and finding whether we end up with the same equation. Assuming f(n(i)) = 2n²(i) and g(m(i)) = m(i) ⋅ (m(i) + 1), the equation f(n(i+2)) = g(m(i+2)) reduces to f(n(i)) = g(m(i)), proving the equations work for any couple:
f(n(i+2))
= g(m(i+2))
f(2m(i+1) + 3n(i+1) + 1)
= g((3m(i+1) + 4n(i+1) + 1))
2 ⋅ (12m(i) + 17n(i) + 6)²
= (17m(i) + 24n(i) + 8) ⋅ (17m(i) + 24n(i) + 8 + 1)
2 ⋅ (144m²(i) + 408m(i)⋅n(i) + 144m(i) + 289n²(i) + 204n(i) + 36)
= 289m²(i) + 816m(i)⋅n(i) + 289m(i) + 576n²(i) + 408n(i) + 72
288m²(i) + 816m(i)⋅n(i) + 288m(i) + 578n²(i) + 408n(i) + 72
= 289m²(i) + 816m(i)⋅n(i) + 289m(i) + 576n²(i) + 408n(i) + 72
2n²(i)
= m²(i) + m(i)
f(n(i))
= g(m(i))
If you're lost toward the end, I simply subtracted 288m²(i) + 816m(i)⋅n(i) + 288m(i) + 576n²(i) + 408n(i) + 72 from both sides of the equation, yielding 2n²(i) = m²(i) + m(i).
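As a concrete check of the recurrences themselves (my addition): starting from (n(1), m(1)) = (6, 8), we get n(2) = 2m(1) + 3n(1) + 1 = 16 + 18 + 1 = 35 and m(2) = 3m(1) + 4n(1) + 1 = 24 + 24 + 1 = 49, which matches the second couple in the table.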
I need to compute the mathematical expression floor(ln(u)/ln(1-p)) for 0 < u < 1 and 0 < p < 1 in C on an embedded processor with no floating-point arithmetic and no ln function. The result is a positive integer. I know about the limit cases (p=0); I'll deal with them later...
I imagine that the solution involves having u and p range over 0..UINT16_MAX, and appeal to a lookup table for the logarithm, but I cannot figure out how exactly: what does the lookup table map to?
The result need not be 100% exact; approximations are OK.
Thanks!
Since the logarithm is used in both dividend and divisor, there is no need to use log(); we can use log2() instead. Due to the restrictions on the inputs u and p the logarithms are known to be both negative, so we can restrict ourselves to compute the positive quantity -log2().
We can use fixed-point arithmetic to compute the logarithm. We do so by multiplying the original input by a sequence of factors of decreasing magnitude that approach 1. Considering each of the factors in sequence, we multiply the input only by those factors that result in a product closer to 1, but without exceeding it. While doing so, we sum the log2() of the factors that "fit". At the end of this procedure we wind up with a number very close to 1 as our final product, and a sum that represents the binary logarithm.
This process is known in the literature as multiplicative normalization or pseudo division, and some early publications describing it are the works by De Lugish and Meggitt. The latter indicates that the origin is basically Henry Briggs's method for computing common logarithms.
B. de Lugish. "A Class of Algorithms for Automatic Evaluation of Functions and Computations in a Digital Computer". PhD thesis, Dept. of Computer Science, University of Illinois, Urbana, 1970.
J. E. Meggitt. "Pseudo division and pseudo multiplication processes". IBM Journal of Research and Development, Vol. 6, No. 2, April 1962, pp. 210-226
As the chosen set of factors comprises 2i and (1+2-i) the necessary multiplications can be performed without the need for a multiplication instruction: the products can be computed by either shift or shift plus add.
Since the inputs u and p are purely fractional numbers with 16 bits, we may want to choose a 5.16 fixed-point result for the logarithm. By simply dividing the two logarithm values, we remove the fixed-point scale factor and apply a floor() operation at the same time, because for positive numbers, floor(x) is identical to trunc(x) and integer division is truncating.
Note that the fixed-point computation of the logarithm results in large relative error for inputs near 1. This in turn means the entire function computed using fixed-point arithmetic may deliver results significantly different from the reference if p is small. An example of this is the following test case: u=55af p=0052 res=848 ref=874.
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
/* input x is a 0.16 fixed-point number in [0,1)
function returns -log2(x) as a 5.16 fixed-point number in (0, 16]
*/
uint32_t nlog2_16 (uint16_t x)
{
uint32_t r = 0;
uint32_t t, a = x;
/* try factors 2**i with i = 8, 4, 2, 1 */
if ((t = a << 8 ) < 0x10000) { a = t; r += 0x80000; }
if ((t = a << 4 ) < 0x10000) { a = t; r += 0x40000; }
if ((t = a << 2 ) < 0x10000) { a = t; r += 0x20000; }
if ((t = a << 1 ) < 0x10000) { a = t; r += 0x10000; }
/* try factors (1+2**(-i)) with i = 1, .., 16 */
if ((t = a + (a >> 1)) < 0x10000) { a = t; r += 0x095c0; }
if ((t = a + (a >> 2)) < 0x10000) { a = t; r += 0x0526a; }
if ((t = a + (a >> 3)) < 0x10000) { a = t; r += 0x02b80; }
if ((t = a + (a >> 4)) < 0x10000) { a = t; r += 0x01664; }
if ((t = a + (a >> 5)) < 0x10000) { a = t; r += 0x00b5d; }
if ((t = a + (a >> 6)) < 0x10000) { a = t; r += 0x005ba; }
if ((t = a + (a >> 7)) < 0x10000) { a = t; r += 0x002e0; }
if ((t = a + (a >> 8)) < 0x10000) { a = t; r += 0x00171; }
if ((t = a + (a >> 9)) < 0x10000) { a = t; r += 0x000b8; }
if ((t = a + (a >> 10)) < 0x10000) { a = t; r += 0x0005c; }
if ((t = a + (a >> 11)) < 0x10000) { a = t; r += 0x0002e; }
if ((t = a + (a >> 12)) < 0x10000) { a = t; r += 0x00017; }
if ((t = a + (a >> 13)) < 0x10000) { a = t; r += 0x0000c; }
if ((t = a + (a >> 14)) < 0x10000) { a = t; r += 0x00006; }
if ((t = a + (a >> 15)) < 0x10000) { a = t; r += 0x00003; }
if ((t = a + (a >> 16)) < 0x10000) { a = t; r += 0x00001; }
return r;
}
/* Compute floor(log(u)/log(1-p)) for 0 < u < 1 and 0 < p < 1,
where 'u' and 'p' are represented as 0.16 fixed-point numbers
Result is an integer in range [0, 1048576]
*/
uint32_t func (uint16_t u, uint16_t p)
{
uint16_t one_minus_p = 0x10000 - p; // 1.0 - p
uint32_t log_u = nlog2_16 (u);
uint32_t log_p = nlog2_16 (one_minus_p);
uint32_t res = log_u / log_p; // divide and floor in one go
return res;
}
The maximum value of this function basically depends on the precision limit; that is, how arbitrarily close to the limits (u -> 0) or (1 - p -> 1) the fixed point values can be.
If we assume (k) fractional bits, e.g., with the limits: u = (2^-k) and 1 - p = 1 - (2^-k),
then the maximum value is: k / (k - log2(2^k - 1))
(Since the expression is a ratio of logarithms, we are free to use any base, e.g., lb(x) or log2.)
Unlike njuffa's answer, I went with a lookup table approach, settling on k = 10 fractional bits to represent 0 < frac(u) < 1024 and 0 < frac(p) < 1024. This requires a log table with 2^k entries. Using 32-bit table values, we're only looking at a 4KiB table.
Any more than that, and you are using enough memory that you could seriously consider using the relevant parts of a 'soft-float' library. e.g., k = 16 would yield a 256KiB LUT.
We're computing the values - log2(i / 1024.0) for 0 < i < 1024. Since these values are in the open interval (0, k), we only need 4 binary digits to store the integral part. So we store the precomputed LUT in 32-bit [4.28] fixed-point format:
uint32_t lut[1024]; /* never use lut[0] */
for (uint32_t i = 1; i < 1024; i++)
    lut[i] = (uint32_t) (-(log2(i / 1024.0) * 268435456.0));
Given: u, p represented by [0.10] fixed-point values in [1, 1023] :
uint32_t func (uint16_t u, uint16_t p)
{
/* assert: 0 < u, p < 1024 */
return lut[u] / lut[1024 - p];
}
We can easily test all valid (u, p) pairs against the 'naive' floating-point evaluation:
floor(log(u / 1024.0) / log(1.0 - p / 1024.0))
and only get a mismatch (+1 too high) on the following cases:
u = 193, p = 1 : 1708 vs 1707 (1.7079978488147417e+03)
u = 250, p = 384 : 3 vs 2 (2.9999999999999996e+00)
u = 413, p = 4 : 232 vs 231 (2.3199989016957960e+02)
u = 603, p = 1 : 542 vs 541 (5.4199909906444600e+02)
u = 680, p = 1 : 419 vs 418 (4.1899938077226307e+02)
Finally, it turns out that using the natural logarithm in a [3.29] fixed-point format gives us even higher precision, where:
lut[i] = (uint32_t) (-(log(i / 1024.0) * 536870912.0));
only yields a single 'mismatch', though 'bignum' precision suggests it's correct:
u = 250, p = 384 : 3 vs 2 (2.9999999999999996e+00)
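Putting the pieces together, a self-contained version of the log2-based [4.28] variant might look like the sketch below (my own assembly of the fragments above, not the answerer's exact program). For u = 512/1024 = 0.5 and p = 32/1024 = 0.03125, the expected result is floor(log(0.5)/log(0.96875)) = 21.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t lut[1024];   /* [4.28] fixed-point values of -log2(i/1024); lut[0] unused */

static void init_lut(void)
{
    for (uint32_t i = 1; i < 1024; i++)
        lut[i] = (uint32_t)(-(log2(i / 1024.0) * 268435456.0));
}

/* u and p are [0.10] fixed-point values in [1, 1023] */
static uint32_t func(uint16_t u, uint16_t p)
{
    return lut[u] / lut[1024 - p];
}

int main(void)
{
    init_lut();
    /* u = 512/1024 = 0.5, p = 32/1024 = 0.03125 */
    printf("%u\n", (unsigned)func(512, 32));   /* expect 21 */
    return 0;
}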
I want to check whether the / operation has a remainder or not:
int x = 0;
if (x = 16 / 4) and there is no remainder:
then x = x - 1;
if (x = 16 / 5) and the remainder is not zero:
then x = x + 1;
How do I check if there is a remainder in C? And
how do I implement it?
First, you need the % remainder operator:
if (x = 16 % 4){
printf("remainder in X");
}
Note: it will not work with float/double; in that case, you need to use fmod(double numer, double denom).
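For the floating-point case, a minimal fmod example (my own sketch):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double r = fmod(16.5, 4.0);         /* remainder of 16.5 / 4.0 */
    if (r == 0.0)
        printf("no remainder\n");
    else
        printf("remainder: %f\n", r);   /* prints "remainder: 0.500000" */
    return 0;
}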
Second, to implement it as you wish:
if (x = 16 / 4) and there is no remainder, then x = x - 1;
if (x = 16 / 5) and there is a remainder, then x = x + 1;
Using the , (comma) operator, you can do it in a single step as follows (read the comments):
int main(){
int x = 0, // Quotient.
n = 16, // Numerator
d = 4; // Denominator
// Remainder is not saved
if(x = n / d, n % d) // == x = n / d; if(n % d)
printf("Remainder not zero, x + 1 = %d", (x + 1));
else
printf("Remainder is zero, x - 1 = %d", (x - 1));
return 1;
}
Notice that in the if condition I am using the comma operator (,). To understand the , operator, read: comma operator with an example.
If you want to find the remainder of an integer division, you can use the modulus operator (%):
if (16 % 4 == 0)
{
    x = x - 1;
}
else
{
    x = x + 1;
}
Use the % operator to find the remainder of a division:
if (number % divisor == 0)
{
//code for perfect divisor
}
else
{
//the number doesn't divide perfectly by divisor
}
Use the modulus operator for this purpose.
If (x % y == 0), then there is no remainder.
In integer division, only the integer part of the result is returned; the fractional part is discarded.
You can use the modulus operator, which deals with the remainder.
The modulus operator (represented by the % symbol in C) computes the remainder. So:
x = 16 % 4;
x will be 0.
x = 16 % 5;
x will be 1.
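Related to the original goal of getting both the quotient and the remainder: not mentioned in the answers above, but the standard library's div function (from <stdlib.h>) returns the truncated quotient and the remainder of an integer division in one call:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    div_t qr = div(16, 5);                      /* truncating division */
    printf("q=%d r=%d\n", qr.quot, qr.rem);     /* prints "q=3 r=1" */
    return 0;
}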