I've run into a problem while working on my school project in C.
I am working with two equations:
x = a + u*i
y = b + v*j
The values of a, b, u, and v are known. I need to multiply u and v by natural numbers (i, j) until the equality x = y occurs.
(Note: i and j do not have to have the same value; they may be different numbers when x = y.)
And here comes the problem: I don't know how to solve the equations when i and j differ. I suppose a possible solution is by loops ("cycles") or by Diophantine equations (I don't know how to apply them to the equations above).
I am a beginner and stuck. Can someone help me with a solution in C code, please? Thank you.
EDIT
If I just want to solve the equations with loops, I think I know what I should do, but I just don't know how to write it in C:
1) On every step of the loop, compute x and y.
2) If x is lower than y, increase i by 1; otherwise increase j by 1.
3) Repeat until x = y.
4) Sometimes there is no solution at all, so I have to add a condition so it doesn't run forever (see the sketch below).
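A minimal sketch of that loop in C, assuming a, b, u, v are positive integers; the starting values and the iteration cap are only illustrative:

#include <stdio.h>

int main(void) {
    long long a = 3, b = 7, u = 5, v = 3;   /* example values, replace with your own */
    long long i = 1, j = 1;
    long long x = a + u * i, y = b + v * j;
    long long steps = 0, limit = 1000000;   /* step 4: give up after too many steps */

    while (x != y && steps < limit) {       /* steps 1-3 */
        if (x < y)
            i++;                            /* x is lower, so grow x */
        else
            j++;                            /* otherwise grow y */
        x = a + u * i;
        y = b + v * j;
        steps++;
    }

    if (x == y)
        printf("x = y = %lld at i = %lld, j = %lld\n", x, i, j);
    else
        printf("no solution found within %lld steps\n", limit);
    return 0;
}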
Rewriting your equations, you get:
a + u i = b + v j
With the solution
i = (b + v j - a) / u
Now you need to find a natural number j, such that i becomes natural. Obviously, b + v j - a has to be a positive multiple of u. So:
b + v j - a = k * u,   k ∈ N
j = (k * u + a - b) / v
Now k * u + a - b must be a multiple of v. This is not always possible. The easiest way to check is to iterate the first few possible ks and see where it gets you. If you get a natural number, you can plug the solution of j into the above equation and you are guaranteed to get a natural i.
This assumes that x, y, a, b, u, and v are real numbers. If they are also integers, you might get a bit farther than that.
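If everything is an integer, the iteration over k can be coded directly. A hedged C sketch (the inputs and the bound on k are only illustrative):

#include <stdio.h>

int main(void) {
    long long a = 3, b = 7, u = 5, v = 3;   /* example integer inputs */
    long long limit = 1000000;              /* arbitrary cap on k */

    for (long long k = 1; k <= limit; k++) {
        long long t = k * u + a - b;        /* must be a positive multiple of v */
        if (t > 0 && t % v == 0) {
            long long j = t / v;
            long long i = (b + v * j - a) / u;   /* natural by construction (equals k) */
            printf("i = %lld, j = %lld, x = y = %lld\n", i, j, a + u * i);
            return 0;
        }
    }
    printf("no solution with k up to %lld\n", limit);
    return 0;
}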
For an assignment I need to solve a mathematical problem. I narrowed it down to the following:
Let A[1, ... ,n] be an array of n integers.
Let y be an integer constant.
Now, I have to write an algorithm that finds the minimum of M(y) in O(n) time:
M(y) = Sum |A[i] - y|, i = 1 to n. Note that I do not just take A[i] - y, but the absolute value |A[i] - y|.
For clarity, I also put this equation in Wolfram Alpha.
I have considered the least squares method, but that will not yield the minimum of M(y); it gives more of an average value of A, I think. As I'm taking the absolute value of A[i] - y, there is also no way I can differentiate this function with respect to y. Also, I can't just come up with any algorithm, because I have to do it in O(n) time. Also, I believe there can be more than one correct answer for y in some cases; in that case, the value of y must be equal to one of the integer elements of A.
This has really been eating me for a whole week now and I still haven't figured it out. Can anyone please teach me the way to go or point me in the right direction? I'm stuck. Thank you so much for your help.
You want to pick a y for which M(y) = sum(abs(A[i] - y)) is minimal. Let's assume every A[i] is positive (this does not change the result, because the problem is invariant under translation).
Let's start with two simple observations. First, if you pick y such that y < min(A) or y > max(A), you end up with a greater value of M(y) than if you picked y with min(A) <= y <= max(A). Also, there is a single local minimum, or range of minima, of M(y), because M(y) is convex.
So we can start by picking some y in the interval [min(A) .. max(A)] and try to move this value around so that we get a smaller M(y). To make things easier to understand, let's sort A and pick an i in [1 .. n] (so y = A[i]).
There are three cases to consider.
If A[i+1] > A[i], and either {n is odd and i < (n+1)/2} or {n is even and i < n/2}, then M(A[i+1]) < M(A[i]).
This is because, going from M(A[i]) to M(A[i+1]), the number of terms that decrease (that is, n-i) is greater than the number of terms that increase (that is, i), and the increase or decrease is always by the same amount. In the case where n is odd, i < (n+1)/2 <=> 2*i < n+1 <=> 2*i < n, because 2*i and n+1 are both even, so 2*i < n+1 implies 2*i <= n-1 < n.
In more formal terms, M(A[i]) = sum(A[i]-A[s]) + sum(A[g]-A[i]), where s and g represent indices such that A[s] < A[i] and A[g] > A[i]. So if A[i+1] > A[i], then M(A[i+1]) = sum(A[i]-A[s]) + i*(A[i+1]-A[i]) + sum(A[g]-A[i]) - (n-i)*(A[i+1]-A[i]) = M(A[i]) + (2*i-n)*(A[i+1]-A[i]). Since 2*i < n and A[i+1] > A[i], (2*i-n)*(A[i+1]-A[i]) < 0, so M(A[i+1]) < M(A[i]).
Similarly, if A[i-1] < A[i], and either {n is odd and i > (n+1)/2} or {n is even and i > (n/2)+1}, then M(A[i-1]) < M(A[i]).
Finally, if {n is odd and i = (n+1)/2} or {n is even and i = (n/2) or (n/2)+1}, then you have a minimum, because decrementing or incrementing i will eventually lead you to the first or second case, respectively. There are leftover possible values for i, but all of them lead to A[i] being a minimum too.
The median of A is exactly the value A[i] where i satisfies the last case. If the number of elements in A is odd, then you have exactly one such value, y = A[(n+1)/2] (but possibly multiple indices for it); if it's even, then you have a range (which may contain just one integer) of such values, A[n/2] <= y <= A[n/2+1].
There is a standard C++ algorithm that can help you find the median in O(n) time: std::nth_element. If you are using another language, look up the median of medians algorithm (which Nico Schertler pointed out) or even introselect (which is what nth_element typically uses).
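A short C sketch of this conclusion: it finds a median and evaluates M there. It uses qsort purely for brevity, which is O(n log n); for a true O(n) bound, replace the sort with std::nth_element, quickselect, or median of medians as discussed above.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Returns a y minimizing M(y) = sum |A[i] - y|: a median of A. */
static int minimizing_y(int *A, int n) {
    qsort(A, n, sizeof *A, cmp_int);   /* replace with a selection algorithm for O(n) */
    return A[(n - 1) / 2];             /* lower median (0-based); for even n any value up to A[n/2] also works */
}

static long long M(const int *A, int n, int y) {
    long long sum = 0;
    for (int i = 0; i < n; i++)
        sum += llabs((long long)A[i] - y);
    return sum;
}

int main(void) {
    int A[] = {2, 9, 4, 7, 5};
    int n = (int)(sizeof A / sizeof A[0]);
    int y = minimizing_y(A, n);
    printf("y = %d, M(y) = %lld\n", y, M(A, n, y));
    return 0;
}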
I have come across this problem many times but I am unable to solve it. There is always some case or other that gives a wrong answer, or else the program I write is too slow. Formally, I am talking about calculating
nCk mod p, where p is a prime, n is a large number, and 1 <= k <= n.
What I have tried:
I know the recursive formulation of the binomial coefficient and how to model it as a dynamic programming problem, but I feel that it is slow. The recursive formulation is (nCk) + (nC(k-1)) = ((n+1)Ck). I took care of the modulus while storing values in the array to avoid overflows, but I am not sure that just doing a mod p on the result will avoid all overflows, as it may happen that one needs to remove…
To compute nCr, there's a simple algorithm based on the rule nCr = (n - 1)C(r - 1) * n / r:
def nCr(n, r):
    if r == 0:
        return 1
    return n * nCr(n - 1, r - 1) // r
Now, in modular arithmetic we don't quite have division, but we have modular inverses, which (when modding by a prime) are just as good:
def nCrModP(n, r, p):
    if r == 0:
        return 1
    return n * nCrModP(n - 1, r - 1, p) * modinv(r, p) % p
Here's one implementation of modinv on Rosetta Code.
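Since the surrounding questions are in C, here is a minimal C sketch of modinv for a prime modulus, using Fermat's little theorem (a^(p-2) mod p) rather than the extended Euclidean algorithm from the linked page; it assumes p fits in 32 bits so the products below do not overflow 64 bits.

#include <stdint.h>

/* (base^exp) % mod by repeated squaring; assumes mod < 2^32. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

/* Modular inverse of a mod p, valid when p is prime and a is not a multiple of p:
   by Fermat's little theorem, a^(p-1) = 1 (mod p), so a^(p-2) is the inverse. */
static uint64_t modinv(uint64_t a, uint64_t p) {
    return modpow(a, p - 2, p);
}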
Not sure what you mean by "storing values in array", but I assume the array serves as a lookup table while running, to avoid redundant calculations and speed things up. This should take care of the speed problem. Regarding the overflows: you can perform the modulo operation at any stage of the computation and repeat it as much as you want; the result will be correct.
First, let's work with the case where p is relatively small.
Take the base-p expansions of n and k: write n = n_0 + n_1 p + n_2 p^2 + ... + n_m p^m and k = k_0 + k_1 p + ... + k_m p^m, where each n_i and each k_i is at least 0 but less than p. A theorem (due to Édouard Lucas) states that C(n,k) = C(n_0, k_0) * C(n_1, k_1) * ... * C(n_m, k_m) (mod p). This reduces the problem to taking a mod-p product of numbers computed as in the "n is relatively small" case below.
Second, if n is relatively small, you can just compute binomial coefficients using dynamic programming on the formula C(n,k) = C(n-1,k-1) + C(n-1,k), reducing mod p at each step. Or do something more clever.
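A hedged C sketch of that dynamic programming approach, keeping one row of Pascal's triangle and reducing mod p at every addition (the helper name is mine):

#include <stdlib.h>

/* C(n, k) mod p via Pascal's rule, O(n*k) time and O(k) space;
   practical only when n is relatively small. */
static long long binom_dp(int n, int k, long long p) {
    long long *row = calloc(k + 1, sizeof *row);
    row[0] = 1;                                   /* C(i, 0) = 1 for every i */
    for (int i = 1; i <= n; i++)
        for (int j = (i < k ? i : k); j >= 1; j--)
            row[j] = (row[j] + row[j - 1]) % p;   /* C(i,j) = C(i-1,j) + C(i-1,j-1) */
    long long result = row[k];
    free(row);
    return result;
}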
Third, if k is relatively small (and less than p), you should be able to compute n!/(k!(n-k)!) mod p by computing n!/(n-k)! as n * (n-1) * ... * (n-k+1), reducing modulo p after each product, then multiplying by the modular inverses of each number between 1 and k.
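Putting the first and third points together, a hedged C sketch of Lucas' theorem; the helper names are mine, and the modular-exponentiation helper is repeated here (same idea as in the earlier modinv sketch) so the block stands alone. It assumes p is prime and fits in 32 bits.

#include <stdint.h>

/* (base^exp) % mod by repeated squaring; assumes mod < 2^32. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t r = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) r = r * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return r;
}

/* C(n, k) mod p for 0 <= n, k < p: multiply n*(n-1)*...*(n-k+1) mod p,
   then multiply by the inverse of k! (the "k relatively small" method above). */
static uint64_t binom_small(uint64_t n, uint64_t k, uint64_t p) {
    if (k > n) return 0;
    uint64_t num = 1, den = 1;
    for (uint64_t i = 0; i < k; i++) {
        num = num * ((n - i) % p) % p;
        den = den * ((i + 1) % p) % p;
    }
    return num * modpow(den, p - 2, p) % p;   /* den^(p-2) is den^-1 mod p */
}

/* Lucas' theorem: C(n, k) mod p is the product of C(n_i, k_i) mod p
   over the base-p digits of n and k. */
static uint64_t binom_mod_p(uint64_t n, uint64_t k, uint64_t p) {
    uint64_t result = 1;
    while ((n > 0 || k > 0) && result != 0) {
        result = result * binom_small(n % p, k % p, p) % p;
        n /= p;
        k /= p;
    }
    return result;
}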
I am looking for a fast algorithm:
I have an int array of size n; the goal is to find all patterns in the array where
x1, x2, x3 are different elements of the array such that x1 + x2 = x3.
For example, if the int array of size 3 is [1, 2, 3], then there is only one possibility: 1 + 2 = 3 (1 + 2 and 2 + 1 count as the same pattern).
I am thinking about using pairs and hash maps to make the algorithm fast (the fastest one I have now is still O(n^2)).
Please share your ideas for this problem. Thank you.
Edit: The answer below applies to a version of this problem in which you only want one triplet that adds up like that. When you want all of them, since there are potentially at least O(n^2) possible outputs (as pointed out by ex0du5), and even O(n^3) in pathological cases of repeated elements, you're not going to beat the simple O(n^2) algorithm based on hashing (mapping from a value to the list of indices with that value).
This is basically the 3SUM problem. Without potentially unboundedly large elements, the best known algorithms are approximately O(n^2), but we've only proved that it can't be faster than O(n lg n) for most models of computation.
If the integer elements lie in the range [u, v], you can do a slightly different version of this in O(n + (v-u) lg (v-u)) with an FFT. I'm going to describe a process to transform this problem into that one, solve it there, and then figure out the answer to your problem based on this transformation.
The problem that I know how to solve with FFT is to find a length-3 arithmetic sequence in an array: that is, a sequence a, b, c with c - b = b - a, or equivalently, a + c = 2b.
Unfortunately, the last step of the transformation back isn't as fast as I'd like, but I'll talk about that when we get there.
Let's call your original array X, which contains integers x_1, ..., x_n. We want to find indices i, j, k such that x_i + x_j = x_k.
Find the minimum u and maximum v of X in O(n) time. Let u' be min(u, u*2) and v' be max(v, v*2).
Construct a binary array (bitstring) Z of length v' - u' + 1; Z[i] will be true if either X or its double [x_1*2, ..., x_n*2] contains u' + i. This is O(n) to initialize; just walk over each element of X and set the two corresponding elements of Z.
As we're building this array, we can save the indices of any duplicates we find into an auxiliary list Y. Once Z is complete, we just check for 2 * x_i for each x_i in Y. If any are present, we're done; otherwise the duplicates are irrelevant, and we can forget about Y. (The only situation slightly more complicated is if 0 is repeated; then we need three distinct copies of it to get a solution.)
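For concreteness, a small C sketch of just these two steps, building Z over [u', v'] and marking both X and its doubled copy (the duplicate bookkeeping with Y described just above is omitted; all names here are illustrative):

#include <stdlib.h>

/* Returns the bit array Z for X of length n; stores u' and v' through the out-parameters. */
static unsigned char *build_Z(const int *X, int n, long long *u_out, long long *v_out) {
    long long u = X[0], v = X[0];
    for (int i = 1; i < n; i++) {            /* min and max of X in O(n) */
        if (X[i] < u) u = X[i];
        if (X[i] > v) v = X[i];
    }
    long long up = (2 * u < u) ? 2 * u : u;  /* u' = min(u, 2u) */
    long long vp = (2 * v > v) ? 2 * v : v;  /* v' = max(v, 2v) */
    unsigned char *Z = calloc((size_t)(vp - up + 1), 1);
    for (int i = 0; i < n; i++) {            /* mark x_i and 2*x_i */
        Z[X[i] - up] = 1;
        Z[2LL * X[i] - up] = 1;
    }
    *u_out = up;
    *v_out = vp;
    return Z;
}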
Now, a solution to your problem, i.e. x_i + x_j = x_k, will appear in Z as three evenly-spaced ones, since some simple algebraic manipulations give us 2*x_j - x_k = x_k - 2*x_i. Note that the elements on the ends are our special doubled entries (from 2X) and the one in the middle is a regular entry (from X).
Consider Z as a representation of a polynomial p, where the coefficient for the term of degree i is Z[i]. If X is [1, 2, 3, 5], then Z is 1111110001 (because we have 1, 2, 3, 4, 5, 6, and 10); p is then 1 + x + x^2 + x^3 + x^4 + x^5 + x^9.
Now, remember from high school algebra that the coefficient of x^c in the product of two polynomials is the sum over all a, b with a + b = c of the first polynomial's coefficient for x^a times the second's coefficient for x^b. So, if we consider q = p^2, the coefficient of x^(2j) (for a j with Z[j] = 1) will be the sum over all i of Z[i] * Z[2*j - i]. But since Z is binary, that's exactly the number of triplets i,j,k which are evenly-spaced ones in Z. Note that (j, j, j) is always such a triplet, so we only care about coefficients with values > 1.
We can then use a Fast Fourier Transform to find p^2 in O(|Z| log |Z|) time, where |Z| is v' - u' + 1. We get out another array of coefficients; call it W.
Loop over each x_k in X. (Recall that our desired evenly-spaced ones are all centered on an element of X, not 2*X.) If the corresponding W for twice this element, i.e. W[2*(x_k - u')], is 1, we know it's not the center of any nontrivial progressions and we can skip it. (As argued before, it should only be a positive integer.)
Otherwise, it might be the center of a progression that we want (so we need to find i and j). But, unfortunately, it might also be the center of a progression that doesn't have our desired form. So we need to check. Loop over the other elements x_i of X, and check whether there's a triple 2*x_i, x_k, 2*x_j for some j (by checking Z[2*(x_k - x_i) - u']). If so, we have an answer; if we make it through all of X without a hit, then the FFT found only spurious answers, and we have to check another element of W.
This last step is therefore O(n * (1 + the number of x_k with W[2*(x_k - u')] > 1 that aren't actually solutions)), which could be as bad as O(n^2), which is obviously not okay. There should be a way to avoid generating these spurious answers in the output W; if we knew that any appropriate W coefficient definitely had an answer, this last step would be O(n) and all would be well.
I think it's possible to use a somewhat different polynomial to do this, but I haven't gotten it to actually work. I'll think about it some more....
Partially based on this answer.
It has to be at least O(n^2), as there are n(n-1)/2 different sums possible to check against the other members. You have to compute all of those, because any pair's sum may equal any other member (start with one example and permute all the elements to convince yourself that all must be checked). Or look at the Fibonacci sequence for something concrete.
So calculating all of those and looking up members in a hash table gives amortised O(n^2). Or use an ordered tree if you need the best worst case.
You essentially need to find all the different sums of value pairs, so I don't think you're going to do any better than O(n^2). But you can optimize by sorting the list and removing duplicate values, then only pairing a value with anything equal or greater, and stopping when the sum exceeds the maximum value in the list.
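A hedged C sketch of that O(n^2) idea: sort once, then for each candidate x3 run a two-pointer scan over the rest of the array. For simplicity it reports a match each time the two pointers meet one and does not exhaustively enumerate every combination of duplicate values.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Prints pairs v[lo] + v[hi] = v[k] over distinct indices; O(n^2) after the sort. */
static void find_sum_triples(int *v, int n) {
    qsort(v, n, sizeof *v, cmp_int);
    for (int k = 0; k < n; k++) {            /* candidate x3 = v[k] */
        int lo = 0, hi = n - 1;
        while (lo < hi) {
            if (lo == k) { lo++; continue; } /* x1 and x2 must be elements other than x3 */
            if (hi == k) { hi--; continue; }
            long long sum = (long long)v[lo] + v[hi];
            if (sum == v[k]) {
                printf("%d + %d = %d\n", v[lo], v[hi], v[k]);
                lo++;
                hi--;
            } else if (sum < v[k]) {
                lo++;
            } else {
                hi--;
            }
        }
    }
}

int main(void) {
    int v[] = {1, 2, 3, 5, 8};
    find_sum_triples(v, (int)(sizeof v / sizeof v[0]));
    return 0;
}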
How can I prove if this language is regular or not?
L = {a^n b^n : n ≥ 1} union {a^n b^{n+2} : n ≥ 1}
I'll give an approach and a sketch of a proof; there may be some holes in it that I believe you can fill in yourself.
The idea is to use the Myhill-Nerode theorem: show that there is an infinite number of equivalence classes for R_L, and from the theorem you can derive that the language is irregular.
Define two types of sets:
G_j = {a^n b^k | n - k = j, k ≥ 1} for each j in [-2, -1, 0, 1, ...]
H_j = {a^j} for each j in [0, 1, ...]
G_illegal = {a,b}* \ (the union of all the G_j and H_j over the ranges above)
It is easy to see that for each x in G_illegal, and for each z in {a,b}*: xz is not in L.
So, for every x,y in G_illegal and for each z in {a,b}*: xz in L <-> yz in L.
Also, for each z in {a,b}* and for each x, y in some G_j [the same j for both]:
if z contains an a, both xz and yz are not in L;
if z = b^j, then xz = a^n b^k b^j = a^n b^n (since k + j = n), so xz is in L. The same applies to y, so yz is in L;
if z = b^{j+2}, then xz = a^n b^k b^{j+2} = a^n b^{n+2} (since k + j + 2 = n + 2), so xz is in L. The same applies to y, so yz is in L;
otherwise, z is b^i with i ≠ j and i ≠ j+2, and you get that both xz and yz are not in L.
So, for every j and for every x,y in G_j and for each z in {a,b}*: xz in L <-> yz in L.
Prove the same for every H_j using the same approach.
Also, it is easy to show that for each x in G_j ∪ H_j and for each y in G_illegal, taking z = b^j gives xz in L and yz not in L.
For x in G_j and y in H_i, taking z = a b^{i+1}, it is easy to see that xz is not in L and yz is in L.
It is easy to see that for x, y in G_j and G_i respectively, or x, y in H_j and H_i, taking z = b^j gives xz in L while yz is not.
We just proved that the sets we created are actually the equivalence classes of R_L from the Myhill-Nerode theorem, and since we have an infinite number of these sets [we have an H_j and a G_j for every j], we can derive from the theorem that the language L is irregular.
You could just use the pumping lemma for regular languages. It basically says that if a language is regular, then there is an integer n such that any string in the language of length at least n can be split into xyz with |xy| <= n and |y| > 0, and the y part can be pumped while the string stays in the language. That means that if, for every such split, x y^i z is not in the language for some i, then the language is not regular.
The proof goes like this; it is a kind of adversary argument. Suppose someone tells you that this language is regular. Then ask them for a number n > 0. You build a convenient string of length at least n, and you give it to the adversary. They partition the string into x, y, z in any way they want, as long as |xy| <= n. Then you have to pump y (repeat it i times) until you find a string that is not in the language.
In this case, I tell you: give me n. You fix n. Then I tell you: take the string "a^n b^{n+2}", and I tell you to split it. In any way you can split this string, you will always end up with y = a^k for some k > 0, since you are forced to make |xy| <= n and my string begins with a^n. Here is the trick: you give the adversary a string such that, any way they split it, they hand you a part that you can pump. So now we pump y, let's say, 0 times, and you get "a^m b^{n+2}" with m < n, which is not in your language. Done. (In general you may need to pump a different number of times: 0 times, twice, n times, whatever it takes to make the string leave the language; a single such i is enough.)
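The same argument stated compactly (a LaTeX restatement of the split and the pumped-down string, with the same n and k as above):

\[
w = a^n b^{n+2}, \quad w = xyz, \; |xy| \le n, \; |y| \ge 1 \;\Longrightarrow\; y = a^k, \; 1 \le k \le n
\]
\[
x y^0 z = a^{n-k} b^{n+2} \notin L, \quad \text{since } n-k \ne n+2 \text{ and } n-k \ne n.
\]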
The proof of this theorem roughly says that if you have a regular language, then you have an automaton with n states for some fixed n. If a string has more than n characters, then it must go through some cycle in the automaton. If we call x the part of the string before entering the cycle and y the part inside the cycle, it's clear that we can pump y as many times as we want, because we can keep running around the cycle as many times as we want, and the resulting string has to be in the language, because it will be recognized by that automaton. To use the theorem to prove non-regularity, since we don't know what the supposed automaton looks like, we have to leave the choice of n and of the position of the cycle inside the automaton to the adversary (there will be no automaton; you are effectively saying to the adversary: dare to give me an automaton and I will show you it cannot exist).
I'm looking for tips or research papers that will help me compute the sum (i = 0 to k) of X^i * Y, or more explicitly, Y + X^1 * Y + ... + X^k * Y, in CUDA C, where X is an N-by-N matrix and Y is an N-by-1 vector.
You should check out Thrust.
Factor out the Y, then just do a scan (using multiplication as the operator) followed by a reduction (using addition as the operator).
I know this isn't what you're looking for, but can't you factor the Y out and just right-multiply it with the result of sum(i = 0 to k) of X^i?
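Not CUDA-specific, but a plain C sketch of a related way to factor the computation: since X^i * Y = X * (X^(i-1) * Y), the whole sum needs only k matrix-vector products and never forms X^i explicitly; on the GPU each inner loop would become one GEMV-style kernel or library call. Function and variable names here are illustrative.

#include <stdlib.h>

/* s = Y + X*Y + X^2*Y + ... + X^k*Y for an n-by-n row-major matrix X and n-vector Y. */
static void power_sum(const double *X, const double *Y, double *s, int n, int k) {
    double *v = malloc(n * sizeof *v);       /* current term X^i * Y */
    double *w = malloc(n * sizeof *w);       /* scratch for the next product */
    for (int r = 0; r < n; r++) {
        v[r] = Y[r];                         /* i = 0 term */
        s[r] = Y[r];
    }
    for (int i = 1; i <= k; i++) {
        for (int r = 0; r < n; r++) {        /* w = X * v */
            double acc = 0.0;
            for (int c = 0; c < n; c++)
                acc += X[r * n + c] * v[c];
            w[r] = acc;
        }
        for (int r = 0; r < n; r++) {        /* accumulate the term and roll it over */
            s[r] += w[r];
            v[r] = w[r];
        }
    }
    free(v);
    free(w);
}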
Besides factoring out Y from the summation, you could compute the eigenspace of X and subsequently very efficiently compute each X^i (the slowest part of computing your summation will undoubtedly be raising X to a range of powers, so I'll attack that).
More specifically, compute the eigenvectors of X and form the matrix whose columns are those eigenvectors; call this Q (this assumes X is diagonalizable). Using the eigenvalues, we can diagonalize X and create a diagonal matrix D such that
(1) D = Q^-1 X Q
Because D is diagonal, we can very efficiently compute it raised to any power i. Applying (1) we determine that
(2) D^i = (Q^-1 X Q)^i
and furthermore, we can show that (2) is equivalent to
(3) D^i = Q^-1 X^i Q
Finally, we can find any arbitrary X^i efficiently by rearranging our equation and computing
(4) X^i = Q D^i Q^-1
(I wanted to verify my memory here, so I found a reference on Wikipedia).