So, I have four integers and I need to find out the lowest two out of those four. What would be the most efficient way of doing so in C (or any other language)?
Edit: I need a fixed implementation, for the sake of efficiency as this is a very critical operation that is going to be performed thousands of times.
Here's an efficient implementation using sorting networks:
inline void Sort2(int *p0, int *p1)
{
    if (*p0 > *p1)
    {
        const int temp = *p0;
        *p0 = *p1;
        *p1 = temp;
    }
}

inline void Sort4(int *p0, int *p1, int *p2, int *p3)
{
    Sort2(p0, p1);
    Sort2(p2, p3);
    Sort2(p0, p2);
    Sort2(p1, p3);
    Sort2(p1, p2);
}
This takes only 5 compares and up to 5 swaps. The two smallest values end up in *p0 and *p1; you can simply ignore the results in *p2 and *p3.
Note that for a performance-critical application Sort2 can be implemented without branches in one or two instructions on some architectures.
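As a quick cross-check of the comparator count, the same five-comparator network can be sketched and exhaustively verified in Python (the names sort2/two_lowest are mine, not from the C code above):

```python
from itertools import permutations

def sort2(a, i, j):
    # compare-exchange: ensure a[i] <= a[j]
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def two_lowest(w, x, y, z):
    a = [w, x, y, z]
    sort2(a, 0, 1)  # the same five comparators as Sort4 above
    sort2(a, 2, 3)
    sort2(a, 0, 2)
    sort2(a, 1, 3)
    sort2(a, 1, 2)
    return a[0], a[1]  # the two smallest, in ascending order

# exhaustive check over all orderings of four distinct values
assert all(two_lowest(*p) == (1, 2) for p in permutations((1, 2, 3, 4)))
```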
Just write a loop and keep track of the lowest 2 values?
That runs in O(N) time (O(2N) is the same thing), which I think is the best achievable complexity.
The most efficient way? Trying to avoid any extra steps, I got this (in pseudo-code). This will avoid any unnecessary comparisons that you'll get with other more general solutions (specifically ones that don't take advantage of the transitive nature of comparison operations).
Bear in mind that this is only thinking about efficiency, not at all aiming for beautiful code.
if a<=b:
    if b<=c:
        # c too big, which of b and d is smaller?
        if b<=d:
            return (a,b)
        else:
            return (a,d)
    else if b<=d:
        # a and c both < b, and b < d
        return (a,c)
    else:
        # b is > a, c and d. Down to just those three.
        if a<=c:
            if c<=d:
                # a < c < d
                return (a,c)
            else:
                # a and d both < c
                return (a,d)
        else if d<=a:
            # Both c and d < a
            return (c,d)
        else:
            # c < a < d
            return (a,c)
else:
    # b < a
    if a<=c:
        # c too big, which of a and d is smaller?
        if a<=d:
            return (a,b)
        else:
            return (b,d)
    else if a<=d:
        # b and c both < a, and a < d
        return (b,c)
    else:
        # a is > b, c and d. Down to just those three.
        if b<=c:
            if c<=d:
                # b < c < d
                return (b,c)
            else:
                # b and d both < c
                return (b,d)
        else if d<=b:
            # Both c and d < b
            return (c,d)
        else:
            # c < b < d
            return (b,c)
I think this has a worst case of 5 comparisons and a best case of 3 (obviously there's no way of doing it in fewer than 3 comparisons).
You can get away with exactly 4 comparisons and maximally 4 swaps.
inline void swap(int* i, int* j) {
    const int buffer = *j;  // a local temporary; a static here would not be reentrant
    *j = *i;
    *i = buffer;
}

inline void sort2(int* a, int* s) {
    if (*s < a[1])
        swap(s, a + 1);
    if (*s < a[0]) // it is NOT sufficient to say "else if" here
        swap(s, a);
}

inline void sort4(int* a) {
    sort2(a, a + 2);
    sort2(a, a + 3);
}
The result will be sitting in the first two cells, but note that these cells are not necessarily sorted! They're just the two smallest elements.
I would make an array out of them, sort and take the first two values.
You can accomplish it with at most 4 comparisons:
compare the first pair of numbers and let the smaller be a1 and the larger be a2
compare the second pair of numbers and let the smaller be a3 and the larger be a4
if a1 >= a4 return (a3, a4)
(now we know that a1 < a4)
if a3 >= a2 return (a1, a2)
(now we also know that a3 < a2)
return (a1, a3)
To see that this is true, you can check all the combinations of possible returns:
(a1, a2) (a1, a3) (a1, a4)
(a2, a3) (a2, a4)
(a3, a4)
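The four-comparison recipe above can be sketched in Python like so (function name mine; the final pair may come back unordered, since only 4 comparisons are spent):

```python
from itertools import permutations

def two_smallest(w, x, y, z):
    a1, a2 = (w, x) if w <= x else (x, w)  # comparison 1
    a3, a4 = (y, z) if y <= z else (z, y)  # comparison 2
    if a1 >= a4:                           # comparison 3
        return (a3, a4)
    if a3 >= a2:                           # comparison 4
        return (a1, a2)
    return (a1, a3)

# exhaustive check: the returned pair is always the two smallest values
assert all(sorted(two_smallest(*p)) == [1, 2] for p in permutations((1, 2, 3, 4)))
```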
I think you can sort the array and pick the first two elements.
I'm given the task to find the closest value in an array to a given value t. We consider the absolute value.
I came up with the following function in C:
#include <stdio.h>
#include <stdlib.h>

struct tuple
{
    int index;
    int val;
};

typedef struct tuple tuple;

tuple find_closest(int A[], int l, int r, int t)
{
    if(l == r)
    {
        tuple t1;
        t1.val = abs(A[l] - t);
        t1.index = l;
        return t1;
    }
    int m = (l+r)/2;
    tuple t2, t3;
    t2 = find_closest(A, l, m, t);
    t3 = find_closest(A, m+1, r, t);
    if(t2.val < t3.val)
    {
        return t2;
    }
    else
    {
        return t3;
    }
}

int main()
{
    int A[] = {5,7,9,13,15,27,2,3};
    tuple sol;
    sol = find_closest(A, 0, 7, 20);
    printf("%d", sol.index);
    return 0;
}
We learnt about the Divide and Conquer method, which is why I implemented it recursively. I'm trying to compute the asymptotic complexity of my solution to make a statement about the efficiency of my function. Can someone help? I don't think my solution is the most efficient one.
The code performs exactly n-1 comparisons of array values (which is easy to prove in several ways, for example by induction, or by noting that each comparison rejects exactly one element from being the best and you do comparisons until there's exactly one index left). The depth of the recursion is ceil(lg(n)).
An inductive proof looks something like this: let C(n) be the number of times if(t2.val < t3.val) is executed where n=r-l+1. Then C(1) = 0, and for n>1, C(n) = C(a) + C(b) + 1 for some a+b=n, a, b > 0. Then by the induction hypothesis, C(n) = a-1 + b-1 + 1 = a+b - 1 = n - 1. QED. Note that this proof is the same no matter how you choose m as long as l <= m < r.
This isn't a problem that divide-and-conquer helps with unless you are using parallelism, and even then a linear scan has the benefit of using the CPU's cache efficiently so the practical benefit of parallelism will be less (possibly a lot less) than you expect.
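For comparison, the linear scan mentioned above might look like this (a sketch with the same semantics: it returns the index of the closest element):

```python
def closest_index(A, t):
    # single pass: n-1 comparisons, O(1) space, cache-friendly
    best = 0
    for i in range(1, len(A)):
        if abs(A[i] - t) < abs(A[best] - t):
            best = i
    return best

print(closest_index([5, 7, 9, 13, 15, 27, 2, 3], 20))  # prints 4 (A[4] == 15)
```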
I have 3 20x2 double arrays A, B and C. I want to combine them in one 3d array D so that D(:,:,1) will return A, D(:,:,2) will return B and D(:,:,3) will return C.
Using cat to concatenate along the third dimension might be the elegant way -
D = cat(3,A,B,C)
Here, the first input argument 3 specifies the dimension along which the concatenation is to be performed.
Like this?
A = 1*ones(20,2);
B = 2*ones(20,2);
C = 3*ones(20,2);
D = zeros(20,2,3); % Preallocate the D Matrix
D(:,:,1) = A;
D(:,:,2) = B;
D(:,:,3) = C;
D(1,1,1) % prints 1
D(1,1,2) % prints 2
D(1,1,3) % prints 3
As a class assignment, I am to write a C program to generate all Pythagorean triples lower than a given value 't'. Here's my code, which first generates a primitive triplet (a, b, c) using Euclid's Formula, and prints all triplets of the form (ka, kb, kc) for 1 < kc < t.
for (i = 2; i < (sqrt(t) + 1); i++)
    for (j = 1; j < i; j++)
        if ((gcd(i,j) == 1) && ((i-j) % 2) && ((i*i + j*j) < t))
        {
            k = 0;
            a = i * i - j * j;
            b = 2 * i * j;
            c = i * i + j * j;
            while ((++k) * c < t)
                printf("(%d, %d, %d)\n", k*a, k*b, k*c);
        }
Most other algorithms that I came across use nested loops to check sum of squares, and are significantly slower than this as t grows. Is it possible to deduce a proof that it is indeed faster?
Algorithm complexity is the general method to analyze algorithmic performance. In particular, big O is commonly used to compare algorithms based on the worst case situation of each one of them.
In your case you have 4 loops:
The for that iterates through i
The for that iterates through j
The loop inside gcd
The while loop
In the worst case each of these loops performs sqrt(t) iterations. A big O complexity would be:
O(for_i) * O(for_j) * (O(gcd) + O(while))
=
O(sqrt(t)) * O(sqrt(t)) * (O(sqrt(t)) + O(sqrt(t)))
=
O(t*sqrt(t))
For the other algorithms that are slower than your method. You can apply a same reasoning to find their big O then show that this big O is greater than yours. For example the naive algorithm that checks all sums of squares will have 2 nested loops; each has at most t iterations and therefore the big O is O(t*t) > O(t*sqrt(t)).
As an alternative to Euclid's algorithm, if (a, b, c) is a primitive pythagorean triple, so are (a-2b+2c, 2a-b+2c, 2a-2b+3c), (a+2b+2c, 2a+b+2c, 2a+2b+3c) and (-a+2b+2c, -2a+b+2c, -2a+2b+3c). Here's the algorithm in Python (because I just happened to have the algorithm in Python, and I'm too lazy to rewrite it in C, and anyway, it's your homework):
def pyth(n):
    def p(a, b, c):
        if n < a + b + c: return []
        return ([[a, b, c]] if a < b else [[b, a, c]]) \
            + p(a-b-b+c+c, a+a-b+c+c, a+a-b-b+c+c+c) \
            + p(a+b+b+c+c, a+a+b+c+c, a+a+b+b+c+c+c) \
            + p(c+c+b+b-a, c+c+b-a-a, c+c+c+b+b-a-a)
    return p(3, 4, 5)
Then it is easy to multiply each primitive triangle by successive constants until you reach the limit. I'm not sure if this is faster than Euclid's algorithm, but I'm hopeful that it is because it has no gcd calculations.
I saw this question in a programming interview book; here I'm simplifying the question.
Assume you have an array A of length n, and you have a permutation array P of length n as well. Your method will return an array where elements of A will appear in the order with indices specified in P.
Quick example: Your method takes A = [a, b, c, d, e] and P = [4, 3, 2, 0, 1]. Then it will return [e, d, c, a, b]. You are allowed to use only constant space (i.e., you can't allocate another array, which takes O(n) space).
Ideas?
There is a trivial O(n^2) algorithm, but you can do this in O(n). E.g.:
A = [a, b, c, d, e]
P = [4, 3, 2, 0, 1]
We can swap each element in A with the right element required by P; after each swap, there is one more element in the right position. We do this in a circular fashion for each of the positions (the swapped elements are marked with ^s):
[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap 0 (where a is) with 4
 ^           ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap 4 (where a is) with 1
    ^        ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap 1 (where a is) with 3
    ^     ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), finish step
After one cycle, we find the next element in the array that is not in its right position, and do this again. So in the end you will get the result you want, and since each position is touched a constant number of times (for each position, at most one swap is performed), it is O(n) time.
You can store the information about which elements are already in the right place by:
set the corresponding entry in P to -1, which is unrecoverable: after the operations above, P will become [-1, -1, 2, -1, -1], which denotes that only the second one might be not in the right position, and a further step will make sure it is in the right position and terminates the algorithm;
set the corresponding entry in P to -n - 1: P becomes [-5, -4, 2, -1, -2], which can be recovered in O(n) trivially.
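A Python sketch of the second marking scheme (names and details are mine; it uses a slightly different but equally recoverable encoding, subtracting n instead of the -P[i] - 1 flip):

```python
def apply_permutation(A, P):
    # afterwards A[i] == original A[P[i]]; P is restored before returning
    n = len(A)
    for i in range(n):
        if P[i] < 0:
            continue              # this position was handled in an earlier cycle
        j, elem = i, A[i]
        while True:
            src = P[j]
            P[j] = src - n        # mark as visited (recoverable: just add n back)
            if src == i:          # the cycle is closed
                A[j] = elem
                break
            A[j] = A[src]
            j = src
    for i in range(n):
        P[i] += n                 # recover the permutation array in O(n)
    return A

A, P = list("abcde"), [4, 3, 2, 0, 1]
print(apply_permutation(A, P))  # ['e', 'd', 'c', 'a', 'b']
print(P)                        # [4, 3, 2, 0, 1]
```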
Yet another unnecessary answer! This one preserves the permutation array P explicitly, which was necessary for my situation, but sacrifices in cost. Also this does not require tracking the correctly placed elements. I understand that a previous answer provides the O(N) solution, so I guess this one is just for amusement!
We get best case complexity O(N), worst case O(N^2), and average case O(NlogN). For large arrays (N~10000 or greater), the average case is essentially O(N).
Here is the core algorithm in Java (I mean pseudo-code *cough cough*)
int ind = 0;
float temp = 0;

for (int i = 0; i < (n-1); i++) {
    // get next index
    ind = P[i];
    while (ind < i)
        ind = P[ind];
    // swap elements in array
    temp = A[i];
    A[i] = A[ind];
    A[ind] = temp;
}
Here is an example of the algorithm running (similar to previous answers):
let A = [a, b, c, d, e]
and P = [2, 4, 3, 0, 1]
then expected = [c, e, d, a, b]
i=0:  [a, b, c, d, e] // (ind=P[0]=2)>=0 no while loop, swap A[0]<->A[2]
       ^     ^
i=1:  [c, b, a, d, e] // (ind=P[1]=4)>=1 no while loop, swap A[1]<->A[4]
          ^        ^
i=2:  [c, e, a, d, b] // (ind=P[2]=3)>=2 no while loop, swap A[2]<->A[3]
             ^  ^
i=3a: [c, e, d, a, b] // (ind=P[3]=0)<3 uh-oh! enter while loop...
       ^
i=3b: [c, e, d, a, b] // loop iteration: ind<-P[0]. now have (ind=2)<3
       ?     ^
i=3c: [c, e, d, a, b] // loop iteration: ind<-P[2]. now have (ind=3)>=3
             ?  ^
i=3d: [c, e, d, a, b] // good index found. Swap A[3]<->A[3]
                ^
done.
This algorithm can bounce around in that while loop for any indices j<i, up to at most i times during the ith iteration. In the worst case (I think!) each iteration of the outer for loop would result in i extra assignments from the while loop, so we'd have an arithmetic series going on, which would add an N^2 term to the complexity! Running this for a range of N and averaging the number of 'extra' assignments needed by the while loop (averaged over many permutations for each N), though, strongly suggests to me that the average case is O(NlogN).
Thanks!
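For experimentation, here is a direct Python transcription of the same loop (mine, not the author's); it can be checked exhaustively for small N:

```python
from itertools import permutations

def permute(A, P):
    # afterwards A[i] == original A[P[i]]; P is read-only here
    for i in range(len(A) - 1):
        ind = P[i]
        while ind < i:        # trace back positions that were already swapped
            ind = P[ind]
        A[i], A[ind] = A[ind], A[i]
    return A

assert permute(list("abcde"), [2, 4, 3, 0, 1]) == list("cedab")

# exhaustive check for N = 5
for p in permutations(range(5)):
    assert permute(list("abcde"), p) == ["abcde"[k] for k in p]
```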
The simplest case is when a single swap moves an element to its destination index. For example:
array = abcd
perm  = 1032. You just need two direct swaps: an ab swap and a cd swap.
For other cases, we need to keep swapping until an element reaches its final destination. For example, with abcd and 3021: starting with the first element, we swap a and d. We check whether a's destination is 0 by looking at perm[perm[0]]. It is not, so we swap a with the element at array[perm[perm[0]]], which is b. Again we check whether a has reached its destination at perm[perm[perm[0]]], and it has, so we stop.
We repeat this for each array index.
Every item is moved into place exactly once, so the element moves are O(N) with O(1) storage (finding each cycle's smallest index below can re-trace indices, but performs no extra element moves).
def permute(array, perm):
    # afterwards array[i] == original array[perm[i]]
    for i in range(len(array)):
        # only rotate each cycle once, starting from its smallest index
        p = perm[i]
        while p > i:
            p = perm[p]
        if p < i:
            continue
        # i is the smallest index in its cycle: rotate it
        elem = array[i]
        j = i
        while perm[j] != i:
            array[j] = array[perm[j]]
            j = perm[j]
        array[j] = elem
    return array
#RinRisson has given the only completely correct answer so far! Every other answer has been something that required extra storage — O(n) stack space, or assuming that the permutation P was conveniently stored adjacent to O(n) unused-but-mutable sign bits, or whatever.
Here's RinRisson's correct answer written out in C++. This passes every test I have thrown at it, including an exhaustive test of every possible permutation of length 0 through 11.
Notice that you don't even need the permutation to be materialized; we can treat it as a completely black-box function OldIndex -> NewIndex:
template<class RandomIt, class F>
void permute(RandomIt first, RandomIt last, const F& p)
{
    using IndexType = std::decay_t<decltype(p(0))>;
    IndexType n = last - first;
    for (IndexType i = 0; i + 1 < n; ++i) {
        IndexType ind = p(i);
        while (ind < i) {
            ind = p(ind);
        }
        using std::swap;
        swap(*(first + i), *(first + ind));
    }
}
Or slap a more STL-ish interface on top:
template<class RandomIt, class ForwardIt>
void permute(RandomIt first, RandomIt last, ForwardIt pfirst, ForwardIt plast)
{
    assert(std::distance(first, last) == std::distance(pfirst, plast));
    permute(first, last, [&](auto i) { return *std::next(pfirst, i); });
}
You can successively put the desired element to the front of the array, while working with the remaining array of size (n-1) in the next iteration step.
The permutation array needs to be adjusted accordingly to reflect the decreasing size of the array. Namely, if the element you placed at the front was found at position "X", you need to decrease by one all the indexes greater than X in the permutation table.
In the case of your example:
array                permutation -> adjusted permutation
A  = {[a b c d e]}   [4 3 2 0 1]
A1 = { e [a b c d]}  [3 2 0 1] -> [3 2 0 1] (decrease all indexes > 4)
A2 = { e d [a b c]}  [2 0 1] -> [2 0 1] (decrease all indexes > 3)
A3 = { e d c [a b]}  [0 1] -> [0 1] (decrease all indexes > 2)
A4 = { e d c a [b]}  [1] -> [0] (decrease all indexes > 0)
Another example:
A0 = {[a b c d e]}   [0 2 4 3 1]
A1 = { a [b c d e]}  [2 4 3 1] -> [1 3 2 0] (decrease all indexes > 0)
A2 = { a c [b d e]}  [3 2 0] -> [2 1 0] (decrease all indexes > 1)
A3 = { a c e [b d]}  [1 0] -> [1 0] (decrease all indexes > 2)
A4 = { a c e d [b]}  [0] -> [0] (decrease all indexes > 1)
The algorithm, though not the fastest, avoids extra memory allocation while still keeping track of the initial order of elements.
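As an illustration of this index-adjustment rule, here is a quick Python sketch (not an in-place implementation — it uses extra lists freely; names are mine):

```python
def permute_by_selection(A, P):
    A, P = list(A), list(P)
    out = []
    while P:
        x = P.pop(0)                             # position of the next front element
        out.append(A.pop(x))                     # move it to the front
        P = [q - 1 if q > x else q for q in P]   # renumber the remaining indexes
    return out

print(permute_by_selection("abcde", [4, 3, 2, 0, 1]))  # ['e', 'd', 'c', 'a', 'b']
print(permute_by_selection("abcde", [0, 2, 4, 3, 1]))  # ['a', 'c', 'e', 'd', 'b']
```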
Here is a clearer version, which takes a swapElements function that accepts indices, e.g., std::swap(Item[cycle], Item[P[cycle]]).
Essentially it runs through all elements and follows their cycles if they haven't been visited yet. Instead of the second check !visited[P[cycle]], we could also compare with the first element of the cycle, as has been done elsewhere above.
std::vector<bool> visited(n, false);
for (int i = 0; i < n; i++) {
    int cycle = i;
    while (!visited[cycle] && !visited[P[cycle]]) {
        swapElements(cycle, P[cycle]);
        visited[cycle] = true;
        cycle = P[cycle];
    }
}
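Transcribed to Python for a quick exhaustive check (swapElements becomes a tuple swap; the rest is line-for-line):

```python
from itertools import permutations

def apply_cycles(A, P):
    n = len(A)
    visited = [False] * n
    for i in range(n):
        cycle = i
        while not visited[cycle] and not visited[P[cycle]]:
            A[cycle], A[P[cycle]] = A[P[cycle]], A[cycle]
            visited[cycle] = True
            cycle = P[cycle]
    return A

# exhaustive check: afterwards A[i] == original A[P[i]]
for p in permutations(range(5)):
    assert apply_cycles(list("abcde"), p) == ["abcde"[k] for k in p]
```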
Just a simple example C/C++ code addition to Ziyao Wei's answer. Code is not allowed in comments, so as an answer, sorry:
for (int i = 0; i < count; ++i)
{
    // Skip to the next non-processed item
    if (destinations[i] < 0)
        continue;

    int currentPosition = i;

    // destinations[X] = Y means "an item on position Y should be at position X".
    // So we should move an item that is now at position X somewhere
    // else - swap it with item on position Y. Then we have a right
    // item on position X, but the original X-item now on position Y,
    // maybe should be occupied by someone else (an item Z). So we
    // check destinations[Y] = Z and move the X-item further until we got
    // destinations[?] = X which mean that on position ? should be an item
    // from position X - which is exactly the X-item we've been kicking
    // around all this time. Loop closed.
    //
    // Each permutation has one or more such loops, they obviously
    // don't intersect, so we may mark each processed position as such
    // and once the loop is over go further down by an array from
    // position X searching for a non-marked item to start a new loop.
    while (destinations[currentPosition] != i)
    {
        const int target = destinations[currentPosition];
        std::swap(items[currentPosition], items[target]);
        destinations[currentPosition] = -1 - target;
        currentPosition = target;
    }
    // Mark last current position as swapped before moving on
    destinations[currentPosition] = -1 - destinations[currentPosition];
}

for (int i = 0; i < count; ++i)
    destinations[i] = -1 - destinations[i];
(for C - replace std::swap with something else)
Trace back what we have swapped by checking the index.
Java, O(N) swaps, O(1) space:
static void swap(char[] arr, int x, int y) {
    char tmp = arr[x];
    arr[x] = arr[y];
    arr[y] = tmp;
}

public static void main(String[] args) {
    int[] intArray = new int[]{4,2,3,0,1};
    char[] charArray = new char[]{'A','B','C','D','E'};
    for (int i = 0; i < intArray.length; i++) {
        int index_to_swap = intArray[i];
        // Check whether this index has already been swapped before
        while (index_to_swap < i) {
            // trace back the index
            index_to_swap = intArray[index_to_swap];
        }
        swap(charArray, index_to_swap, i);
    }
}
I agree with many solutions here, but below is a very short code snippet that permutes along the permutation cycle containing index 0:
def _swap(a, i, j):
    a[i], a[j] = a[j], a[i]

def apply_permutation(a, p):
    idx = 0
    while p[idx] != 0:
        _swap(a, idx, p[idx])
        idx = p[idx]
So the code snippet below
a = list(range(1, 5))
p = [1, 3, 2, 0]
apply_permutation(a, p)
print(a)
Outputs [2, 4, 3, 1]
Given two consecutive arrays, A and B. They look something like
int AandB[] = {a1,a2,...,am,b1,b2,...,bn};
You need to write a program that would switch the order of arrays A and B in the memory, so that B would appear before A. In our example, AandB should become
int AandB[] = {b1,b2,...,bn,a1,...,am};
What's the most efficient way to do that?
Three array reverses:
(a1 a2 a3 a4 a5 b1 b2 b3)
b3 b2 b1 a5 a4 a3 a2 a1
(b3 b2 b1)a5 a4 a3 a2 a1
b1 b2 b3 a5 a4 a3 a2 a1
b1 b2 b3(a5 a4 a3 a2 a1)
b1 b2 b3 a1 a2 a3 a4 a5
Expressed using a "rev" function that takes a start index and a one-past-the-end index, with m the length of A and n the length of B:
rev(AandB, 0, m+n)
rev(AandB, 0, n)
rev(AandB, n, m+n)
For rev (omitting types, etc. for clarity):
rev(x, i, j) {
    j--; // incoming j is one past the end of the subarray we're reversing
    while (i < j) {
        tmp = x[i];
        x[i] = x[j];
        x[j] = tmp;
        i++;
        j--;
    }
}
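Putting the pieces together as a Python sketch (here m = len(A), n = len(B), and rev reverses the half-open range [i, j), as in the C version):

```python
def rev(x, i, j):
    j -= 1  # incoming j is one past the end; make it the last index
    while i < j:
        x[i], x[j] = x[j], x[i]
        i += 1
        j -= 1

def swap_blocks(arr, m):
    # arr = A + B with len(A) == m; afterwards arr == B + A
    n = len(arr) - m
    rev(arr, 0, m + n)  # reverse the whole array
    rev(arr, 0, n)      # un-reverse B, now sitting at the front
    rev(arr, n, m + n)  # un-reverse A, now sitting at the back
    return arr

print(swap_blocks([1, 2, 3, 4, 5, 'x', 'y', 'z'], 5))
# ['x', 'y', 'z', 1, 2, 3, 4, 5]
```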
My answer:
First, I'm assuming wlog that m<n.
Since every permutation can be decomposed into disjoint cycles, so can the permutation which takes a1,...,am,b1,...,bn to b1,...,bn,a1,...,am. Given an index i, it is easy to calculate p(i): if i<=m, then p(i)=n+i; if i>m, then p(i)=i-m.
We can start with AandB[i] and move its value to p(i)=j, then take the value in AandB[j] and move it to p(j), and so on. Since the permutation decomposes into disjoint cycles, we will eventually end up back at i.
We only need to keep track of which elements we have already moved. It is possible to prove that in our case no cycle of the permutation contains two consecutive elements of A, so I think it is enough to keep track of how many elements of A we have ordered.
Another simple option, which is not as efficient, is to note that
given {a1,...,am,b1,...,bn} (with m<n), it is possible to swap a1..am with the last m elements b(n-m+1)..bn, getting {b(n-m+1)..bn, b1..b(n-m), a1..am}. The a's are now in their final position, and by recursion you can solve the same problem for the first n elements of the array. But this is probably not as efficient.
There are some more details which I omitted, but anyhow the interviewer told me that it's not the way to go, and there's some very clever solution that is also very simple.
The transformation you want to do is essentially a circular shift by n (or by m, depending on the direction of the shift).
E.g., we have 1 2 3 4 5 6 7 a b c (I use letters and numbers to separate two arrays)
During this transformation, 1 will move to the position of 4, 4 will move to 7, 7 to c, c to 3, 3 to 6, etc. Eventually, we'll return to position 1, from which we started.
So, moving one number at a time, we complete it.
The only trick is that sometimes we'll return to 1 before completing the whole transformation. Like in the case 1 2 a b c d, the positions will be 1 -> a -> c -> 1. In this case, we'll need to start from 2 and repeat the operation.
You can notice that the number of repetitions we need is the greatest common divisor of n and m.
So, the code could look like
int repetitions = GCD(n, m);
int size = n + m;
for (int i = 0; i < repetitions; ++i) {
    int current_number = a[i];
    int j = i;
    do {
        j = (j + n) % size;
        int tmp = current_number;
        current_number = a[j];
        a[j] = tmp;
    } while (j != i);
}
The greatest common divisor can be easily computed in O(log n) with the well-known recursive formula.
edit
It does work, I tried in Java. I only changed data type to string for ease of representation.
String[] a = {"1", "2", "3", "4", "5", "6", "a", "b", "c"};
int n = 3;
int m = 6;
// code from above...
System.out.println(Arrays.toString(a));
And Euclid's formula:
int GCD(int a, int b) {
    if (a == 0) {
        return b;
    }
    return GCD(b % a, a);
}
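The same juggling loop as a Python sketch for quick verification (math.gcd replaces the hand-written GCD; note that gcd(n, m) == gcd(n, n + m), so gcd(n, size) gives the same repetition count):

```python
from math import gcd

def rotate(a, n):
    # circular shift: the element at index j ends up at index (j + n) % len(a)
    size = len(a)
    for i in range(gcd(n, size)):
        current = a[i]
        j = i
        while True:
            j = (j + n) % size
            a[j], current = current, a[j]
            if j == i:
                break
    return a

print(rotate([1, 2, 3, 4, 5, 6, 'a', 'b', 'c'], 3))
# ['a', 'b', 'c', 1, 2, 3, 4, 5, 6]
```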
Well, thinking while typing here...
I'm assuming by "in memory" you can't cheat by creating one or more new arrays, even as temporaries. I will also assume that you can have a single temporary variable (otherwise swapping contents would get really tricky).
It looks like your two sub-arrays can be different sizes, so you can't just swap a1 with b1 and a2 with b2, etc.
So you need to figure out where the "a" array elements will start first. You do that by finding "n". Then you'd have to repeatedly save the first remaining "a" element, and put the first remaining "b" element in its place.
Now here's where it gets tricky. You need to get the saved "a" element into its rightful spot, but that spot may contain an unswapped element. The easiest thing to do would probably be to just shift up all the remaining elements by one, and put your saved "a" at the end. If you do that repeatedly, you'll end up with everything in the right place. That's a lot of shifting, though, if the arrays are large.
I believe a slightly more sophisticated algorithm could do the shifting only for the elements in the delta (the first "q" elements, where "q" is the difference between your array sizes) and only while working in the delta area. After that it would just be simple swaps.
In PHP, you can use array_splice() first to split the array into the two parts, and then array_merge() to join them in the opposite order.