I was asked in an interview to insert an element at a given index in the most optimized way.
I have already tried this:
/* Make room for new array element by shifting to right */
for (i = size; i >= pos; i--)
{
    arr[i] = arr[i-1];
}
arr[pos-1] = num; /* Insert new element at given position and increment size */
size++;
Is there any better way to do this?
You can do the following optimization:
- put an iterator at the last element's position and iterate from right to left until you reach pos, maintaining a variable last that keeps the most recently encountered element
- on every iteration, check whether the current element is equal to last; if yes, skip it; if no, copy the current element to current_index + 1 and assign the current element to last
- finally, assign the new element to pos
As we can see, this approach is based on the assumption that a comparison is cheaper than a shift (i.e. it is better to do a comparison on every iteration and shift an element only from time to time, instead of shifting ALL elements from pos to the end). I'm not quite sure to what extent this assumption is true: moving an element requires two assembly operations involving memory (from a memory address to a register and back to the new memory address), versus one operation involving memory (from a memory address to a register) plus one arithmetic operation for the comparison.
The worst case is when there are no equal elements adjacent to each other - in that case we will do both comparison and shift on every iteration.
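Here is a minimal sketch of that idea in C. It assumes a 0-based pos with 0 <= pos < size, size > 0, and that the array has room for one more element; the function name and the pointer-to-size parameter are illustration choices, not from the question.

/* last always holds the current value of arr[i+1], so a write is skipped
   whenever it would only store the value that is already there */
void insert_skip_equal(int arr[], int *size, int pos, int num)
{
    int i = *size - 1;
    int last = arr[i];
    arr[*size] = last;                  /* the last element is always shifted */

    for (i = *size - 2; i >= pos; i--)
    {
        if (arr[i] != last)             /* shifting would change arr[i+1], so do it */
        {
            arr[i + 1] = arr[i];
            last = arr[i];
        }                               /* otherwise arr[i+1] already equals arr[i]: skip */
    }
    arr[pos] = num;
    (*size)++;
}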
I have a general question in programming.
Suppose I have an array, and I need to find the index K that divides the array into two parts L, R so that the value
|max(L) - max(R)| is maximal.
max(L) is the highest number in the L part.
K points to the first member in R.
This seems to be a problem that reduces to only 2 viable candidates for a solution: either K splits off the first value from the rest, or the last value from the rest, giving you a small part of just one value, and a large part with the remaining values, including the maximum value.
Suppose the maximum value in the array can be found at index M, then one of the two parts will have that value and it will be Max(Part). The other part should have a maximum value that is as small as possible. Consequently that part should be reduced to just one value: adding one more value to that part could never decrease its maximum value.
If the overall maximum value is at one of the ends of the array, then there is no choice, and the small part will be chopped off the array at the other end of it.
When the overall maximum value is not at an end of the array, there are two possibilities: choose the one where the chopped off value will be the lowest. In other words, K will be either 1 or n-1 (in zero-based indexing), and this can be determined in constant time, i.e. O(1).
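A minimal C sketch of that constant-time choice, folding the cases above into a single comparison of the two end values (zero-based K as described, assumes n >= 2; the function name is mine):

/* Return K, the first index of R: chop off whichever end value is smaller */
int choose_split(const int arr[], int n)
{
    return (arr[0] <= arr[n - 1]) ? 1 : n - 1;
}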
Actually, we can solve this question in constant time.
1. Since the list must be divided in two, either list A or list B will contain the leftmost or the rightmost element.
2. Adding values to a list can only increase its maximum element, so it is never desirable for that list to have a size larger than 1.
3. So all we need to do is look at the head and the tail, take the smaller of the two as A, and make the rest of the list B.
For example, consider 6,7,7,3,2,6,4:
A = [4] (smallest of head/tail), B = [6,7,7,3,2,6]
You can solve it in O(n) with some preparation:
Make two arrays, maxL[] and maxR[] equal in size to the original array
Walk the original array starting from the left, setting maxL[i] to the max value so far
Walk the original array again starting from the right, setting maxR[i] to the max value so far
Now walk both maxL[] and maxR[], looking for the k that maximizes ABS(maxL[k-1] - maxR[k]) (so that element k is counted only in the right part); return k.
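A minimal C sketch of this prefix/suffix-maximum approach (the function name and helper arrays are illustrative; assumes n >= 2):

#include <stdlib.h>

/* maxL[i] = max of arr[0..i], maxR[i] = max of arr[i..n-1].
   For a split K (first index of R) the value is |maxL[K-1] - maxR[K]|. */
int best_split(const int arr[], int n)
{
    int *maxL = malloc(n * sizeof *maxL);
    int *maxR = malloc(n * sizeof *maxR);

    maxL[0] = arr[0];
    for (int i = 1; i < n; i++)              /* left-to-right pass */
        maxL[i] = arr[i] > maxL[i - 1] ? arr[i] : maxL[i - 1];

    maxR[n - 1] = arr[n - 1];
    for (int i = n - 2; i >= 0; i--)         /* right-to-left pass */
        maxR[i] = arr[i] > maxR[i + 1] ? arr[i] : maxR[i + 1];

    int bestK = 1, bestVal = -1;
    for (int k = 1; k < n; k++) {            /* try every split point */
        int val = abs(maxL[k - 1] - maxR[k]);
        if (val > bestVal) { bestVal = val; bestK = k; }
    }

    free(maxL);
    free(maxR);
    return bestK;
}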
I'm a beginner in programming. My question is: how do I count the one-sequences in an input array? For example:
input array = [0,0,1,1,1,1,1,1,0,1,0,1,1,1]
output integer = 3 (count one-sequences)
And how do I find each sequence's first and last index in the input array? For example:
input array = [0,0,1,1,1,1,1,1,0,1,0,1,1,1]
output array = [3-8,10-10,12-14] (first and last place of each one-sequence)
I tried to solve this problem in C with arrays. Thank you!
Your task is a good exercise to familiarize you with the 0-based array indexes used in C, iterating arrays, and adjusting the array indexes to 1-based when the output requires.
Taking the first two together, 0-based arrays in C, and iterating over the elements, you must first determine how many elements are in your array. This is something that gives new C programmers trouble. The reason is that for general arrays (as opposed to null-terminated strings), you must either know the number of elements in the array, or determine the number of elements within the scope where the array was declared.
What does that mean? It means the only time you can use the sizeof operator to determine the size of an array is inside the same scope (i.e. inside the same block of code {...} where the array is declared). If the array is passed to a function, the parameter passing the array is converted (you may see it referred to as decays) to a pointer. When that occurs, the sizeof operator simply returns the size of a pointer (generally 8 bytes on x86_64 and 4 bytes on x86), not the size of the array.
So now you know the first part of your task: (1) declare the array; and (2) save the number of elements to use in iterating over them. The first you can do with int array[] = {0,0,1,1,1,1,1,1,0,1,0,1,1,1}; and the second with sizeof array / sizeof array[0]; (sizeof array alone gives the size in bytes, not the number of elements).
Your next job is to iterate over each element in the array and test whether it is '0' or '1' and respond appropriately. To iterate over each element in the array (as opposed to a string), you will typically use a for loop coupled with an index variable ( 'i' below) that will allow you to access each element of the array. You may have something similar to:
size_t i = 0;
...
for (i = 0; i < sizeof array / sizeof array[0]; i++) {
    ... /* elements accessed as array[i] */
}
(note: you are free to use int as the type for 'i' as well, but for your choice of type you generally want to ask: can 'i' ever be negative here? If not, choosing a type that holds only non-negative numbers will help the compiler warn you if you misuse the variable later in your code)
To build the complete logic, you will need to test for all changes from '0' to '1'; you may have to use nested if ... else ... statements. (You may have to check whether you are dealing with array[0] specifically as part of your test logic.) You have 2 tasks here: (1) determine whether the last element was '0' and the current element is '1', and if so update your sequence_count++; and (2) test whether the current element is '1', and if so store the adjusted index in a second array and update the count or index for the second array so you can keep track of where to store the next adjusted index value. I will let you work on the test logic and will help if you get stuck.
Finally, you need only print out your final sequence_count and then iterate over your second array (where you stored the adjusted index values for each time the array element was '1').
This will get you started. Edit your question and add your current code when you get stuck and people can help further.
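If you do get stuck, here is a minimal sketch of the transition-testing idea described above. The variable names and output format are only illustrative, and it prints each range as it is found (1-based, as in your example) instead of storing the indexes in a second array:

#include <stdio.h>

int main(void)
{
    int array[] = {0,0,1,1,1,1,1,1,0,1,0,1,1,1};
    size_t n = sizeof array / sizeof array[0];     /* element count, not bytes */
    size_t count = 0;

    for (size_t i = 0; i < n; i++) {
        /* a one-sequence starts where a 1 follows a 0 (or begins the array) */
        if (array[i] == 1 && (i == 0 || array[i - 1] == 0)) {
            size_t start = i;
            while (i + 1 < n && array[i + 1] == 1)  /* walk to the end of the run */
                i++;
            count++;
            printf("%zu-%zu\n", start + 1, i + 1);  /* 1-based first and last place */
        }
    }
    printf("count = %zu\n", count);
    return 0;
}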
I have been given an array (of n elements) and I have to find, for each element, the smallest element on its right side which is greater than it (the current element).
For example :
Array = {8,20,9,6,15,31}
Output Array = {9,31,15,15,31,-1}
Is it possible to solve this in O(n)? I thought of traversing the array from the right side (starting from n-2) and building a balanced binary search tree from the remaining elements, since searching it for the element which is immediately greater than the current element would be O(log n).
Hence the time complexity would come out to be O(n*log(n)).
Is there a better approach to this problem?
The problem you present is impossible to solve in O(n) time, since you can reduce sorting to it and thereby achieve sorting in O(n) time.
Say there exists an algorithm which solves the problem in O(n).
Let there be an element a.
The algorithm can also be used to find the smallest element to the left of and larger than a (by reversing the array before running the algorithm).
It can also be used to find the largest element to the right (or left) of and smaller than a (by negating the elements before running the algorithm).
So, after running the algorithm four times (in linear time), you know which elements should be to the right and to the left of each element. In order to construct the sorted array in linear time, you'd need to keep the indices of the elements instead of the values. You first find the smallest element by following your "larger-than pointers" in linear time, and then make another pass in the other direction to actually build the array.
Others have proved that it is impossible in general to solve in O(n).
However, it is possible to do it in O(m), where m is the value of your largest element.
This means that in certain cases (e.g. if your input array is known to be a permutation of the integers 1 up to n) it is possible to do it in O(n).
The code below shows the approach, built upon a standard method for computing the next greater element. (There is a good explanation of this method on GeeksforGeeks.)
def next_greater_element(A):
    """Return an array of indices to the next strictly greater element, -1 if none exists"""
    i = 0
    NGE = [-1] * len(A)
    stack = []
    while i < len(A) - 1:
        stack.append(i)
        while stack and A[stack[-1]] < A[i + 1]:
            x = stack.pop()
            NGE[x] = i + 1
        i += 1
    return NGE

def smallest_greater_element(A):
    """Return an array of the smallest element on the right side of each element"""
    top = max(A) + 1
    M = [-1] * top                 # M will contain the index of each element, sorted by rank
    for i, a in enumerate(A):
        M[a] = i
    N = next_greater_element(M)    # N contains an index to the next element with higher value (-1 if none)
    return [N[a] for a in A]

A = [8, 20, 9, 6, 15, 31]
print(smallest_greater_element(A))
The idea is to find the next element in size order with greater index. This next element will therefore be the smallest one appearing to the right.
This cannot be done in O(n), since we can reduce the Element Distinctness Problem (which is known to require Omega(n log n) time in the comparison-based model) to it.
First, let's do a little expansion to the problem, that does not influence its hardness:
I have been given an array (of n elements) and I have to find the
smallest element on the right side of each element which is greater than or equal
to it (the current element).
The addition is that we allow the element to be equal to it (and to the right), and not only strictly greater than it (1).
Now, given an instance arr of element distinctness, run the algorithm for this problem and check whether there is any index i such that arr[i] == res[i]; if there isn't, answer "all distinct", otherwise "not all distinct".
Since Element Distinctness requires Omega(n log n) comparisons in the comparison-based model, so does this problem.
(1)
One possible justification for why adding equality does not make the problem harder: assuming the elements are integers, we can just add i/(n+1) to each element arr[i]. Then for any two elements, if arr[i] < arr[j] we still have arr[i] + i/(n+1) < arr[j] + j/(n+1); and if arr[i] = arr[j] with i < j, then arr[i] + i/(n+1) < arr[j] + j/(n+1). So the same algorithm can solve the problem with equalities as well.
You are given an unsorted array of n integers, and you would like to find if there are any duplicates in the array (i.e. any integer appearing more than once).
Describe an algorithm (implemented with two nested loops) to do this.
The question that I am stuck at is:
How can you limit the input data to achieve a better Big O complexity? Describe an algorithm for handling this limited data to find if there are any duplicates. What is the Big O complexity?
Your help will be greatly appreciated. This is not related to my coursework, assignments, or the like. It's from a previous year's exam paper and I am doing some self-study, but I seem to be stuck on this question. The only possible solution that I could come up with is:
If we limit the data, and use nested loops to perform operations to find if there are duplicates. The complexity would be O(n) simply because the amount of time the operations take to perform is proportional to the data size.
If my answer makes no sense, then please ignore it and if you could, then please suggest possible solutions/ working out to this answer.
If someone could help me solve this, I would be grateful, as I have attempted countless possible solutions, all of which seem not to be correct.
Edited again. Another possible solution (if it works!):
We could implement a loop to sort the array (from lowest integer to highest), so that any duplicates end up right next to each other, making them easier and faster to identify.
The big O complexity would still be O(n^2).
Since this is a linear pass, it would simply use the first loop and iterate n-1 times, taking the index in the array (in the first iteration it could be, for instance, 1) and storing it in a variable named 'current'.
The loop will update the current variable by +1 on each iteration. Within that loop, we write another loop to compare the current number to the next number; if it equals the next number, we print it with a printf statement, otherwise we move back to the outer loop, update the current variable by +1 (the next value in the array), and update the next variable to hold the value of the number after the value in current.
You can do it linearly (O(n)) for any input if you use hash tables (which have constant look-up time).
However, this is not what you are being asked about.
By limiting the possible values in the array, you can achieve linear performance.
E.g., if your integers have range 1..L, you can allocate a bit array of length L, initialize it to 0, and iterate over your input array, checking and flipping the appropriate bit for each input.
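A minimal C sketch of that bit-array idea, assuming the values are known to lie in 1..L (the function name and bit layout are my own choices):

#include <stdbool.h>
#include <stdlib.h>

bool has_duplicate(const int *a, size_t n, int L)
{
    unsigned char *bits = calloc(L / 8 + 1, 1);   /* one bit per possible value, all 0 */
    bool dup = false;

    for (size_t i = 0; i < n && !dup; i++) {
        int v = a[i] - 1;                         /* map 1..L onto 0..L-1 */
        if (bits[v / 8] & (1u << (v % 8)))        /* bit already set: value seen before */
            dup = true;
        else
            bits[v / 8] |= (1u << (v % 8));
    }
    free(bits);
    return dup;
}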
A variant of Bucket Sort will do. This will give you a complexity of O(n), where 'n' is the number of input elements.
But there is one restriction: the max value. You should know the maximum value your integer array can take. Let's call it m.
The idea is to create a bool array of size m + 1 (all initialized to false). Then iterate over your array. As you visit each element, set bucket[element] to true. If it is already true then you've encountered a duplicate.
A Java version:
// alternatively, you can iterate over the array to find maxVal, which again is O(n)
// assumes all elements are in the range 0..maxVal
public boolean findDup(int[] arr, int maxVal)
{
    // java by default initializes all the values to false;
    // maxVal + 1 slots so that an element equal to maxVal stays in bounds
    boolean bucket[] = new boolean[maxVal + 1];
    for (int elem : arr)
    {
        if (bucket[elem])
        {
            return true; // a duplicate found
        }
        bucket[elem] = true;
    }
    return false;
}
But the constraint here is the space. You need O(maxVal) space.
Nested loops get you O(N*M) or O(N*log(M)); for O(N) you cannot use nested loops!
I would do it with a histogram instead:
DWORD in[N]={ ... };                  // input data ... values are from the range [0, M)
DWORD his[M];                         // histogram of in[]
int i, j;

// compute histogram: O(M) to clear + O(N) to count
for (i = 0; i < M; i++) his[i] = 0;   // this can also be done by memset ...
for (i = 0; i < N; i++) his[in[i]]++; // if the range of values does not start at 0 then shift it ...

// remove duplicates O(N)
for (i = 0, j = 0; i < N; i++)
{
    his[in[i]]--;                     // count down duplicates
    in[j] = in[i];                    // copy item
    if (his[in[i]] <= 0) j++;         // keep the copy only for the last occurrence of the value
}
// now j holds the new in[] array size
[Notes]
If the value range is too big, with sparse areas, then you need to convert his[]
to a dynamic list with two values per item:
one is the value from in[] and the second is its occurrence count.
But then you need a nested loop -> O(N*M),
or with binary search -> O(N*log(M)).
I have an array of pointers (this is algorithmic, so don't go into language specifics). Most of the time, this array points to locations outside of the array, but it degrades to a point where every pointer in the array points to another pointer in the array. Eventually, these pointers form an infinite loop.
So on the assumption that the entire array consists of pointers to another location in the array and you start at the beginning, how could you find the length of the loop with the highest efficiency in both time and space? I believe the best time efficiency would be O(n), since you have to loop over the array, and the best space efficiency would be O(1), though I have no idea how that would be achieved.
Index: 0 1 2 3 4 5 6
Value: *3 *2 *5 *4 *1 *1 *D
D is data that was being pointed to before the loop began. In this example, the cycle is 1, 2, 5 and it repeats infinitely, but indices 0, 3, and 4 are not part of the cycle.
This is an instance of the cycle-detection problem. An elegant O(n) time O(1) space solution was discovered by Robert W. Floyd in 1960; it's commonly known as the "Tortoise and Hare" algorithm because it consists of traversing the sequence with two pointers, one moving twice as fast as the other.
The idea is simple: the sequence must end in a loop of some length k. At each iteration the hare moves two steps and the tortoise moves one, so the gap between them grows by one each iteration. Once they are both in the cycle (which will happen once the tortoise arrives), they point at the same element exactly when the gap is a multiple of k, and since the gap grows by one per iteration this must happen within at most k further iterations.
If all you need to know is the length of the cycle, you wait for the hare and the tortoise to reach the same spot; then you step along the cycle, counting steps until you get back to the same spot again. In the worst case, the total number of steps will be the length of the tail plus twice the length of the cycle, which must be less than twice the number of elements.
Note: The second paragraph was edited to possibly make the idea "more obvious", whatever that might mean. A formal proof is easy and so is an implementation, so I provided neither.
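For concreteness, here is a minimal C sketch of the two phases, assuming the array is represented as indices (next[i] is the index that element i points to) and that following next[] from start eventually enters a cycle; the function name is only for illustration:

/* Returns the length of the cycle reachable from start. */
int cycle_length(const int next[], int start)
{
    int tortoise = next[start];
    int hare = next[next[start]];

    /* Phase 1: advance at different speeds until both pointers meet
       somewhere inside the cycle */
    while (tortoise != hare) {
        tortoise = next[tortoise];
        hare = next[next[hare]];
    }

    /* Phase 2: walk once around the cycle, counting steps */
    int length = 1;
    for (int i = next[tortoise]; i != tortoise; i = next[i])
        length++;

    return length;
}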
Make a directed graph of the elements in the array, where one node points to another if the element at the first node points to the element at the second, and for each node keep track of its indegree (the number of pointers pointing to it). While building the graph, if a node reaches indegree == 2, then that node is part of the infinite cycle.
The above fails if the first element is included in the infinite cycle, so before the algorithm starts, add 1 to the indegree of the first element to resolve this.
The array becomes, as you describe it, a graph (more properly a forest) where each vertex has out-degree exactly one. The components of such a graph can only consist of chains that each possibly end in a single loop. That is, each component is shaped either like an O or like a 6. (I am assuming no pointers are null, but this is easy to deal with; you just end up with straight, chain-shaped components with no cycles at all.)
You can trace all these components by "visiting" and keeping track of where you've been with a "visited" hash or flags array.
Here's an algorithm.
Edit: It is just DFS of a forest, simplified for the case of one child per node, which eliminates the need for a stack (or recursion) because backtracking is not needed.
Let A[0..N-1] be the array of pointers.
Let V[0..N-1] be an array of boolean "visited" flags, initially false.
Let C[0..N-1] be an array of integer counts, initially zero.
Let S[0..N-1] be an array of "step counts" for each component trace.
longest = 0 // length of longest cycle
for i in 0..N-1, increment C[j] if A[i] points to A[j]
for each k such that C[k] = 0
// No "in edges", so must be the start of a 6-shaped component
s = 0
while V[k] is false
V[k] = true
S[k] = s
s = s + 1
k ← index of the array location that A[k] points to
end
// Loop found. Length is s - S[k]
longest = max(longest, s - S[k])
end
// Rest of loops must be of the O variety
while there exists V[k] false
Let k be such that V[k] is false.
s = 0
while V[k] is false
V[k] = true
s = s + 1
k ← index of the array location that A[k] points to
end
// Found loop of length s
longest = max(longest, s)
end
Space and execution time are both proportional to size of the input array A. You can get rid of the S array if you're willing to trace 6-shaped components twice.
Addendum: I fully agree that if it's not necessary to find the cycle of maximum size, then the ancient "two pointer" algorithm for finding cycles in a linked list is superior, since it requires only constant space.