Give a big-O estimate for the number of operations, where an operation is a comparison or a multiplication,
used in this segment of an algorithm (ignoring comparisons used to test the conditions
in the for loops, where a1, a2, ..., an are positive real numbers). Also, the max function finds the maximum value from index i to index j; it does not just compare two values.
m := 0
for i := 1 to n
for j := i + 1 to n
m := max(ai, aj, m)
The problem gives the max function with no description. The function takes three values: 'ai' is the start index, 'aj' is the end index, and 'm' is the variable that stores the maximum so far. I think the function's time complexity is O(n), because A is just an array and we have to traverse that section to find the maximum. We want to know the big-O of this code as well as that of the max function.
First of all, the maximum element of an array can be found without so many passes over the array. All you need is a single pass, after initializing m to a highly negative number.
m := -inf (a highly negative number)
for i := 1 to n
m := max(ai,m)
For your algorithm, the time complexity is O(n^2), because you traverse different sections of the array more than once, not just once as you mentioned.
To be more precise, the number of inner-loop iterations of your algorithm is:
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2
=> O(n^2)
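As a quick illustration, here is that one-pass maximum in Python (a sketch; float('-inf') plays the role of the highly negative starting value):

def array_max(A):
    # Single pass: O(n) comparisons in total.
    m = float('-inf')          # the highly negative starting value
    for a in A:
        m = max(a, m)          # max of two values is one comparison
    return m

print(array_max([3, 1, 4, 1, 5]))   # prints 5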
We are given an array of n integers and a constant value k. Can anyone suggest how to find the maximum possible integer x such that arr[0]/x + arr[1]/x + ... + arr[n-1]/x >= k,
-> where '/' is integer division
-> the sum of all elements of the array is >= k
-> k is a constant (1 <= k <= 10^5)
-> 1 <= n <= 10^5.
e.g. n=5, k=3
arr=[1,1,1,8,8]
answer-> x=4
in something like O(N log N)?
Here is an algorithm that often meets your bound on time efficiency. I assume that your array values are non-negative. The algorithm depends on these facts:
Your objective function arr[0]/x + arr[1]/x + ... + arr[n-1]/x (let's call it f(x)) is a decreasing function of x. In other words, if x increases then f(x) will stay the same or decrease.
f(1) equals the sum of the elements of the array, so f(1) >= k. In other words, at x = 1 the objective function is not below the target value k.
If M is set to the maximum array value, the value of arr[i] // (M + 1) is zero, so f(M + 1) = 0. In other words, at x = M + 1 the objective is below the target value k.
So we have upper and lower bounds on the value of x for a decreasing function. We can therefore do a binary search from 1 to M + 1 for the value of x where
f(x) >= k and f(x + 1) < k
Only one value of x will satisfy that, and a binary search can easily find it. The binary search takes log(M) steps, and each step involves one evaluation of f(x), which takes N operations (one per array element). Thus the overall time efficiency is O(N log(M)). If M (the maximum array value) is of the order of N, that is your desired efficiency. At the limiting values you give for N and the array values, we have M < N^2, so N log(M) < 2 N log(N) and your desired efficiency is still met. If N is small and M is large, your desired efficiency is not met. (Consider an array like [10^9, 10^9 - 1], where N = 2 but M = 10^9, so the binary search could take about 30 steps.) This may or may not meet your needs.
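For concreteness, here is a minimal Python sketch of that binary search; the function name max_x is mine, and it assumes non-negative values with sum(arr) >= k:

def max_x(arr, k):
    def f(x):                            # one evaluation costs O(N)
        return sum(a // x for a in arr)

    lo, hi = 1, max(arr) + 1             # f(lo) >= k, f(hi) = 0 < k
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(mid) >= k:
            lo = mid                     # invariant: f(lo) >= k
        else:
            hi = mid                     # invariant: f(hi) < k
    return lo

print(max_x([1, 1, 1, 8, 8], 3))         # prints 4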
This is a classic problem, but I am curious whether it is possible to do better under these conditions.
Problem: Suppose we have a sorted array of length 4*N; that is, each element is repeated 4 times. Note that N can be any natural number. Also, each element of the array is subject to the constraint 0 < A[i] < 190*N. Are there 4 elements in the array such that A[i] + A[j] + A[k] + A[m] = V, where V can be any positive integer? Note that we must use exactly 4 elements and they can be repeated. It is not necessarily a requirement to find the 4 elements that satisfy the condition; rather, just showing that it can be done for a given array and V is enough.
Ex : A = [1,1,1,1,4,4,4,4,5,5,5,5,11,11,11,11]
V = 22
This is true because, 11 + 5 + 5 + 1 = 22.
My attempt:
Instead of "4sum" I first tried k-sum, but this proved pretty difficult, so I instead went for this variation. The first solution I came up with was a rather naive O(n^2) one. However, given these constraints, I imagine that we can do better. I tried some dynamic programming and divide-and-conquer methods, but they didn't quite get me anywhere. To be specific, I am not sure how to cleverly approach this in a way that lets me "eliminate" portions of the array without having to explicitly check values against all or almost all combinations.
Make a vector S0 of length 256N, where S0[x] = 1 if x appears in A.
Perform a convolution of S0 with itself to produce a new vector S1 of length 512N. S1[x] is nonzero iff x is the sum of 2 numbers in A.
Perform a convolution of S1 with itself to make a new vector S2. S2[x] is nonzero iff x is the sum of 4 numbers in A.
Check S2[V] to get your answer.
Convolution can be performed in O(N log N) time using FFT convolution (http://www.dspguide.com/ch18/2.htm) or similar techniques.
Since at most 4 such convolutions are performed, the total complexity is O(N log N).
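As a rough illustration, here is a Python sketch of this approach using NumPy's FFT; the function name four_sum_exists is mine, and the 0.5 thresholds just absorb floating-point noise from the inverse transform:

import numpy as np

def four_sum_exists(A, V):
    size = max(A) + 1                     # S0 covers values 0..max(A)
    s0 = np.zeros(size)
    for a in A:
        s0[a] = 1.0                       # indicator: a appears in A
    # S1[x] > 0 iff x is the sum of 2 numbers in A
    s1 = np.fft.irfft(np.fft.rfft(s0, 2 * size) ** 2, 2 * size)
    s1 = (s1 > 0.5).astype(float)
    # S2[x] > 0 iff x is the sum of 4 numbers in A
    s2 = np.fft.irfft(np.fft.rfft(s1, 4 * size) ** 2, 4 * size)
    return 0 <= V < len(s2) and s2[V] > 0.5

A = [1, 1, 1, 1, 4, 4, 4, 4, 5, 5, 5, 5, 11, 11, 11, 11]
print(four_sum_exists(A, 22))             # True: 11 + 5 + 5 + 1 = 22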
I cannot understand why the time complexity of this code is O(log n):
double n;
/* ... */
while (n>1) {
n*=0.999;
}
At least it says so in my study materials.
Imagine if the code were as follows:
double n;
/* ... */
while (n>1) {
n*=0.5;
}
It should be intuitive to see how this is O(log n), I hope.
When you multiply by 0.999 instead, it becomes slower by a constant factor, but of course the complexity is still written as O(log n).
You want to calculate how many iterations you need before n becomes equal to (or less than) 1.
If you call the number of iterations k, you want to solve
n * 0.999^k = 1
It goes like this:
n * 0.999^k = 1
0.999^k = 1/n
log(0.999^k) = log(1/n)
k * log(0.999) = -log(n)
k = -log(n)/log(0.999)
k = (-1/log(0.999)) * log(n)
For big-O we only care about "the big picture" so we throw away constants. Here log(0.999) is a negative constant so (-1/log(0.999)) is a positive constant that we can "throw away", i.e. set to 1. So we get:
k ~ log(n)
So the code is O(log n).
From this you can also notice that the value of the constant (i.e. 0.999 in your example) doesn't matter for the big-O calculation. All constant factors greater than 0 and less than 1 will result in O(log n).
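Here is a quick empirical check of that formula in Python (an illustration, not part of the original answer):

import math

for n0 in [10.0, 1e3, 1e6]:
    n, k = n0, 0
    while n > 1:
        n *= 0.999
        k += 1
    predicted = math.ceil(-math.log(n0) / math.log(0.999))
    print(n0, k, predicted)   # measured vs. predicted iteration counts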
A logarithm has two inputs: a base and a number. The result of the logarithm is the power to which you need to raise the base to obtain the given number. Here the base is 0.999 and the scalar is n: the number of steps is the power to which you must raise the base so that the result, multiplied by n, becomes smaller than 1. This corresponds to the definition of the logarithm, with which I started my answer.
EDIT:
Think about it this way: You have n as an input and you search for the first k where
n * .999^k < 1
You search for this k by incrementing it: if the value is l at one step, it becomes l * .999 at the next step. The number of repetitions needed is logarithmic in n, which is why your multiplication loop has logarithmic complexity.
I am attempting to solve the question below. I solved it with O(n^2) time complexity. Is there a way to optimize it further and bring the complexity down to O(n) by iterating over the array just once?
Given an array of n integers and a number S, I need to find the minimum number of consecutive elements whose sum is greater than S. If no such set exists, I will print 0.
Required complexities:
Space complexity: O(1)
Time complexity: O(n)
Example:
Array A = {2,5,4,6,3,9,2,17,1}
S = 17
Output = 2
Explanation:
Possible solutions are:
{2,5,4,6,3}: 2+5+4+6+3 = 20 (>17), 5 numbers
{5,4,6,3,9}: 27 (>17), 5 numbers
{4,6,3,9}: 22 (>17), 4 numbers
{6,3,9,2}: 20 (>17), 4 numbers
{3,9,2,17}: 31 (>17), 4 numbers
{9,2,17}: 28 (>17), 3 numbers
{2,17}: 19 (>17), 2 numbers
So the minimum is 2 numbers; output = 2.
Assuming that all integers are non-negative and S is positive, you can use the following algorithm:
Use two indices, one for where the current sequence starts and another for where it ends. When the sum of that sequence is too small, you extend the sequence by incrementing the second index; if the sum is over S, you keep track of whether it is the best so far and at the same time remove the first value from the sequence by incrementing the first index.
Here is the algorithm in more formal pseudo code:
n = size(A)
best = n + 1
sum = 0
i = 0
for j = 0 to n - 1:
    sum = sum + A[j]
    while sum > S:
        if j - i + 1 < best:
            best = j - i + 1
        sum = sum - A[i]
        i = i + 1
if best > n:
    best = 0
output best
Space complexity is O(1) as there are 4 numerical variables involved (not counting the input array), which represents a fixed amount of memory.
Time complexity is O(n) as the total number of times the statements in the inner loop execute is never more than n (i is incremented each time and will never bypass j).
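Here is a direct Python translation of that pseudocode (the function name is mine):

def min_window_longer_than(A, S):
    n = len(A)
    best = n + 1
    total = 0
    i = 0
    for j in range(n):
        total += A[j]                    # extend the window to the right
        while total > S:                 # shrink from the left while valid
            best = min(best, j - i + 1)
            total -= A[i]
            i += 1
    return 0 if best > n else best

print(min_window_longer_than([2, 5, 4, 6, 3, 9, 2, 17, 1], 17))   # prints 2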
Let's say we are given an array A[] of length N, and we have to answer Q queries, each consisting of two integers L and R. We have to find a number among A[L], ..., A[R] whose frequency in that range is at least (R-L+1)/2. If no such number exists, we have to print "No such number".
I could only think of an O(Q*(R-L)) approach: running a frequency counter to first obtain the most frequent number in the array from L to R, and then counting its frequency.
But more optimization is needed.
Constraints: 1 <= N <= 3*10^5, 1 <= Q <= 10^5, 1 <= L <= R <= N
I know an O((N + Q) * sqrt(N)) solution:
Let's call a number heavy if it occurs at least B times in the array. There are at most N / B heavy numbers in the array.
If the query segment is "short" (R - L + 1 < 2 * B), we can answer it in O(B) time (by simply iterating over all elements of the range).
If the query segment is "long" (R - L + 1 >= 2 * B), a frequent element must occur at least (R - L + 1) / 2 >= B times, so it must be heavy. We can iterate over all heavy numbers and check whether at least one of them fits (to do that, we can precompute prefix sums of the number of occurrences of each heavy element, and then find the number of its occurrences in an [L, R] segment in constant time).
If we set B = C * sqrt(N) for some constant C, this solution runs in O((N + Q) * sqrt(N)) time and uses O(N * sqrt(N)) memory. With a properly chosen C, it may fit into the time and memory limits.
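A rough Python sketch of that solution follows; all names are mine, and the rounding of the (R - L + 1) / 2 threshold may need adjusting to the exact problem statement:

import math
from collections import Counter

def build_heavy(A, C=1.0):
    # Precompute heavy elements and their prefix occurrence counts.
    n = len(A)
    B = max(1, int(C * math.isqrt(n)))            # heaviness threshold
    heavy = [x for x, c in Counter(A).items() if c >= B]
    prefix = {}                                    # O(N * sqrt(N)) memory
    for x in heavy:
        p = [0] * (n + 1)
        for i, a in enumerate(A):
            p[i + 1] = p[i] + (a == x)
        prefix[x] = p
    return B, heavy, prefix

def heavy_query(A, L, R, B, heavy, prefix):
    # 0-based inclusive range [L, R]; returns a frequent element or None.
    length = R - L + 1
    need = (length + 1) // 2
    if length < 2 * B:                             # short query: just scan
        for x, c in Counter(A[L:R + 1]).items():
            if c >= need:
                return x
        return None
    for x in heavy:                                # long query: heavy candidates only
        if prefix[x][R + 1] - prefix[x][L] >= need:
            return x
    return None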
There is also a randomized solution which runs in O(N + Q * log N * k) time.
Let's store a vector of the positions of occurrences of each unique element in the array. Now we can find the number of occurrences of a fixed element in a fixed range in O(log N) time (two binary searches over the vector of occurrences).
For each query, we'll do the following:
Pick a random element from the segment.
Check the number of its occurrences in O(log N) time as described above.
If it is frequent enough, we are done. Otherwise, we pick another random element and do the same.
If a frequent element exists, the probability of not picking it is no more than 1 / 2 for each trial. If we do k trials, the probability of never finding it is at most (1 / 2) ^ k.
With a proper choice of k (so that O(k * log N) per query is fast enough and (1 / 2) ^ k is reasonably small), this solution should pass.
Both solutions are easy to code (the first just needs prefix sums; the second only uses a vector of occurrences and binary search). If I had to code one of them, I'd pick the latter (the former can be more painful to squeeze into the time and memory limits).
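A sketch of the randomized solution, under the same naming and threshold assumptions:

import bisect
import random
from collections import defaultdict

def build_positions(A):
    # For each distinct value, the sorted list of its positions in A.
    pos = defaultdict(list)
    for i, a in enumerate(A):
        pos[a].append(i)                   # appended in increasing order
    return pos

def random_query(A, pos, L, R, k=30):
    # 0-based inclusive range [L, R]; up to k trials, O(log N) each.
    length = R - L + 1
    need = (length + 1) // 2
    for _ in range(k):
        x = A[random.randint(L, R)]        # random element of the range
        p = pos[x]                         # count x in [L, R] with two binary searches
        if bisect.bisect_right(p, R) - bisect.bisect_left(p, L) >= need:
            return x
    return None                            # no frequent element, w.p. >= 1 - (1/2)**k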