Time complexity of one-dimensional and two-dimensional arrays - C

Take, for example, reading input from the user into a one-dimensional array of size n; we know its time complexity is O(n).
for (i = 0; i < n; i = i + 1)
{
    scanf("%d", &a[i]);
}
Now if the array is two-dimensional, the time complexity is O(n^2):
for (i = 0; i < n; i = i + 1)
{
    for (j = 0; j < n; j = j + 1)
    {
        scanf("%d", &a[i][j]);
    }
}
But even the two-dimensional array is stored in the same manner as the one-dimensional array; it is essentially just an increase in size. Why is there a difference?

The difference in the time complexity is because of the number of elements you are storing.
For a linear 1D array, you are storing n elements, and they are stored in a linear way too, so the time complexity is O(n).
For a 2D array, suppose you are storing m elements; in memory they are actually stored linearly as well, but what you work with are the elements divided into rows (and columns). If, for instance, each row and column has size n, then you are reading n*n elements, which is nothing but the m elements again (m == n*n). So the complexity is O(m), which is O(n^2).

It's sort of a perceptual issue: if you have n elements total, it is O(n) either way, but if you measure in terms of a single array dimension, it won't be n^2 unless it is a square matrix (I see that yours is)... so it is better to think of it as O(h*w), or just O(n) in the total number of elements.

Related

Complexity of sorting n/logn sequences of an array size n

Given an array of size N (the array contains whole numbers), I wish to sort the array, but only in runs of length log(n), so that by the end the array will consist of n/log(n) sequences (each of size log(n)) that are sorted.
My idea was to use the algorithm of MergeSort which in worst case of time complexity runs O(nlogn).
But since I am only sorting runs of length log(n), each individual sort should take O(log(n)*log(log(n))), because I am not in fact going through the entire length N.
MergeSort will be performed n/log(n) times in that case.
Is it safe to assume that the overall time complexity of this would be (n/log(n))*O(log(n)*log(log(n))) => O(n*log(log(n)))?
Your calculation is correct: sorting n / log n chunks of the array of size log n can be done in O(n log(log n)).
However, if your entire array is not that big in the first place (say a few thousand elements max), the log n chunks will be quite small, in which case it is actually more efficient to use insertion sort rather than an algorithm like merge sort or Quicksort.

Time and Space Complexity of top k frequent elements in an array

There is a small confusion regarding the time and space complexity for the given problem:
Given an array of size N, return a list of the top K most frequent elements.
Based on the most popular solution:
Use a HashMap of size K with the count of each entry as value.
Build a MaxHeap of size K by traversing the HashMap generated above.
Pop the elements in the MaxHeap into a list and return the list.
K being the number of unique elements in the input.
The space and time complexities are O(K) and O(K*log(K)) respectively.
Now the confusion starts here. We know we are dealing with worst case complexity in the above analysis. So the worst value K can take is N, when all the elements in array are unique.
Hence K <= N. Can O(K) thereby be represented as O(N)?
Thereby, shouldn't the space and time complexity be O(N) and O(N*log(N)) for the above problem?
I know this is a technicality, but it's been bothering me for a while. Please advise.
Yes, you are right: since K <= N, the time complexity of the hashmap part can be written as O(N).
But the heap only holds K elements and costs O(K*log(K)); in the worst case (K = N) this term grows faster than the linear O(N) part, so it dominates, and the final time complexity is O(K*log(K)).

Sorting an array of integers using algorithm with complexity O(n)

I have already read that the best comparison-based sorting algorithms have complexity O(n*log(n)). But I'm asked to sort an array of integers (in C) using a sorting algorithm of complexity O(n), given that all elements of the array are non-negative and less than a constant K. I have no idea how to use this information in the sorting algorithm. Do you have any ideas?
That's a simple one (known as "counting sort" or "histogram sort", a degenerate case of "bucket sort"):
Allocate an array with one slot for each non-negative integer less than k, and zero it. O(k)
Iterate over all elements of the input and count them in our array. O(n)
Iterate over our array and write the elements out in-order. O(n+k)
Thus, O(n+k).
Radix sort gives you O(n*log(k)), not O(n*log(n)), complexity. Since K is a fixed number independent of n, the resulting complexity is O(n * const), i.e. it is linear.
Create a new array of size K and just mark each element at its own position.
Let's say K = 100: create an array of 100 integers and clear it.
If you have the set {55, 2, 7, 34}, you just need to do the following:
array[55] = 1;
array[2] = 1;
array[7] = 1;
array[34] = 1;
Then go over the array from start to end and print the index of every cell that is == 1. (If duplicates are possible, increment counts instead of setting a flag, and print each index as many times as it was counted.)
It depends on the kind of complexity. Average case O(n+k): bucket sort.
Radix sort should be O(m * n), though (m being the length of the key used for sorting).

Algorithm complexity in 2D arrays

Let's suppose that I have an array M of n*m elements, so if I want to print its elements I can do something like:
for i = 1 to m
    for j = 1 to n
        print M[i, j]
    next j
next i
I know that the print instruction is done in constant time, so in this case I would have an algorithm complexity of:
\sum_{i=1}^{m}\sum_{j=1}^{n} c = m \cdot n \cdot c
so I suppose it is of order O(n).
But what happens if the array has the same number of rows and columns? I suppose the complexity is:
\sum_{i=1}^{n}\sum_{j=1}^{n} c = n \cdot n \cdot c
so it is of order O(n^2).
are my assumptions correct?
I'm assuming that m and n are variables and not constants. In that case, the runtime of the algorithm should be O(mn), not O(n), since the runtime is directly proportional to the number of elements in the array. You derived this with a summation, but it might be easier to see by just looking at how much work is done per array element. Given this, you're correct that if m = n, the runtime is quadratic on n.
Hope this helps!

Inserting unknown number of elements into dynamic array in linear time

(This question is inspired by deque::insert() at index?, I was surprised that it wasn't covered in my algorithm lecture and that I also didn't find it mentioned in another question here and even not in Wikipedia :). I think it might be of general interest and I will answer it myself ...)
Dynamic arrays are datastructures that allow addition of elements at the end in amortized constant time O(1) (by doubling the size of the allocated memory each time it needs to grow, see Amortized time of dynamic array for a short analysis).
However, insertion of a single element in the middle of the array takes linear time O(n), since in the worst case (i.e. insertion at first position) all other elements needs to be shifted by one.
If I want to insert k elements at a specific index in the array, the naive approach of performing the insert operation k times would thus lead to a complexity of O(n*k) and, if k = O(n), to a quadratic complexity of O(n²).
If I know k in advance, the solution is quite easy: expand the array if necessary (possibly reallocating space), shift the elements starting at the insertion point by k, and simply copy the new elements in.
But there might be situations, where I do not know the number of elements I want to insert in advance: For example I might get the elements from a stream-like interface, so I only get a flag when the last element is read.
Is there a way to insert multiple (k) elements, where k is not known in advance, into a dynamic array at consecutive positions in linear time?
In fact there is a way and it is quite simple:
First append all k elements at the end of the array. Since appending one element takes O(1) time, this will be done in O(k) time.
Second, rotate the elements into place. Say you want to insert the elements at position pos: then you need to rotate the subarray A[pos..n-1] by k positions to the right (or by n-pos-k positions to the left, which is equivalent). Rotation can be done in linear time using three reverse operations, as explained in Algorithm to rotate an array in linear time. Thus the time needed for the rotation is O(n).
Therefore the total time for the algorithm is O(k)+O(n)=O(n+k). If the number of elements to be inserted is in the order of n (k=O(n)), you'll get O(n+n)=O(2n)=O(n) and thus linear time.
You could simply allocate a new array of length k+n and insert the desired elements linearly.
newArr = new T[k + n];
for (int i = 0; i < k + n; i++)
    newArr[i] = i <= insertionIndex ? oldArr[i]                            // prefix from the old array
              : i <= insertionIndex + k ? toInsert[i - insertionIndex - 1] // the k new elements
              : oldArr[i - k];                                             // remaining suffix
return newArr;
Each iteration takes constant time, and the loop runs k+n times, thus O(k+n) (or O(n), if you like).
