Subtracting a number from all elements of an array in constant time

Can I subtract an integral value from all elements of an array in constant time?
For example:
Given array: 1 2 3 4
Expected array: 0 1 2 3
I want this result in O(1) time complexity. Is this possible?
If yes, how can I achieve it?
P.S.: The expression a[100]={0}; initializes all cells of the array to zero without using a for loop. I am looking for a similar expression.

You can't change n elements in memory in less than O(n) time, but you can change what that memory represents. If you were to create a custom array class, you could include an offset member. When an element is read, you add the offset on demand. When an element is written, you subtract the current offset before storing it in memory (so it reads back as the correct value once the offset is added again). With that layout you simply modify the offset in O(1) time and effectively achieve what you are looking for.
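A minimal C++ sketch of that idea (the class and member names here are just for illustration, not from any particular library):

#include <cstddef>
#include <iostream>
#include <vector>

// Array wrapper that keeps a global offset, so "subtract x from everything"
// is a single O(1) update instead of an O(n) pass over the elements.
class OffsetArray {
public:
    explicit OffsetArray(std::size_t n) : data_(n, 0), offset_(0) {}

    // Store the raw value minus the current offset so that reading it back
    // (raw + offset) yields exactly the value that was set.
    void set(std::size_t i, int value) { data_[i] = value - offset_; }

    // Reads apply the offset on demand.
    int get(std::size_t i) const { return data_[i] + offset_; }

    // "Subtract x from every element" in O(1): just shift the offset.
    void subtractFromAll(int x) { offset_ -= x; }

private:
    std::vector<int> data_;
    int offset_;
};

int main() {
    OffsetArray a(4);
    for (std::size_t i = 0; i < 4; ++i) a.set(i, static_cast<int>(i) + 1); // 1 2 3 4
    a.subtractFromAll(1);                                                  // O(1)
    for (std::size_t i = 0; i < 4; ++i) std::cout << a.get(i) << ' ';      // 0 1 2 3
    std::cout << '\n';
}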

No. You would need to go through every element and subtract one. Going through every element implies O(n), or linear, time. Constant time implies the work does not grow with the number of elements.
a[100]={0} is syntactic sugar that appears to be constant, but it's actually linear on the back end. See this answer.
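Roughly speaking (the details depend on the compiler), int a[100] = {0}; amounts to something like the loop below, or an equivalent memset, so the work is still proportional to the array length:

int a[100];
for (int i = 0; i < 100; ++i) {
    a[i] = 0;   // every element is written, so the initialization is O(n), not O(1)
}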

Related

Difficulty/confusion using subscripts in MATLAB

I am working with code in MATLAB and I have to implement the function y = 1 - 2x(t-1),
but when I try to code it I get an error.
How do I get rid of this error?
clc
clear all
close all
t=-3:.1:3;
x=heaviside(t);
y=1-2*x(t-1)
plot(t,y)
There is a difference between evaluating a function and indexing an array, though they both use the same syntax in MATLAB.
Since x is an array, not a function, x(t-1) attempts to index into the array x, at locations t-1. However, t contains non-integer values and non-positive values. Indices in MATLAB must be between 1 and the number of elements in the array.
To shift an array by 1 to the right, you can use indexing as follows:
x([1,1:end-1])
Here, we repeat element #1, and drop the last element. There are other ways of accomplishing the same.
But one time unit does not correspond to one array element: since t is incremented by 0.1 from one element to the next, this corresponds to a shift of 0.1 time units, not 1 time unit.
To shift by one time unit, you'd have to modify the indexing above to shift the array by 10 elements. In the general case, it is possible that 1 time unit does not correspond to an integer number of array elements, for example if the increment had been 0.3 instead of 0.1. In that case, you need to interpolate:
interp1(t,x,t-1,'linear','extrap')
Here we are reading outside of the input array, and therefore need to take care of extrapolation. Hence that last argument to the function call. You can also choose to fill extrapolated values with zeros, for example.

Find duplicates in an array in linear time

Problem: You are given an array of n+1 integers from the range 1..n. At least one number has a duplicate. All array values can be the same. Print all duplicates in linear time and constant space. The array can't be modified.
The obvious solution would be to create a bit array with default value false, set bitarray[array[i]] for each element, and check whether it was already set. That requires additional space, so it's no good. Another thought: reorder the array by hash and check if the current element and the element at array[hash % n] are equal. This is also no good since we can't modify the original array. Now it looks to me like an impossible task. Is there even a solution to this?
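For reference, a minimal C++ sketch of the bit-array idea the question rules out; it runs in O(n) time but needs O(n) extra space, which is exactly why it doesn't satisfy the constraints:

#include <iostream>
#include <vector>

// Marks each value as seen; a value that is already marked is a duplicate.
// The `seen` vector is the O(n) extra space that the problem forbids.
void printDuplicates(const std::vector<int>& a) {
    int n = static_cast<int>(a.size()) - 1;   // values are in the range 1..n
    std::vector<bool> seen(n + 1, false);
    for (int v : a) {
        if (seen[v]) std::cout << v << '\n';  // already marked: duplicate
        else seen[v] = true;
    }
}

int main() {
    printDuplicates({1, 3, 4, 2, 2});         // prints 2
}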

Most efficient way to add a constant to a column of an array in LabVIEW?

I want to add a constant to the second column of an array.
I do this as shown below:
Where for illustration the values are as follows:
What is the most efficient way of adding a constant to an array column?
With a question about efficiency you should supply numbers. For anything smaller than a 1000 x 1000 2D array I can't measure the difference. Usually it is best to simply test it.
Here is the code for testing (same approach as crossrulz's answer):
With a 10000 x 10000 array, option 2 becomes about 10 times faster.
One comment: unless you are in a very demanding situation, readability is usually preferred over efficiency. In my opinion option 2 is more readable, since it has no for loop and the constant is presented as a constant instead of an array.
But you can get more efficient than that by using the In Place Element structure. The image below shows two different ways to add 5 to a column. The second one avoids making a memory copy of the entire array. Indexing out a column of an array with Index Array and then modifying it requires a shift of underlying memory format, even though the array is going to be put back in the Replace Array Subset. The In Place Element structure gives enough context to LabVIEW for it to recognize that the Add can be done without data copies.
Index Array to get the second column, add your constant, and then Replace Array Subset to replace the second column.

Accessing Elements - Really O(1)?

It is said that an example of a O(1) operation is that of accessing an element in an array. According to one source, O(1) can be defined in the following manner:
[Big-O of 1] means that the execution time of the algorithm does not depend on the size of the input. Its execution time is constant.
However, if one wants to access an element in an array, does the efficiency of the operation not depend on the number of elements in the array? For example:
int[] arr = new int[1000000];
addElements(arr, 1000000); //A function which adds 1 million arbitrary integers to the array.
int foo = arr[55];
I don't understand how the last statement can be described as running in O(1); don't the 1,000,000 elements in the array have a bearing on the running time of the operation? Surely it'd take longer to find element 55 than it would element 1? If anything, this looks to me like O(n).
I'm sure my reasoning is flawed, however, I just wanted some clarification as to how this can be said to run in O(1)?
An array is a data structure where objects are stored in contiguous memory locations. So in principle, if you know the address of the base object, you will be able to find the address of the i-th object.
addr(a[i]) = addr(a[0]) + i*size(object)
This makes accessing ith element of array O(1).
EDIT
Theoretically, when we talk about the complexity of accessing an array element, we talk about a fixed index i.
Input size = O(n)
To access the i-th element, we compute addr(a[0]) + i*size(object). This term is independent of n, so it is said to be O(1).
The multiplication still depends on i, but not on n, so it is constant: O(1).
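As a rough C++ illustration of that address arithmetic (on a typical flat-memory machine; the standard does not strictly guarantee the integer arithmetic, but it mirrors what the compiler emits for a[i]):

#include <cassert>
#include <cstdint>

int main() {
    int a[1000] = {};   // contiguous block of 1000 ints

    // a[55] is located at: base address + 55 * sizeof(int).
    // The cost of this computation does not depend on the array length.
    std::uintptr_t base = reinterpret_cast<std::uintptr_t>(&a[0]);
    std::uintptr_t addr = base + 55 * sizeof(int);

    assert(reinterpret_cast<int*>(addr) == &a[55]);
    assert(*reinterpret_cast<int*>(addr) == a[55]); // one multiply, one add, one load
}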
The address of an element in memory will be the base address of the array plus the index times the size of the element in the array. So to access that element, you just essentially access memory_location + 55 * sizeof(int).
This of course assumes that multiplication takes constant time regardless of the size of its inputs, which is arguably incorrect if you are being very precise.
Finding an element isn't O(1), but accessing an element of the array has nothing to do with finding one. To be precise, you don't interact with the other elements and you don't need to access anything but your single element: you just calculate its address, regardless of how big the array is, and that is a single operation, hence O(1).
The machine code (or, in the case of Java, virtual machine code) generated for the statement
int foo = arr[55];
would essentially be:
Get the starting memory address of arr into A
Add 55 to A
Take the contents of the memory address in A, and put it in the memory address of foo
These three instructions all take O(1) time on a standard machine.
In theory, array access is O(1), as others have already explained, and I guess your question is more or less a theoretical one. Still, I'd like to bring in another aspect.
In practice, array access will get slower if the array gets large. There are two reasons:
Caching: The array will not fit into cache or only into a higher level (slower) cache.
Address calculation: For large arrays, you need larger index data types (for example long instead of int). This will make address calculation slower, at least on most platforms.
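A rough, deliberately unscientific C++ sketch of how one might observe the caching effect (the sizes and iteration counts are arbitrary, and the exact numbers depend heavily on the machine):

#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// Time random accesses into an array of the given size. Each access is still
// O(1), but once the array no longer fits in the CPU caches the constant grows.
double nanosPerAccess(std::size_t n, std::size_t accesses = 10000000) {
    std::vector<int> data(n, 1);
    std::mt19937_64 rng(42);
    std::uniform_int_distribution<std::size_t> pick(0, n - 1);

    volatile long long sink = 0;   // keep the loads from being optimized away
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < accesses; ++i) sink = sink + data[pick(rng)];
    auto stop = std::chrono::steady_clock::now();

    return std::chrono::duration<double, std::nano>(stop - start).count() / accesses;
}

int main() {
    for (std::size_t n : {1u << 10, 1u << 20, 1u << 24})  // ~4 KB, ~4 MB, ~64 MB of ints
        std::cout << n << " elements: " << nanosPerAccess(n) << " ns/access\n";
}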
If we say the subscript operator (indexing) has O(1) time complexity, we make this statement excluding the runtime of any other operations/statements/expressions/etc. So addElements does not affect the operation.
Surely it'd take longer to find element 55 than it would element 1?
"find"? Oh no! "Find" implies a relatively complex search operation. We know the base address of the array. To determine the value at arr[55], we simply add 551 to the base address and retrieve the value at that memory location. This is definitely O(1).
1 Since every element of an int array occupies at least two bytes (when using C), this is no exactly true. 55 needs to be multiplied by the size of int first.
Arrays store the data contiguously, unlike Linked Lists or Trees or Graphs or other data structures using references to find the next/previous element.
It is intuitive to you that the access time of the first element is O(1). However, you feel that the access time to the 55th element is O(55). That's where you got it wrong. You know the address of the first element, so the access time to it is O(1).
But you also know the address of the 55th element. It is simply the address of the 1st + size_of_each_element*54.
Hence you access that element, as well as any other element of the array, in O(1) time. And that is the reason why you cannot have elements of multiple types in an array: it would completely mess up the math for finding the address of the n-th element.
So, access to any element in an array is O(1), and all elements have to be of the same type.

Why is accessing any single element in an array done in constant time (O(1))?

According to Wikipedia, accessing any single element in an array takes constant time as only one operation has to be performed to locate it.
To me, what happens behind the scenes probably looks something like this:
a) The search is done linearly (e.g., I want to access the element at index 5; I begin at index 0 and, if that isn't index 5, I go to index 1, and so on).
This is O(n) -- where n is the length of the array
b) If the array is stored as a B-tree, this would give O(log n)
I see no other approach.
Can someone please explain why and how this is done in O(1)?
An array starts at a specific memory address, start. Each element occupies the same number of bytes, element_size. The array elements are located one after another in memory from the start address on. So you can calculate the memory address of element i as start + i * element_size. This computation is independent of the array size and is therefore O(1).
In theory, elements of an array are of the same known size and they are located in a contiguous part of memory, so if the beginning of the array is located at memory address A and you want to access any element, you compute its address like this:
A + item_size*index, which is a constant-time operation.
Accessing a single element is NOT finding an element whose value is x.
Accessing element i means getting the element at the i-th position of the array.
This is done in O(1) because it takes a constant number of calculations to work out where the element is located, given the index, the beginning of the array and the size of each element.
RAM offers constant time (or, to be more exact, bounded time) to access each address, and since finding the address is O(1) and retrieving the element at it is also O(1), it gives you a total of O(1).
Finding whether there is an element whose value is x is actually an Omega(n) problem, unless there is some additional information about the array (that it is sorted, for example).
Usually an array is organized as a continuous block of memory where each position can be accessed by means of an index calculation. This index calculation cannot be done in constant time for arrays of arbitrary size, but for reasons of addressable space, the numbers involved are bounded by machine word size, so an assumption of constant time is justified.
Say you have an array of ints. Each int is 32 bits (4 bytes) in Java, so if you have, for example, 10 integers, the array occupies 40 bytes of memory. The computer then knows:
index 0 is at some memory address, for example 39200
the last index is at 39200 + 4*9 = 39236
So if you want to access index 3, it is at 39200 + 4*3 = 39212.
It all depends on how much memory the objects you store in the array take. Just remember that an array occupies one whole contiguous block of memory.
