Delete the last integer in an array without it becoming zero - c

So I'm using an array-based queue implementation.
After the array is filled, to dequeue the first element, I shifted the array to the left.
This made the last element in the array zero, which also meant I couldn't enter a new element into the array, because the last spot was taken up by a zero.
Is there a way to completely delete the element instead of it becoming a zero?
I tried the following to completely empty the queue, but it just turned all the elements into zeros.
case 3:
    memset(queue, 0, sizeof(queue));  /* sets every byte to 0; it does not shrink the array */
    printf("\nThe entire queue has been emptied");
    break;
Thank you.

Why don't you overwrite the last element with a new value? Say the queue is n elements long and you want to overwrite the last element, array[n-1], with a new value, e.g. 23:
array[n-1] = 23;
or
*(array + n - 1) = 23;
Completely deleting the entry sounds like a difficult problem: you would be making the array smaller, but there will still exist a bit of memory where your last element used to be. You should control your program so that it never tries to access array elements beyond the limit of your array, as the results are unpredictable.
It is really important that your program has a way of knowing how long its arrays are. The computer may not stop you from accessing memory beyond what is allocated for an array, but it is a very bad idea, particularly if you were to write to it.
So as long as you remember how long your array is, all you have to do to delete the last element is reduce the length of your array by 1.
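Putting those ideas together, here is a minimal sketch of what that could look like; the names QSIZE, count, enqueue and dequeue are mine, not from the question. The queue tracks how many slots are in use, dequeuing shifts and then decrements the count, and enqueuing simply overwrites the stale slot:

#define QSIZE 5

int queue[QSIZE];
int count = 0;                       /* number of valid elements */

void enqueue(int v)
{
    if (count < QSIZE)
        queue[count++] = v;          /* overwrites whatever stale value was there */
}

int dequeue(void)                    /* assumes count > 0 */
{
    int front = queue[0];
    for (int i = 1; i < count; i++)  /* shift left, as in the question */
        queue[i - 1] = queue[i];
    count--;                         /* the last slot still holds a stale copy, but it is never read */
    return front;
}

The stale value never needs to be erased: because count shrank, no correct access will ever see it, and the next enqueue overwrites it.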

Related

Find duplicates in an array in linear time

Problem: You are given an array of n+1 integers from the range 1..n. At least one number has a duplicate. All array values can be the same. Print all duplicates in linear time and constant space. The array can't be modified.
The obvious solution would be to create a bit array with default value false, set bitarray[array[i]] to 1 for each element, and check whether it is already 1. That requires additional space, so it's no good. My other thought: reorder the array by hash and check whether the current element and the element at array[hash % n] are equal. This is also no good, since we can't modify the original array. Now it looks to me like an impossible task. Is there even a solution to this?
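For reference, this is roughly what the "obvious" bit-array approach from the question looks like in C. It is correct, but it uses O(n) extra space, which is exactly what the problem forbids; all names here are illustrative:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

void print_duplicates(const int *a, int n)   /* a holds n+1 values in 1..n */
{
    bool seen[n + 1];                        /* the O(n) extra space the problem disallows */
    memset(seen, 0, sizeof seen);
    for (int i = 0; i < n + 1; i++) {
        if (seen[a[i]])
            printf("%d\n", a[i]);            /* a value occurring k times is printed k-1 times */
        seen[a[i]] = true;
    }
}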

Find number of records in an array of structures

Suppose we have a structure array of up to 50 elements that will be filled in turn by a buffer-write function. How do I find the current number of records made in the array if the maximum number of items has not been reached?
typedef struct
{
    remoteInstructionReceived_t instruction;
    uint16_t parameter;
} instructionData_type;

instructionData_type commandBuffer[50];
C arrays are fixed-size: there are always exactly 50 objects in your array. If your program logic requires some of them to be "inactive" (e.g. not written yet), you must keep track of such information separately. For example, you could use a size_t variable to store the number of "valid" entries in the array.
An alternative would be to designate a value of remoteInstructionReceived_t as a terminator, similarly to how 0 is used as a terminator for NUL-terminated strings. Then, you wouldn't have to track the "useful length" of the array separately, but you'd have to ensure a terminator always follows the last valid item in it.
In general, length-tracking is likely both more efficient and more maintainable. I am only mentioning the second (terminator) option for completeness.
You can't: C doesn't have a way of knowing whether a variable "has a value". All values are values, and no value is more real than any other.
The answer is that additional state, i.e. some form of counter variable, is required to hold this information. Typically you use it when inserting new records, to know where the next record should go.
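A minimal sketch of the counter approach both answers describe; the stand-in typedef and the names commandCount and bufferWrite are mine:

#include <stddef.h>
#include <stdint.h>

typedef int remoteInstructionReceived_t;  /* stand-in; the real definition is not shown */

typedef struct
{
    remoteInstructionReceived_t instruction;
    uint16_t parameter;
} instructionData_type;

instructionData_type commandBuffer[50];
size_t commandCount = 0;                  /* number of valid records so far */

void bufferWrite(instructionData_type rec)
{
    if (commandCount < 50)
        commandBuffer[commandCount++] = rec;
}

At any point, commandCount is the current number of records in the array.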
Have you considered using a different data structure? You could wrap your structure to allow the creation of a linked list, for example; a sketch follows below. Deletion would then be real, since you just free the memory. Besides, a list is more efficient for some kinds of operations, such as adding elements in the middle.
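As a sketch of that wrapping idea, assuming the record type from the question; the node layout and delete_head are illustrative:

#include <stdlib.h>

typedef struct node
{
    instructionData_type data;  /* the wrapped record */
    struct node *next;
} node;

node *delete_head(node *head)   /* deletion really releases the memory */
{
    node *rest = head->next;
    free(head);
    return rest;
}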

Deletion in dynamic array

Can anyone explain the time complexity of deletion at the end of a dynamic array?
I think the answer is O(1), but in the book it is given as O(n).
Since we are talking about dynamic arrays, that is, arrays with the ability to add/remove elements to/from them, there are two possible solutions to implement a dynamic array:
You allocate enough memory to hold all the current and future elements. Also, you need to know the last possible index. Using this setup, the complexity of removing the last element is O(1), since you just decrement the last index. However, removing a non-last element has a linear complexity, since you need to copy all the later elements to the previous before you decrement the last index. Also, you might have difficulty in identifying the maximum possible size at allocation time, possibly leading to overflow issues or memory waste.
You can implement it using a (singly linked) list. This way you will not know the address of the last element, so you will need to iterate your list to the penultimate item, free the memory of the last item, and set the next pointer of the penultimate item to nil. Since the book gives a complexity of O(n) for removing the last element, we can safely assume that by "dynamic array" the book meant this second option.
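A sketch of the first variant in C, where removing the last element only decrements a length field and touches no data; the struct and names are illustrative:

#include <stddef.h>

typedef struct
{
    int *data;      /* pre-allocated block */
    size_t length;  /* current number of elements */
} dynarray;

void pop_back(dynarray *a)
{
    if (a->length > 0)
        a->length--;            /* O(1): no element is copied or freed */
}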

Accessing Elements - Really O(1)?

It is said that an example of a O(1) operation is that of accessing an element in an array. According to one source, O(1) can be defined in the following manner:
[Big-O of 1] means that the execution time of the algorithm does not depend on the size of the input. Its execution time is constant.
However, if one wants to access an element in an array, does not the efficiency of the operation depend on the number of elements in the array? For example
int[] arr = new int[1000000];
addElements(arr, 1000000); //A function which adds 1 million arbitrary integers to the array.
int foo = arr[55];
I don't understand how the last statement can be described as running in O(1); don't the 1,000,000 elements in the array have a bearing on the running time of the operation? Surely it'd take longer to find element 55 than it would element 1? If anything, this looks to me like O(n).
I'm sure my reasoning is flawed; I just wanted some clarification as to how this can be said to run in O(1).
An array is a data structure where objects are stored in contiguous memory locations. So in principle, if you know the address of the base object, you will be able to find the address of the ith object:
addr(a[i]) = addr(a[0]) + i*size(object)
This makes accessing the ith element of an array O(1).
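You can check this formula directly in C; the snippet below is only illustrative:

#include <stdio.h>

int main(void)
{
    static int a[1000000];
    int i = 55;
    /* &a[i] is computed as base address + i * sizeof(int); nothing is traversed */
    printf("%p\n", (void *)&a[i]);
    printf("%p\n", (void *)((char *)a + i * sizeof(int)));  /* prints the same address */
    return 0;
}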
EDIT
Theoretically, when we talk about the complexity of accessing an array element, we talk about a fixed index i.
The input size is O(n).
To access the ith element, we compute addr(a[0]) + i*size(object). This term is independent of n, so it is said to be O(1).
The multiplication still depends on i, but not on n, so it is constant: O(1).
The address of an element in memory is the base address of the array plus the index times the size of an element. So to access that element, you essentially access memory_location + 55 * sizeof(int).
This of course assumes that multiplication takes constant time regardless of the size of its inputs, which is arguably incorrect if you are being very precise.
Finding an element isn't O(1), but accessing an element of the array has nothing to do with finding it. To be precise, you don't interact with the other elements and you don't need to access anything but your single element: you always just calculate its address, regardless of how big the array is, and that is a single operation, hence O(1).
The machine code (or, in the case of Java, virtual machine code) generated for the statement
int foo = arr[55];
would be essentially:
1. Get the starting memory address of arr into A.
2. Add 55 to A.
3. Take the contents of the memory address in A and put it in the memory address of foo.
These three instructions all take O(1) time on a standard machine.
In theory, array access is O(1), as others have already explained, and I guess your question is more or less a theoretical one. Still, I'd like to bring in another aspect.
In practice, array access will get slower if the array gets large. There are two reasons:
Caching: The array will not fit into cache or only into a higher level (slower) cache.
Address calculation: For large arrays, you need larger index data types (for example long instead of int). This will make address calculation slower, at least on most platforms.
If we say the subscript operator (indexing) has O(1) time complexity, we make this statement excluding the runtime of any other operations/statements/expressions/etc. So addElements does not affect the operation.
Surely it'd take longer to find element 55 than it would element 1?
"find"? Oh no! "Find" implies a relatively complex search operation. We know the base address of the array. To determine the value at arr[55], we simply add 551 to the base address and retrieve the value at that memory location. This is definitely O(1).
1 Since every element of an int array occupies at least two bytes (when using C), this is no exactly true. 55 needs to be multiplied by the size of int first.
Arrays store the data contiguously, unlike Linked Lists or Trees or Graphs or other data structures using references to find the next/previous element.
It is intuitive to you that the access time of the first element is O(1). However, you feel that the access time to the 55th element is O(55). That's where you got it wrong. You know the address of the first element, so the access time to it is O(1).
But you also know the address of the 55th element. It is simply the address of the 1st + size_of_each_element*54.
Hence you can access that element, and any other element of the array, in O(1) time. That is also the reason why you cannot have elements of multiple types in an array: it would completely mess up the math for finding the address of the nth element.
So, access to any element of an array is O(1), and all elements have to be of the same type.

Julia uninitialize array at particular index

I'm writing a neural network in Julia that tests random topologies. I've left all indices in an array of nodes that are not occupied by a node (but which may be under a future topology) undefined, as this saves memory. When a node from an old topology is no longer connected to other nodes in a new topology, is there a way to uninitialize the index to which that node belongs? Also, are there any reasons not to do it this way?
local preSyns = Array(Vector{Int64}, (2,2))
println(preSyns)
preSyns[1] = [3]
println(preSyns)
The output
[#undef #undef
#undef #undef]
[[3] #undef
#undef #undef]
How do I make the first index undefined as it was during the first printout?
In case you don't believe me about the memory issue, please take a look below:
function memtest()
    y = Array(Vector{Int64}, 100000000)
end

function memtestF()
    y = fill!(Array(Vector{Int64}, 100000000), [])
end

@time memtest()
@time memtestF()
Output
elapsed time: 0.468254929 seconds (800029916 bytes allocated)
elapsed time: 30.801266299 seconds (5600291712 bytes allocated, 69.42% gc time)
An uninitialized array takes 0.8 GB and an initialized one takes 5 GB.
My activity monitor also confirms this.
Undefined values are essentially null pointers, and there's no first-class way to "unset" an array element back to a null pointer. This becomes more difficult with very large arrays since you don't want to have much (or any) overhead for your sentinel values that represent unconnected nodes. On a 64 bit system, an array of 100 million elements takes up ~800MB for just the pointers alone, and an empty array takes up 48 bytes for its header metadata. So if you assign an independent empty array to each element, you end up with ~5GB worth of array headers.
The behavior of fill! in Julia 0.3 is a little flakey (and has been corrected in 0.4). If, instead of filling your array with [], you fill! it with an explicitly typed Int64[], every element will point to the same empty array. This means that your array and its elements will only take up 48 more bytes than the uninitialized array. But it also means that modifying that subarray for one of your nodes (e.g., with push!) will mean that all nodes will get that connection. This probably isn't what you want. You could still use the empty array as a sentinel value, but you'd have to be very careful to not modify it.
If your array is going to be densely packed with subarrays, then there's no straightforward way around this overhead for the array headers. A more robust and forward-compatible way of initializing an array with independent empty arrays is with a comprehension: Vector{Int64}[Int64[] for i=1:2, j=1:2]. This will also be more performant in 0.3 as it doesn't need to convert [] to Int64[] for each element. If each element is likely to contain a non-empty array, you'll need to pay the cost of the array overhead in any case. To remove the node from the topology, you'd simply call empty!.
If, however, your array of nodes is going to be sparsely packed, you could try a different data structure that supports unset elements more directly. Depending upon your use-case, you could use a default dictionary that maps an index-tuple to your vectors (from DataStructures.jl; use a function to ensure that the "default" is a newly allocated and independent empty array every time), or try a package dedicated to topologies like Graphs.jl or LightGraphs.jl.
Finally, to answer the actual question you asked, yes, there is a hacky way to unset an element of an array back to #undef. This is unsupported and may break at any time:
function unset!{T}(A::Array{T}, I::Int...)
    isbits(T) && error("cannot unset! an element of a bits array")
    # reinterpret the array's memory as an array of raw pointers
    P = pointer_to_array(convert(Ptr{Ptr{T}}, pointer(A)), size(A))
    P[I...] = C_NULL   # write a null pointer, which Julia displays as #undef
    return A
end
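With that definition, something like unset!(preSyns, 1) should restore the #undef from the first printout. But remember that this pokes at internals and may break in future versions.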
