Why have arrays in Go?

I understand the difference between arrays and slices in Go. But what I don't understand is why it is helpful to have arrays at all. Why is it helpful that an array type definition specifies a length and an element type? Why can't every "array" that we use be a slice?

There is more to arrays than just the fixed length: they are comparable, and they are values (not reference or pointer types).
Arrays have several advantages over slices in certain situations, which together more than justify their existence alongside slices. Let's see them. (I'm not even counting arrays being the building blocks of slices.)
1. Being comparable means you can use arrays as keys in maps, but not slices. You might ask: why not make slices comparable too, so that this alone wouldn't justify having both? Because equality is not well defined on slices. From the FAQ: Why don't maps allow slices as keys?
They don't implement equality because equality is not well defined on such types; there are multiple considerations involving shallow vs. deep comparison, pointer vs. value comparison, how to deal with recursive types, and so on.
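As a minimal sketch of point 1 (not from the original answer), an array type is accepted as a map key while the slice equivalent is rejected at compile time:

```go
package main

import "fmt"

func main() {
	// Arrays are comparable, so they are valid map key types.
	visited := map[[2]int]bool{}
	visited[[2]int{1, 2}] = true
	fmt.Println(visited[[2]int{1, 2}]) // true

	// Slices are not comparable, so the following would not compile:
	// m := map[[]int]bool{} // invalid map key type []int
}
```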
2. Arrays can also give you higher compile-time safety, as the index bounds can be checked at compile time (array length must evaluate to a non-negative constant representable by a value of type int):
s := make([]int, 3)
s[3] = 3 // "Only" a runtime panic: runtime error: index out of range
a := [3]int{}
a[3] = 3 // Compile-time error: invalid array index 3 (out of bounds for 3-element array)
3. Passing around or assigning an array value implicitly copies the entire array, so the copy is "detached" from the original value. If you pass a slice, a copy is still made, but only of the slice header; the copied value (the header) will point to the same backing array. This may or may not be what you want. If you want to "detach" a slice from the "original" one, you have to explicitly copy the contents, e.g. with the builtin copy() function, to a new slice.
a := [2]int{1, 2}
b := a
b[0] = 10 // This only affects b, a will remain {1, 2}
sa := []int{1, 2}
sb := sa
sb[0] = 10 // Affects both sb and sa
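If you do want a detached copy of a slice, a minimal sketch using the builtin copy():

```go
package main

import "fmt"

func main() {
	sa := []int{1, 2}
	sb := make([]int, len(sa))
	copy(sb, sa) // sb now has its own backing array
	sb[0] = 10
	fmt.Println(sa) // [1 2] (unchanged)
	fmt.Println(sb) // [10 2]
}
```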
4. Also, since the array length is part of the array type, arrays with different lengths are distinct types. On one hand this may be a "pain in the ass" (e.g. if you write a function which takes a parameter of type [4]int, you can't use that function to process an array of type [5]int), but this may also be an advantage: the type explicitly specifies the length of the array that is expected. E.g. if you want to write a function which takes an IPv4 address, it can be modeled with the type [4]byte. Now you have a compile-time guarantee that the value passed to your function has exactly 4 bytes, no more and no less (anything else would be an invalid IPv4 address anyway).
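A sketch of that guarantee (formatIPv4 is a hypothetical helper, not from any standard package):

```go
package main

import "fmt"

// formatIPv4 is a hypothetical helper: its [4]byte parameter guarantees
// at compile time that callers pass exactly four octets.
func formatIPv4(ip [4]byte) string {
	return fmt.Sprintf("%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3])
}

func main() {
	fmt.Println(formatIPv4([4]byte{192, 168, 0, 1})) // 192.168.0.1
	// formatIPv4([5]byte{1, 2, 3, 4, 5}) // compile-time error: [5]byte is a distinct type
}
```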
5. Related to the previous point, the array length may also serve a documentation purpose. A type [4]byte properly documents that IPv4 has 4 bytes. An rgb variable of type [3]byte tells you there is one byte for each color component. In some cases the length is even factored out and documented separately; for example in the crypto/md5 package: md5.Sum() returns a value of type [Size]byte, where md5.Size is a constant equal to 16: the length of an MD5 checksum.
6. They are also very useful when planning memory layout of struct types, see JimB's answer here, and this answer in greater detail and real-life example.
7. Also, since slices are headers and they are (almost) always passed around as-is (without pointers), the language spec is more restrictive regarding pointers to slices than pointers to arrays. For example, the spec provides multiple shorthands for operating with pointers to arrays, while the same constructs give compile-time errors for slices (because it's rare to use pointers to slices; if you still want / have to do it, you have to be explicit about handling it; read more in this answer).
Such examples are:
Slicing a pointer to array p: p[low:high] is shorthand for (*p)[low:high]. If p is a pointer to a slice, this is a compile-time error (spec: Slice expressions).
Indexing a pointer to array p: p[i] is shorthand for (*p)[i]. If p is a pointer to a slice, this is a compile-time error (spec: Index expressions).
Example:
pa := &[2]int{1, 2}
fmt.Println(pa[1:1]) // OK
fmt.Println(pa[1]) // OK
ps := &[]int{3, 4}
println(ps[1:1]) // Error: cannot slice ps (type *[]int)
println(ps[1]) // Error: invalid operation: ps[1] (type *[]int does not support indexing)
8. Accessing (single) array elements is more efficient than accessing slice elements, as in the case of slices the runtime has to go through an implicit pointer dereference. Also, "the expressions len(s) and cap(s) are constants if the type of s is an array or pointer to an array".
It may be surprising, but you can even write:
type IP [4]byte
const x = len(IP{}) // x will be 4
It's valid, and is evaluated at compile time, even though IP{} is not a constant expression, so e.g. const i = IP{} would be a compile-time error! After this, it's not even surprising that the following also works:
const x2 = len((*IP)(nil)) // x2 will also be 4
Note: When ranging over a complete array vs a complete slice, there may be no performance difference at all as obviously it may be optimized so that the pointer in the slice header is only dereferenced once. For details / example, see Array vs Slice: accessing speed.
See related questions where an array can be used / makes more sense than a slice:
Why use arrays instead of slices?
Why can't Go slice be used as keys in Go maps pretty much the same way arrays can be used as keys?
Hash with key as an array type
How do I check the equality of three values elegantly?
Slicing a slice pointer passed as argument
And this is just for curiosity: a slice can contain itself, while an array can't. (Actually the impossibility of self-containing arrays makes comparing them easier, as you don't have to deal with recursive data structures.)
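A quick sketch of that curiosity, using a hypothetical named slice type (the analogous array type would be an invalid recursive type):

```go
package main

import "fmt"

// Recursive is a slice type whose elements are of the same type, so a
// value can hold (a header pointing back to) itself. The array analogue,
// e.g. type A [1]A, would be an invalid recursive type.
type Recursive []Recursive

func main() {
	r := make(Recursive, 1)
	r[0] = r // legal: r's element now refers back to r's own backing array
	fmt.Println(len(r), len(r[0])) // 1 1
}
```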
Must-read blogs:
Go Slices: usage and internals
Arrays, slices (and strings): The mechanics of 'append'

Arrays are values, and it is often useful to have a value instead of a pointer.
Values can be compared, hence you can use arrays as map keys.
Values are always initialized, so you don't need to initialize or make them as you do with a slice.
Arrays give you better control of memory layout: whereas you can't allocate space directly in a struct with a slice, you can with an array:
type Foo struct {
    buf [64]byte
}
Here, a Foo value will contain a 64-byte value, rather than a slice header which needs to be separately initialized. Arrays are also used to pad structs to match alignment when interoperating with C code and to prevent false sharing for better cache performance.
Another aspect for improved performance is that you can better define memory layout than with slices, because data locality can have a very big impact on memory intensive calculations. Dereferencing a pointer can take considerable time compared to the operations being performed on the data, and copying values smaller than a cache line incurs very little cost, so performance critical code often uses arrays for that reason alone.

Arrays are more efficient in saving space. If you never update the size of the slice (i.e. start with a predefined size and never go past it), there really is not much of a performance difference. But there is extra overhead in space, as a slice is a small header (pointer, length and capacity) wrapping an array at its core. Contextually, it also improves clarity as it makes the intended use of the variable more apparent.

Every array could be a slice but not every slice could be an array. If you have a fixed collection size you can get a minor performance improvement from using an array. At the very least you'll save the space occupied by the slice header.

Related

Why does the type signature of linear array change compared to normal array?

I'm going through an example in A Taste of Linear Logic.
It first introduces the standard array with the usual operations defined (page 24):
Then suggests that a linear equivalent (using a linear logic for type signatures to restrict array copying) would have a slightly different type signature:
This is designed with the idea that array contains values that are cheap to copy but that the array itself is expensive to copy and thus should be passed along from use to use as a handle.
Question: The signatures for lookup and update correspond well to the standard signatures, but how do I interpret the signature for new?
In particular:
The function new does not seem to return an array. How can I get an array to use if one is not provided?
I think I do understand that Arr ⊸ Arr ⊗ X is not derivable using linear logic, and that a function to extract individual values without consuming the array is therefore needed, but I don't understand why new doesn't provide that function directly.
In practical terms, this is about garbage collection.
Linear logic avoids making copies as well as leaving unused values lying around. So when you create an array with new, you also need to make sure it's eventually cleaned up again.
How can you make sure it is cleaned up? Well, in this example they do it by not giving back the array as the result, but instead “lending” it to the caller. The function Arr ⊸ Arr ⊗ X must give an array back in the end, in addition to the result you're actually interested in. It's assumed that this will be a modified form of the array you started out with. Only the X is passed back to the caller, the Arr is deallocated.

How is the term “dynamically sized” for slice justifiable, when maximum length of a slice is less than length of the underlying array?

From A Tour of Go:
An array has a fixed size. A slice, on the other hand, is a dynamically-sized, flexible view into the elements of an array.
How can the slice be called dynamically sized when it cannot go beyond the size of the underlying array?
The size of a Go array is fixed at compile time. The size of a Go slice is set dynamically at runtime.
References:
The Go Blog: Go Slices: usage and internals
The Go Blog: Arrays, slices (and strings): The mechanics of 'append'
Even with a cap, a dynamic size is still dynamic: it can range between zero and whatever the cap is.
That said, as Flimzy noted in a comment, if you use an operation that can "grow" a slice, it always returns a new slice, or takes a pointer to the slice—usually the former—so that the routine that needed to go past the current capacity could allocate a new, larger array1 and make the slice use that instead of the old array.
That's why append returns a new value, and you must write:
s = append(s, element)
for instance.
(The previous underlying array, if any, is garbage collected if and when it is appropriate to do so. A nil slice has no underlying array and hence a zero capacity.)
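A minimal sketch of append outgrowing a capacity:

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 2)
	fmt.Println(len(s), cap(s)) // 0 2
	s = append(s, 1, 2, 3)      // exceeds cap(s): append allocates a new, larger array
	fmt.Println(len(s), cap(s) >= 3) // 3 true (the exact new capacity is a runtime choice)
}
```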
1The runtime uses unsafe and other special tricks to allocate this array, bypassing type-checking, but coordinating with the runtime's own garbage collection code. Hence it can allocate an array whose size is chosen at runtime instead of at compile time. The compiler's new and make and append built-ins have access to this same ability.
You can write this same kind of tricky code yourself using unsafe, but if you do, you run the risk of having to rewrite your code when a new Go release comes out, if the new Go has changed something internally. So, don't do that: use append or make to create the runtime-sized array with the slice data already set up for you.
How can the slice be called dynamically sized when it cannot go beyond the size of the underlying array?
The types are static vs dynamic. An array type is like [4]byte - the size is part of the type definition, and therefore set at compile time. Only a [4]byte can be stored in a variable of type [4]byte. Not a [3]byte, not a [5]byte. It's static.
A slice type is like []byte - the size isn't part of the type definition, so it is not set at compile time, and a slice of any size can be stored in a []byte at runtime. It could be zero-length, it could be a thousand, it could be a four-byte window into a million-length array. It's dynamic.
The size of a slice can also shrink and grow within its capacity at runtime, though the capacity can only change by replacing the underlying array (which, being an array, is of fixed size). This is done automatically behind the scenes by append, for example. But, to my understanding at least, this is not what makes slices "dynamic"; it's the fact that a slice can be of any size at runtime - it is not known at compile time. That is what defines them as "dynamic" to me.
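A short sketch of a slice's size changing at runtime over a fixed-size array:

```go
package main

import "fmt"

func main() {
	a := [8]int{1, 2, 3, 4, 5, 6, 7, 8}
	s := a[2:4] // a window into a: len 2, cap 6 (from index 2 to the array's end)
	fmt.Println(len(s), cap(s)) // 2 6
	s = s[:cap(s)] // grow the window within its capacity: no allocation
	fmt.Println(len(s), cap(s)) // 6 6
	fmt.Println(s[5])           // 8 (the array's last element)
}
```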

How do I create array with dynamic length rather than slice in golang?

For example: I want to use reflect to get a slice's data as an array to manipulate it.
func inject(data []int) {
    sh := (*reflect.SliceHeader)(unsafe.Pointer(&data))
    dh := (*[len(data)]int)(unsafe.Pointer(sh.Data))
    fmt.Printf("%v\n", dh)
}
This function will emit a compile error for len(data) is not a constant. How should I fix it?
To add to @icza's comment, you can easily extract the underlying array by using &data[0]—assuming data is an initialized slice. IOW, there's no need to jump through the hoops here: the address of the slice's first element is actually the address of the first slot in the slice's underlying array—no magic here.
Since taking an address of an element of an array is creating
a reference to that memory—as long as the garbage collector is
concerned—you can safely let the slice itself go out of scope
without the fear of that array's memory becoming inaccessible.
The only thing which you can't really do with the resulting
pointer is passing around the result of dereferencing it.
That's simply because arrays in Go have their length encoded in
their type, so you'll be unable to create a function to accept
such array—because you do not know the array's length in advance.
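Side note (an assumption beyond the original answer): since Go 1.17 a slice can be converted to an array pointer directly, but the array length must still be a constant, which is exactly the limitation described above:

```go
package main

import "fmt"

func main() {
	data := []int{1, 2, 3}
	// Since Go 1.17 a slice converts to an array pointer directly,
	// but the length must be a constant; the conversion panics at
	// runtime if the slice is shorter than the array.
	dh := (*[3]int)(data)
	dh[0] = 42
	fmt.Println(data) // [42 2 3]
}
```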
Now please stop and think.
After extracting the backing array from a slice, you have
a pointer to the array's memory.
To sensibly carry it around, you'll also need to carry around
the array's length… but this is precisely what slices do:
they pack the address of a backing array with the length of the
data in it (and also the capacity).
Hence really I think you should reconsider your problem
as from where I stand I'm inclined to think it's a non-problem
to begin with.
There are cases where wielding pointers to the backing arrays
extracted from slices may help: for instance, when "pooling"
such arrays (say, via sync.Pool) to reduce memory churn
in certain situations, but these are concrete problems.
If you have a concrete problem, please explain it,
not your attempted solution to it—what @Flimzy said.
Update: I think I should better explain the "The only thing which you can't really do with the resulting pointer is passing around the result of dereferencing it" bit.
A crucial point about arrays in Go (as opposed to slices)
is that arrays—as everything in Go—are passed around
by value, and for arrays that means their data is copied.
That is, if you have
var a, b [8 * 1024 * 1024]byte
...
b = a
the statement b = a would really copy 8 MiB of data.
The same obviously applies to arguments of functions.
Slices sidestep this problem by holding a pointer
to the underlying (backing) array. So a slice value
is a little struct type containing
a pointer and two integers.
Hence copying it is really cheap but "in exchange" it
has reference semantics: both the original value and
its copy point to the same backing array—that is,
reference the same data.
I really advise you to read these two pieces,
in the indicated order:
https://blog.golang.org/go-slices-usage-and-internals
https://blog.golang.org/slices

Julia uninitialize array at particular index

I'm writing a neural network in Julia that tests random topologies. I've left all indices in an array of nodes that are not occupied by a node (but which may be under a future topology) undefined as it saves memory. When a node in an old topology is no longer connected to other nodes in a new topology, is there a way to uninitialize the index to which the node belongs? Also, are there any reasons not to do it this way?
local preSyns = Array(Vector{Int64}, (2,2))
println(preSyns)
preSyns[1] = [3]
println(preSyns)
The output
[#undef #undef
#undef #undef]
[[3] #undef
#undef #undef]
How do I make the first index undefined as it was during the first printout?
In case you don't believe me about the memory issue, please take a look below:
function memtest()
    y = Array(Vector{Int64}, 100000000)
end
function memtestF()
    y = fill!(Array(Vector{Int64}, 100000000), [])
end
@time memtest()
@time memtestF()
Output
elapsed time: 0.468254929 seconds (800029916 bytes allocated)
elapsed time: 30.801266299 seconds (5600291712 bytes allocated, 69.42% gc time)
An uninitialized array takes 0.8 GB and an initialized one takes 5 GB.
My activity monitor also confirms this.
Undefined values are essentially null pointers, and there's no first-class way to "unset" an array element back to a null pointer. This becomes more difficult with very large arrays since you don't want to have much (or any) overhead for your sentinel values that represent unconnected nodes. On a 64 bit system, an array of 100 million elements takes up ~800MB for just the pointers alone, and an empty array takes up 48 bytes for its header metadata. So if you assign an independent empty array to each element, you end up with ~5GB worth of array headers.
The behavior of fill! in Julia 0.3 is a little flakey (and has been corrected in 0.4). If, instead of filling your array with [], you fill! it with an explicitly typed Int64[], every element will point to the same empty array. This means that your array and its elements will only take up 48 more bytes than the uninitialized array. But it also means that modifying that subarray for one of your nodes (e.g., with push!) will mean that all nodes will get that connection. This probably isn't what you want. You could still use the empty array as a sentinel value, but you'd have to be very careful to not modify it.
If your array is going to be densely packed with subarrays, then there's no straightforward way around this overhead for the array headers. A more robust and forward-compatible way of initializing an array with independent empty arrays is with a comprehension: Vector{Int64}[Int64[] for i=1:2, j=1:2]. This will also be more performant in 0.3 as it doesn't need to convert [] to Int64[] for each element. If each element is likely to contain a non-empty array, you'll need to pay the cost of the array overhead in any case. To remove the node from the topology, you'd simply call empty!.
If, however, your array of nodes is going to be sparsely packed, you could try a different data structure that supports unset elements more directly. Depending upon your use-case, you could use a default dictionary that maps an index-tuple to your vectors (from DataStructures.jl; use a function to ensure that the "default" is a newly allocated and independent empty array every time), or try a package dedicated to topologies like Graphs.jl or LightGraphs.jl.
Finally, to answer the actual question you asked, yes, there is a hacky way to unset an element of an array back to #undef. This is unsupported and may break at any time:
function unset!{T}(A::Array{T}, I::Int...)
    isbits(T) && error("cannot unset! an element of a bits array")
    P = pointer_to_array(convert(Ptr{Ptr{T}}, pointer(A)), size(A))
    P[I...] = C_NULL
    return A
end

Cheapest structure for deferencing?

Let's say I have an associative array keyed by unsigned int; values could be of any fixed-size type. There is some pre-defined maximum no. of instances.
API usage example: MyStruct * valuePtr = get(1234); and put(6789, &myStructInstance); ...basic.
I want to minimise cache misses as I read entries rapidly and at random from this array, so I pre-malloc(sizeof(MyType) * MAX_ENTRIES) to ensure locality of reference as much as possible.
Genericism is important for the values array. I've looked at C pseudo-generics, but prefer void * for simplicity; however, not sure if this is at odds with performance goals. Ultimately, I would like to know what is best for performance.
How should I implement my associative array for performance? Thoughts thus far...
Do I pass the associative array a single void * pointer to the malloced values array and allow it to use that internally (for which we would need to guarantee a matching keys array size)? Can I do this generically, since the type needs(?) to be known in order to index into values array?
Do I have a separate void * valuePtrs[] within the associative array, then have these pointers point to each element in the malloced values array? This would seem to avoid need to know about concrete type?
Do I use C pseudo-generics and thus allow get() to return a specific value type? Surely in this case, the only benefit is not having to explicitly cast, e.g. MyStruct* value = (MyStruct*) get(...)... the array element still has to be dereferenced and so has the same overhead?
And, in general, does the above approach to minimising cache misses appear to make sense?
In both cases, the performance is basically the same.
In the first one (void* implementation), you will need to look up the value + dereference the pointer. So these are two instructions.
In the other implementation, you will need to multiply the index by the size of the value type. So this implementation also requires two instructions.
However, the first implementation will be easier and cleaner to implement. Also, the array is fully transparent; the user will not need to know what kind of structures are in the array.
See solutions categorised below, in terms of pros and cons (thanks to Ruben for assisting my thinking)... I've implemented Option 2 and 5 for my use case, which is somewhat generalised; I recommend Option 4 if you need a very specific, one-off data structure. Option 3 is most flexible while being trivial to code, and is also the slowest. Option 4 is the quickest. Option 5 is a little slow but with flexibility on the array size and ease of general use.
Associative array struct points to array of typed pointers:
pros no failure value required, explicit casts not required, does not need compile-time size of array
cons costly double deref, requires generic library code
Associative array struct holds array of void * pointers:
pros no failure value required, no generic library code
cons costly double deref, explicit casts following get(), needs compile time size of array if VLAs are not used
Associative array struct points to array of void * values:
pros no generic library code, does not need compile-time size of array
cons costly triple deref, explicit casts following get(), requires offset calc which requires sizeof value passed in explicitly
Associative array struct holds array of typed values:
pros cheap single deref, explicit casts not required, keys and entries allocated contiguously
cons requires generic library code, failure value must be supplied, needs compile time size of array if VLAs are not used
Associative array struct points to array of typed values:
pros explicit casts not required, flexible array size
cons costly double deref, requires generic library code, failure value must be supplied, needs compile time size of array if VLAs are not used
