Why are lists used infrequently in Go?

Is there a way to create an array/slice in Go without a hard-coded array size? Why is List ignored?
In all the languages I've worked with extensively: Delphi, C#, C++, Python - Lists are very important because they can be dynamically resized, as opposed to arrays.
In Golang, there is indeed a list.List struct, but I see very little documentation about it - whether in Go by Example or the three Go books that I have - Summerfield, Chisnall and Balbaert - they all spend a lot of time on arrays and slices and then skip to maps. In source code examples I also find little or no use of list.List.
It also appears that, unlike Python, range is not supported for List - a big drawback IMO. Am I missing something?
Slices are lovely, but they still need to be based on an array with a hard-coded size. That's where List comes in.

Just about always when you are thinking of a list - use a slice instead in Go. Slices are dynamically re-sized. Underlying them is a contiguous block of memory (an array) which can change size.
They are very flexible as you'll see if you read the SliceTricks wiki page.
Here is an excerpt :-
Copy
b = make([]T, len(a))
copy(b, a) // or b = append([]T(nil), a...)
Cut
a = append(a[:i], a[j:]...)
Delete
a = append(a[:i], a[i+1:]...) // or a = a[:i+copy(a[i:], a[i+1:])]
Delete without preserving order
a[i], a = a[len(a)-1], a[:len(a)-1]
Pop
x, a = a[len(a)-1], a[:len(a)-1]
Push
a = append(a, x)
Update: Here is a link to a blog post all about slices from the go team itself, which does a good job of explaining the relationship between slices and arrays and slice internals.
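For instance, here is a small runnable sketch of my own (not taken from the wiki) that exercises a few of those tricks on an []int:
package main

import "fmt"

func main() {
    a := []int{10, 20, 30, 40, 50}

    // Copy: b gets its own backing array, so later edits to a don't affect it.
    b := make([]int, len(a))
    copy(b, a)

    // Cut: remove a[1:3] (the elements 20 and 30).
    a = append(a[:1], a[3:]...)
    fmt.Println(a) // [10 40 50]

    // Push and Pop, treating the slice as a stack.
    a = append(a, 60)  // push
    x := a[len(a)-1]   // pop: read the last element...
    a = a[:len(a)-1]   // ...then shrink the slice by one
    fmt.Println(x, a, b) // 60 [10 40 50] [10 20 30 40 50]
}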

I asked this question a few months ago, when I first started investigating Go. Since then, every day I have been reading about Go, and coding in Go.
Because I did not receive a clear-cut answer to this question (although I had accepted one answer), I'm now going to answer it myself, based on what I have learned since I asked it:
Is there a way to create an array/slice in Go without a hard-coded array size?
Yes. Slices do not require a hard-coded array to slice from:
sl := make([]int, length, capacity)
This allocates the slice sl with len(sl) == length and cap(sl) == capacity; length and capacity are ordinary variables whose values can be assigned at runtime.
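As a minimal runnable sketch (the sizes here are made up; in real code they would come from input or computation):
package main

import "fmt"

func main() {
    n := 3   // imagine these were read from a file or from user input
    max := 8
    sl := make([]int, n, max)
    fmt.Println(len(sl), cap(sl)) // 3 8
    sl = append(sl, 42)           // grows within the existing capacity
    fmt.Println(len(sl), cap(sl)) // 4 8
}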
Why is list.List ignored?
It appears the main reasons list.List seems to get little attention in Go are:
As has been explained in Nick Craig-Wood's answer, there is virtually nothing that can be done with lists that cannot be done with slices, often more efficiently and with a cleaner, more elegant syntax. For example, the range construct:
for i := range sl {
    sl[i] = i
}
cannot be used with a list - a C-style for loop is required. And in many cases, C++ collection-style method calls must be used with lists: PushBack etc.
Perhaps more importantly, list.List is not strongly typed - it is very similar to Python's lists and dictionaries, which allow mixing various types together in the collection. This seems to run contrary to the Go approach to things. Go is a very strongly typed language - for example, implicit type conversions are never allowed in Go; even an upcast from int to int64 must be explicit. But all the methods of list.List take empty interfaces - anything goes.
One of the reasons that I abandoned Python and moved to Go is because of this sort of weakness in Python's type system, although Python claims to be "strongly typed" (IMO it isn't). Go's list.List seems to be a sort of "mongrel", born of C++'s vector<T> and Python's List(), and is perhaps a bit out of place in Go itself.
It would not surprise me if at some point in the not too distant future, we find list.List deprecated in Go, although perhaps it will remain, to accommodate those rare situations where, even using good design practices, a problem can best be solved with a collection that holds various types. Or perhaps it's there to provide a "bridge" for C family developers to get comfortable with Go before they learn the nuances of slices, which are unique to Go, AFAIK. (In some respects slices seem similar to stream classes in C++ or Delphi, but not entirely.)
Although, coming from a Delphi/C++/Python background, in my initial exposure to Go I found list.List more familiar than Go's slices, as I have become more comfortable with Go I have gone back and changed all my lists to slices. I haven't yet found anything that slices and/or maps do not allow me to do, such that I need to use list.List.

I think that's because there's not much to say about them: the container/list package is rather self-explanatory once you have absorbed the chief Go idiom for working with generic data.
In Delphi (without generics) or in C you would store pointers or TObjects in the list, and then cast them back to their real types when taking them out of the list. In the C++ STL, lists are templates and hence parameterized by type, and in C# (these days) lists are generic.
In Go, container/list stores values of type interface{}, which is a special type capable of representing values of any other (real) type - by storing a pair of pointers: one to the type info of the contained value, and one to the value (or the value directly, if its size is no greater than the size of a pointer). So when you want to add an element to the list, you just do that, as function parameters of type interface{} accept values of any type. But when you extract values from the list and want to work with their real types, you have to either type-assert them or do a type switch on them - both approaches are just different ways to do essentially the same thing.
Here is an example taken from here:
package main

import (
    "container/list"
    "fmt"
)

func main() {
    var x list.List
    x.PushBack(1)
    x.PushBack(2)
    x.PushBack(3)

    for e := x.Front(); e != nil; e = e.Next() {
        fmt.Println(e.Value.(int))
    }
}
Here we obtain an element's value using e.Value and then type-assert it to int, the type of the originally inserted value.
You can read up on type assertions and type switches in "Effective Go" or any other introductory book. The container/list package's documentation summarizes all the methods lists support.
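If the list really does hold mixed types, a type switch is the other option mentioned above. A small sketch (my own example):
package main

import (
    "container/list"
    "fmt"
)

func main() {
    var l list.List
    l.PushBack(42)
    l.PushBack("hello")
    l.PushBack(3.14)

    for e := l.Front(); e != nil; e = e.Next() {
        switch v := e.Value.(type) { // branch on the dynamic type
        case int:
            fmt.Println("int:", v)
        case string:
            fmt.Println("string:", v)
        default:
            fmt.Printf("other (%T): %v\n", v, v)
        }
    }
}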

Note that Go slices can be expanded via the append() builtin function. While this will sometimes require making a copy of the backing array, it won't happen every time, since Go will over-size the new array giving it a larger capacity than the reported length. This means that a subsequent append operation can be completed without another data copy.
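You can watch that over-allocation happen by printing len and cap while appending (a small sketch; the exact capacities are an implementation detail and vary between Go versions):
package main

import "fmt"

func main() {
    var s []int
    oldCap := cap(s)
    for i := 0; i < 1000; i++ {
        s = append(s, i)
        if cap(s) != oldCap { // a reallocation (and copy) happened here
            fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
            oldCap = cap(s)
        }
    }
}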
While you do end up with more data copies than with equivalent code implemented with linked lists, you remove the need to allocate elements in the list individually and the need to update the Next pointers. For many uses the array based implementation provides better or good enough performance, so that is what is emphasised in the language. Interestingly, Python's standard list type is also array backed and has similar performance characteristics when appending values.
That said, there are cases where linked lists are a better choice (e.g. when you need to insert or remove elements from the start/middle of a long list), and that is why a standard library implementation is provided. I guess they didn't add any special language features to work with them because these cases are less common than those where slices are used.

From: https://groups.google.com/forum/#!msg/golang-nuts/mPKCoYNwsoU/tLefhE7tQjMJ
It depends a lot on the number of elements in your lists, whether a true list or a slice will be more efficient when you need to do many deletions in the 'middle' of the list.
#1
The more elements, the less attractive a slice becomes.
#2
When the ordering of the elements isn't important, it is most efficient to use a slice, deleting an element by replacing it with the last element in the slice and reslicing the slice to shrink the len by 1 (as explained in the SliceTricks wiki).
So, use a slice:
1. If the order of elements in the list is not important and you need to delete: just swap the element to delete with the last element and re-slice to (length-1).
2. When elements are more numerous (whatever "more" means).
There are ways to mitigate the deletion problem - e.g. the swap trick you mentioned, or just marking the elements as logically deleted. But it's impossible to mitigate the problem of the slowness of walking linked lists.
So, use a slice:
1. If you need speed in traversal.
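The swap-and-reslice delete mentioned in point 1 of the first summary, written out as a small helper of my own (a sketch; it does not preserve order):
package main

import "fmt"

// deleteUnordered removes s[i] in O(1) by moving the last element into its place.
func deleteUnordered(s []int, i int) []int {
    s[i] = s[len(s)-1]
    return s[:len(s)-1]
}

func main() {
    s := []int{1, 2, 3, 4, 5}
    s = deleteUnordered(s, 1) // remove the 2
    fmt.Println(s)            // [1 5 3 4]
}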

Unless the slice is updated way too often (deleting or adding elements at random locations), the memory contiguity of slices will offer an excellent cache-hit ratio compared to linked lists.
Scott Meyers' talk on the importance of caching:
https://www.youtube.com/watch?v=WDIkqP4JbkE

list.List is implemented as a doubly linked list. Array-based lists (vectors in C++, or slices in Go) are a better choice than linked lists in most situations if you don't frequently insert into the middle of the list. The amortized time complexity of append is O(1) for both an array list and a linked list, even though the array list has to extend its capacity and copy over existing values. Array lists have faster random access and a smaller memory footprint, and, more importantly, they are friendly to the garbage collector because they contain no pointers inside the data structure.

Related

Is Array of Objects Access Time O(1)?

I know that accessing an element of an array takes O(1) time if all the elements are of one type, using the formula address(array[n]) = start address of array + n * sizeOf(type), but assume you have an array of objects. These objects could have any number of fields, including nested objects. Can we consider the access time to be constant?
Edit: I am mainly asking about Java, but I would like to know if there is a difference if I choose another mainstream language like Python, C++, JavaScript, etc.
For example in the below code
class tryInt {
    int a;
    int b;
    String s;

    public tryInt() {
        a = 1;
        b = 0;
        s = "Sdaas";
    }
}

class tryobject {
    public class tryObject1 {
        int a;
        int b;
        int c;
    }

    public tryobject() {
        tryObject1 o = new tryObject1();
        sss = "dsfsdf";
    }

    String sss;
}

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello World!");
        Object[] arr = new Object[5];
        arr[0] = new tryInt();
        arr[1] = new tryobject();
        System.out.println(arr[0]);
        System.out.println(arr[1]);
    }
}
I want to know: since a tryInt object should take less space than a tryobject object, how will the array still use the formula address(array[n]) = start address of array + n * sizeOf(type), given that the element types are no longer the same? It seems this formula should/will fail.
The answer to your question is it depends.
If it's possible to random-access the array when you know the index you want, yes, it's a O(1) operation.
On the other hand, if each item in the array is a different length, or if the array is stored as a linked list, it's necessary to start looking for your element at the beginning of the array, skipping over elements until you find the one corresponding to your index. That's an O(n) operation.
In the real world of working programmers and collections of data, this O(x) stuff is inextricably bound up with the way the data is represented.
Many people reserve the word array to mean a randomly accessible O(1) collection. The pages of a book are an array. If you know the page number you can open the book to the page. (Flipping the book open to the correct page is not necessarily a trivial operation. You may have to go to your library and find the right book first. The analogy applies to multi-level computer storage ... hard drive / RAM / several levels of processor cache)
People use list for a sequentially accessible O(n) collection. The sentences of text on a page of a book are a list. To find the fifth sentence, you must read the first four.
I mention the meaning of the words list and array here for an important reason for professional programmers. Much of our work is maintaining existing code. In our justifiable rush to get things done, sometimes we grab the first collection class that comes to hand, and sometimes we grab the wrong one. For example, we might grab a list O(n) rather than an array O(1) or a hash O(1, maybe). The collection we grab works well for our tests. But, boom!, performance falls over just when the application gets successful and scales up to holding a lot of data. This happens all the time.
To remedy that kind of problem we need a practical understanding of these access issues. I once inherited a project with a homegrown hashed dictionary class that consumed O(n cubed) when inserting lots of items into the dictionary. It took a lot of digging to get past the snazzy collection-class documentation to figure out what was really going on.
In Java, the Object type is a reference to an object rather than an object itself. That is, a variable of type Object can be thought of as a pointer that says “here’s where you should go to find your Object” rather than “I am an actual, honest-to-goodness Object.” Importantly, the size of this reference - the number of bytes used up - is the same regardless of what type of thing the Object variable refers to.
As a result, if you have an Object[], then the cost of indexing into that array is indeed O(1), since the entries in that array are all the same size (namely, the size of an object reference). The sizes of the objects being pointed at might not all be the same, as in your example, but the pointers themselves are always the same size and so the math you’ve given provides a way to do array indexing in constant time.
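A rough Go analogue of the same idea (a sketch; the type names are mine): every element of a []interface{} is a fixed-size descriptor pointing at the real value, so indexing stays constant-time regardless of how big the boxed values are.
package main

import (
    "fmt"
    "unsafe"
)

type small struct{ a int }
type big struct{ a, b, c, d, e, f int }

func main() {
    arr := []interface{}{small{1}, big{1, 2, 3, 4, 5, 6}, "a string"}
    // Each slot is the same size: an interface header (type pointer + data pointer),
    // 16 bytes on a typical 64-bit platform.
    fmt.Println(unsafe.Sizeof(arr[0]), unsafe.Sizeof(arr[1])) // 16 16 on 64-bit
    fmt.Println(arr[1])                                       // constant-time index into the slice
}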
The answer depends on context.
It's really common in some textbooks to treat array access as O(1) because it simplifies analysis.
And in fairness, it is O(1) in CPU instructions on today's architectures.
But:
As the dataset gets larger, tending to infinity, it doesn't fit in memory. If the "array" is implemented as a database structure spread across multiple machines, you'll end up with a tree structure and probably have logarithmic worst-case access times.
If you don't care about data size going to infinity, then big-O notation may not be the right fit for your situation.
On real hardware, memory accesses are not all equal - there are many layers of caches, and cache misses cost hundreds or thousands of cycles. The O(1) model for memory access tends to ignore that.
In theory work, random-access machines access memory in O(1), but Turing machines cannot. Hierarchical cache effects tend to be ignored, though some models like the transdichotomous RAM try to account for them.
In short, this is a property of your model of computation. There are many valid and interesting models of computation and what to choose depends on your needs and your situation.
In general, array denotes a fixed-size memory range storing elements of the same size. If we consider this usual concept of array, then when objects are members of the array, under the hood the array stores the object references, and dereferencing the i-th element yields the reference/pointer it contains, with O(1) complexity; the address it points to is something for the language to resolve.
However, there are arrays which do not comply with this definition. For example, in JavaScript you can easily add items to an array, which makes me think that in JavaScript arrays are somewhat different from an allocated fixed-size range of elements of the same size/type. Also, in JavaScript you can add elements of any type to an array. So, in general I would say the complexity is O(1), but there are quite a few important exceptions to this rule, depending on the technology.

How to access an element in 2d array in Smalltalk

I started coding in Smalltalk and got stuck here. I have this 2d array:
testArr := Array new: 1.
testArr at: 1
    put: ((Array new: 3)
        at: 1 put: '1A';
        at: 2 put: '1B';
        at: 3 put: '1C';
        yourself).
But if I want to access lets say first element of first array, what should I write to make it happen?
Thanks!
I'm tempted to take advantage of your question & answer, and elaborate a little bit on knowledge that belongs in the Smalltalk folklore (which you may already be aware of).
As we progress in the use of Smalltalk, we will likely notice that the Array class starts to play a decreasing role in our models. Why is that? Because it takes time to find out which objects our models will produce; the balance between too many and too few is a delicate matter, and we are mostly clueless at the beginning.
Arrays and their composition are handy data structures. However, they solve the problem of organizing data at the expense of dealing with it. If the client of such a structure needs to know how data is stored as a prerequisite to act on it, then the message paradigm becomes semantically idle.
Let's imagine a matrix object. There are several ways to keep their entries: a one-dimensional array, an array of rows, an array of columns, a dictionary of (sparse) non-null entries, a triangular structure if the matrices are known to be symmetric/anti-symmetric/Hermitian, and a lot more for special cases. Of course, this variety makes no sense for the problem at hand and, in any case, it would be a bad idea to spend time considering the most general approach: In Smalltalk, generality is attained at the message, not at the storage.
Regardless of the internal organization of data, our objects should always offer protocols that are independent of the underlying structure. Back to the matrix example, even if our initial organization is an array of rows, the matrix object should work the same whether rows are arrays or more sophisticated vector objects that are also used for other ends. This means that when coding the internal access to entry (i,j) we should pretend we don't know the class of row i but only the message to access its jth element. Something on the lines of
atRow: i column: j
    | row |
    row := self row: i.
    ^row at: j
Here we are not assuming that row is an Array; we are only assuming that it understands the at: message, which is the least we can assume when talking to a row object, whatever its actual nature. Of course, this code is only good under the assumption that rows are not recreated on the fly, as they would, had our class kept instead collections of columns. But this is ok, otherwise we would only need to add another class and override this and some few other low level messages.
In any case, the idea is to defer as much as possible any explicit knowledge about the internal organization so that it gets confined to a few private messages. One way to test we are applying this good practice is to make sure no low level code repeats in two or more methods. For instance, the use of the row: message above moves low level code away from atRow:column:, deferring it to another that makes sense for the (ideal) matrix protocol.
This example illustrates an important point: be suspicious of any code that needs to compose two at: messages. And --why not-- enjoy the beauty of not having to declare types.
So, the problem was in the brackets.
^(testArr at: 1) at: 1
returns
1A
as I needed.

What are the advantages and disadvantages of 3d array in Mathematica

Edited...
Thanks to everyone trying to help me!!!
I am trying to do a finite element analysis in Mathematica... We can obtain all the local stiffness matrices, which have 8x8 dimensions. I mean there are 2000 matrices; they are similar but not the same. Every local stiffness matrix is given by a function whose name is KK; for example, KK[1] is the first element's local stiffness matrix.
I am trying to assemble all the local matrices to make the global stiffness matrix. To make it easy:
Do[K[e][i][j]=KK[[e]][[i]][[j]],{e,2000},{i,8},{j,8}]....edited
Here is my question: can this assignment affect the analysis time? If so, what can I do to improve it?
In MATLAB this would be called a 3d array, but I don't know what it is called in Mathematica.
What are the advantages and disadvantages of this kind of definition in Mathematica - is it faster, or just the easy way?
Thanks for your help...
It is difficult to understand what your question is, so you might want to reformulate it.
As others have mentioned, there is no advantage to be expected from a switch from a 3D array to DownValues or SubValues. In fact you will then move from accessing data-structures to pattern matching, which is powerful and the real strength of Mathematica but not very efficient for what you plan to do, so I would strongly suggest to stay in the realm of ordinary arrays.
There is another thing that might not be clear to someone more familiar with MATLAB than with Mathematica: in Mathematica, the "default" arrays behave a lot like cell arrays in MATLAB - each entry can contain arbitrary content, and they don't need to be rectangular (as High Performance Mark has mentioned, they are just expressions with head List and can roughly be compared to MATLAB cell arrays). But if such a nested list is a rectangular array and every element of it is of the same type, it can be converted to a so-called PackedArray. PackedArrays are much more memory efficient and will also speed up many calculations; they behave in many respects like regular ("non-cell") arrays in MATLAB. This conversion is often done implicitly by functions like Table, which will often return a packed array automatically. But if you are interested in efficiency, it is a good idea to check with Developer`PackedArrayQ and convert explicitly with Developer`ToPackedArray if necessary. If you are working with PackedArrays, the speed and memory efficiency of many operations are much better and usually comparable to vectorized operations on normal MATLAB arrays. Unfortunately, packed arrays can get "unpacked" by some operations, so if calculations become slow it is usually a good idea to check whether that has happened.
Neither "normal" arrays nor PackedArrays are restricted in the rank (called Depth in Mathematica) they can have, so you can of course create and use "3D arrays" just as you can in MATLAB. I have never experienced, nor would I know of, any efficiency penalties when doing so.
It is probably of interest that newer versions of Mathematica (>= 10) bring the finite element method as one of the solver methods for NDSolve, so if you are not doing this as an exercise you might want to have a look at what is available already; there is quite extensive documentation about it.
A final remark: instead of kk[[e]][[i]][[j]] you can use the much more readable form kk[[e,i,j]], which is also easier and less error-prone to type.
Extended comment, I guess, but:
KK[e][[i]][[j]]
is not the (e,i,j) element of a "3d array". Note the single brackets on the e. When you use single brackets you are not denoting an array or list element but a DownValue, which is quite different from a list element.
If you do for example,
f[1]=0
f[2]=2
...
the resulting f appears similar to an array, but is actually more akin to an overloaded function in some other language. It is convenient because the indices need not be contiguous or even integers, but there is a significant performance drawback if you ever want to operate on the structure as a list.
Your 'do' loop example would almost certainly be better written as:
kk = Table[ k[e][i][j] ,{e,2000},{i,8},{j,8} ]
(Your loop won't even work as-is unless you previously "initialized" each of the kk[e] as an 8x8 array.)
Note now the list elements are all double bracketed, ie kk[[e]][[i]][[j]] or kk[[e,i,j]]

What is the actual definition of an array? [duplicate]

Possible Duplicate:
Arrays, What’s the point?
I tried to ask this question before in What is the difference between an array and a list? but my question was closed before reaching a conclusive answer (more about that).
I'm trying to understand what is really meant by the word "array" in computer science. I am trying to reach an answer not have a discussion as per the spirit of this website. What I'm asking is language agnostic but you may draw on your knowledge of what arrays are/do in various languages that you've used.
Ways of thinking about this question:
Imagine you're designing a new programming language and you decide to implement arrays in it; what does that mean? What will the properties and capabilities of those things be? If it depends on the type of language, how so?
What makes an array an array?
When is an array not an array? When it is, for example, a list, vector, table, map, or collection?
It's possible there isn't one precise definition of what an array is. If that is the case, then are there any standard or near-standard assumptions of what an array is? Are there any common areas at least? Maybe there are several definitions; if that is the case, I'm looking for the most precision in each of them.
Language examples:
(Correct me if I'm wrong on any of these).
C arrays are contiguous blocks of memory of a single type that can be traversed using pointer arithmetic or accessed at a specific offset point. They have a fixed size.
Arrays in JavaScript, Ruby, and PHP have a variable size and can store an object/scalar of any type; they can also grow or have elements removed from them.
PHP arrays come in two types: numeric and associative. Associative arrays have elements that are stored and retrieved with string keys. Numeric arrays have elements that are stored and retrieved with integers. Interestingly if you have: $eg = array('a', 'b', 'c') and you unset($eg[1]) you still retrieve 'c' with $eg[2], only now $eg[1] is undefined. (You can call array_values() to re-index the array). You can also mix string and integer keys.
At this stage I'm sort of suspecting that C arrays are the only true arrays here and that, strictly speaking, for an array to be an array it has to have all the characteristics I mention in that first bullet point. If that's the case then - again, these are suspicions that I'm looking to have confirmed or rejected - arrays in JS and Ruby are actually vectors, and PHP arrays are probably tables of some kind.
Final note: I've made this community wiki so if answers need to be edited a few times in lieu of comments, go ahead and do that. Consensus is in order here.
It is, or should be, all about abstraction
There is actually a good question hidden in there, a really good one, and it brings up a language pet peeve I have had for a long time.
And it's getting worse, not better.
OK: there is something the lowly and widely disrespected Fortran got right that my favorite languages like Ruby still get wrong: they use different syntax for function calls, arrays, and attributes. Exactly how abstract is that? In Fortran, function(1) has the same syntax as array(1), so you can change one to the other without altering the program. (I know, not for assignments, and in the case of Fortran it was probably an accident of goofy punch-card character sets and not anything deliberate.)
The point is, I'm really not sure that x.y, x[y], and x(y) should have different syntax. What is the benefit of attaching a particular abstraction to a specific syntax? To make more jobs for IDE programmers working on refactoring transformations?
Having said all that, it's easy to define array. In its first normal form, it's a contiguous sequence of elements in memory accessed via a numeric offset and using a language-specific syntax. In higher normal forms it is an attribute of an object that responds to a typically-numeric message.
array |əˈrā|
noun
1 an impressive display or range of a particular type of thing : there is a vast array of literature on the topic | a bewildering array of choices.
2 an ordered arrangement, in particular
an arrangement of troops.
Mathematics: an arrangement of quantities or symbols in rows and columns; a matrix.
Computing: an ordered set of related elements.
Law: a list of jurors empaneled.
3 poetic/literary elaborate or beautiful clothing : he was clothed in fine array.
verb
[ trans. ] (usu. be arrayed) display or arrange (things) in a particular way : arrayed across the table was a buffet | the forces arrayed against him.
[ trans. ] (usu. be arrayed in) dress someone in (the clothes specified) : they were arrayed in Hungarian national dress.
[ trans. ] Law empanel (a jury).
ORIGIN Middle English (in the senses [preparedness] and [place in readiness] ): from Old French arei (noun), areer (verb), based on Latin ad- ‘toward’ + a Germanic base meaning ‘prepare.’
From FOLDOC:
array
1. <programming> A collection of identically typed data items
distinguished by their indices (or "subscripts"). The number
of dimensions an array can have depends on the language but is
usually unlimited.
An array is a kind of aggregate data type. A single
ordinary variable (a "scalar") could be considered as a
zero-dimensional array. A one-dimensional array is also known
as a "vector".
A reference to an array element is written something like
A[i,j,k] where A is the array name and i, j and k are the
indices. The C language is peculiar in that each index is
written in separate brackets, e.g. A[i][j][k]. This expresses
the fact that, in C, an N-dimensional array is actually a
vector, each of whose elements is an N-1 dimensional array.
Elements of an array are usually stored contiguously.
Languages differ as to whether the leftmost or rightmost index
varies most rapidly, i.e. whether each row is stored
contiguously or each column (for a 2D array).
Arrays are appropriate for storing data which must be accessed
in an unpredictable order, in contrast to lists which are
best when accessed sequentially. Array indices are
integers, usually natural numbers, whereas the elements of
an associative array are identified by strings.
2. <architecture> A processor array, not to be confused with
an array processor.
Also note that in some languages, when they say "array" they actually mean "associative array":
associative array
<programming> (Or "hash", "map", "dictionary") An array
where the indices are not just integers but may be
arbitrary strings.
awk and its descendants (e.g. Perl) have associative
arrays which are implemented using hash coding for faster
look-up.
If you ignore how programming languages model arrays and lists, and ignore the implementation details (and consequent performance characteristics) of the abstractions, then the concepts of array and list are indistinguishable.
If you introduce implementation details (still independent of programming language) you can compare data structures like linked lists, array lists, regular arrays, sparse arrays and so on. But then you are no longer comparing arrays and lists per se.
The way I see it, you can only talk about a distinction between arrays and lists in the context of a programming language. And of course you are then talking about arrays and lists as supported by that language. You cannot generalize to any other language.
In short, I think this question is based on a false premise, and has no useful answer.
EDIT: in response to Ollie's comments:
I'm not saying that it is not useful to use the words "array" and "list". What I'm saying is the words do not and cannot have precise and distinct definitions ... except in the context of a specific programming language. While you would like the two words to have distinct meaning, it is a fact that they don't. Just take a look at the way the words are actually used. Furthermore, trying to impose a new set of definitions on the world is doomed to fail.
My point about implementation is that when we compare and contrast the different implementations of arrays and lists, we are doing just that. I'm not saying that it is not a useful thing to do. What I am saying is that when we compare and contrast the various implementations we should not get all hung up about whether we call them arrays or lists or whatever. Rather we should use terms that we can agree on ... or not use terms at all.
To me, "array" means "ordered collection of things that is probably efficiently indexable" and "list" means "ordered collection of things that may be efficiently indexable". But there are examples of both arrays and lists that go against the trend; e.g. PHP arrays on the one hand, and Java ArrayLists on the other hand. So if I want to be precise ... in a language-agnostic context, I have to talk about "C-like arrays" or "linked lists" or some other terminology that makes it clear what data structure I really mean. The terms "array" and "list" are of no use if I want to be clear.
An array is an ordered collection of data items indexed by integer. It is not possible to be certain of anything more. Vote for this answer if you believe this is the only reasonable outcome of this question.
An array:
is a finite collection of elements
has elements that are ordered, and this is their only structure
has elements of the same type
supports efficient random access
has no expectation of efficient insertions
may or may not support append
(1) differentiates arrays from things like iterators or generators. (2) differentiates arrays from sets. (3) differentiates arrays from things like tuples where you get an int and a string. (4) differentiates arrays from other types of lists. Maybe it's not always true, but a programmer's expectation is that random access is constant time. (5) and (6) are just there to deny additional requirements.
I would argue that a real array stores values in contiguous memory. Anything else is only called an array because it can be used like an array; it isn't really one ("arrays" in PHP are definitely not actual (non-associative) arrays). Vectors and such are extensions of arrays, adding additional functionality.
An array is a container, and the objects it holds have no relationship except their order; the objects are stored in a contiguous space, at least abstractly (and at a low level the space is often contiguous too), so you can access them by slot[x].
For example, for the array [2,3,5,7,1], you get 5 using slot[2] (slot[3] in some languages).
A list is a container too, but each object (well, each object holder, such as a slot or node) it holds has indicators that "point" to other object(s), and this is the main relationship; in general, at both the high and the low level, the space is not contiguous (though it may be), so accessing elements by slot[x] is not recommended.
For example, for |-2-3-5-7-1-|, you need to travel from the first object to the 3rd one to get 5.

Array versus linked-list

Why would someone want to use a linked-list over an array?
Coding a linked-list is, no doubt, a bit more work than using an array and one may wonder what would justify the additional effort.
I think insertion of new elements is trivial in a linked-list but it's a major chore in an array. Are there other advantages to using a linked list to store a set of data versus storing it in an array?
This question is not a duplicate of this question because the other question is asking specifically about a particular Java class while this question is concerned with the general data structures.
Another good reason is that linked lists lend themselves nicely to efficient multi-threaded implementations. The reason for this is that changes tend to be local - affecting only a pointer or two for insert and remove at a localized part of the data structure. So, you can have many threads working on the same linked list. Even more, it's possible to create lock-free versions using CAS-type operations and avoid heavy-weight locks altogether.
With a linked list, iterators can also traverse the list while modifications are occurring. In the optimistic case where modifications don't collide, iterators can continue without contention.
With an array, any change that modifies the size of the array is likely to require locking a large portion of the array and in fact, it's rare that this is done without a global lock across the whole array so modifications become stop the world affairs.
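A simplified sketch of the CAS idea in Go (the stack type and all names here are mine, not a library API): concurrent pushes and pops contend only on the head pointer.
package main

import (
    "fmt"
    "sync"
    "sync/atomic"
)

type node struct {
    value int
    next  *node
}

type stack struct {
    head atomic.Pointer[node] // requires Go 1.19+
}

func (s *stack) push(v int) {
    n := &node{value: v}
    for {
        old := s.head.Load()
        n.next = old
        if s.head.CompareAndSwap(old, n) {
            return // no other goroutine changed head in between
        }
    }
}

func (s *stack) pop() (int, bool) {
    for {
        old := s.head.Load()
        if old == nil {
            return 0, false
        }
        if s.head.CompareAndSwap(old, old.next) {
            return old.value, true
        }
    }
}

func main() {
    var s stack
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func(i int) { defer wg.Done(); s.push(i) }(i)
    }
    wg.Wait()

    n := 0
    for {
        if _, ok := s.pop(); !ok {
            break
        }
        n++
    }
    fmt.Println("popped", n, "items") // popped 100 items
}
Note that the pop here leans on Go's garbage collector to sidestep the classic ABA problem; in a non-GC language this sketch would need more care.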
It's easier to store data of different sizes in a linked list. An array assumes every element is exactly the same size.
As you mentioned, it's easier for a linked list to grow organically. An array's size needs to be known ahead of time, or re-created when it needs to grow.
Shuffling a linked list is just a matter of changing what points to what. Shuffling an array is more complicated and/or takes more memory.
As long as your iterations all happen in a "foreach" context, you don't lose any performance in iteration.
Wikipedia has a very good section about the differences:
Linked lists have several advantages over arrays. Elements can be inserted into linked lists indefinitely, while an array will eventually either fill up or need to be resized, an expensive operation that may not even be possible if memory is fragmented. Similarly, an array from which many elements are removed may become wastefully empty or need to be made smaller.
On the other hand, arrays allow random access, while linked lists allow only sequential access to elements. Singly-linked lists, in fact, can only be traversed in one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays is also faster than on linked lists on many machines due to locality of reference and data caches. Linked lists receive almost no benefit from the cache.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or boolean values. It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools.
http://en.wikipedia.org/wiki/Linked_list
I'll add another - lists can act as purely functional data structures.
For instance, you can have completely different lists sharing the same end section
a = (1 2 3 4, ....)
b = (4 3 2 1 1 2 3 4 ...)
c = (3 4 ...)
i.e.:
b = 4 -> 3 -> 2 -> 1 -> a
c = a.next.next
without having to copy the data being pointed to by a into b and c.
This is why they are so popular in functional languages, which use immutable variables - prepend and tail operations can occur freely without having to copy the original data - very important features when you're treating data as immutable.
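A tiny sketch of that structural sharing with a hand-rolled immutable list in Go (names are mine):
package main

import "fmt"

// node is a cell of an immutable singly linked list.
type node struct {
    value int
    next  *node
}

// prepend returns a new list whose tail is the existing list, unchanged.
func prepend(v int, tail *node) *node {
    return &node{value: v, next: tail}
}

func dump(n *node) {
    for ; n != nil; n = n.next {
        fmt.Print(n.value, " ")
    }
    fmt.Println()
}

func main() {
    a := prepend(1, prepend(2, prepend(3, prepend(4, nil)))) // (1 2 3 4)
    b := prepend(4, prepend(3, prepend(2, prepend(1, a))))   // (4 3 2 1 1 2 3 4), shares a
    c := a.next.next                                         // (3 4), shares a's tail
    dump(a)
    dump(b)
    dump(c)
}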
Besides inserting into the middle of the list being easier - it's also much easier to delete from the middle of a linked list than an array.
But frankly, I've never used a linked list. Whenever I needed fast insertion and deletion, I also needed fast lookup, so I went to a HashSet or a Dictionary.
Merging two linked lists (especially two doubly linked lists) is much faster than merging two arrays (assuming the merge is destructive). The former takes O(1), the latter takes O(n).
EDIT: To clarify, I meant "merging" here in the unordered sense, not as in merge sort. Perhaps "concatenating" would have been a better word.
A widely unappreciated argument for ArrayList and against LinkedList is that LinkedLists are uncomfortable while debugging. The time spent by maintenance developers to understand the program, e.g. to find bugs, increases, and IMHO it sometimes does not justify the nanoseconds of performance improvement or the bytes of memory saved in enterprise applications. Sometimes (well, of course it depends on the type of application) it's better to waste a few bytes but have an application which is more maintainable or easier to understand.
For example, in a Java environment and using the Eclipse debugger, debugging an ArrayList will reveal a very easy to understand structure:
arrayList ArrayList<String>
    elementData Object[]
        [0] Object "Foo"
        [1] Object "Foo"
        [2] Object "Foo"
        [3] Object "Foo"
        [4] Object "Foo"
        ...
On the other hand, watching the contents of a LinkedList and finding specific objects becomes an Expand-The-Tree clicking nightmare, not to mention the cognitive overhead needed to filter out the LinkedList internals:
linkedList LinkedList<String>
    header LinkedList$Entry<E>
        element E
        next LinkedList$Entry<E>
            element E "Foo"
            next LinkedList$Entry<E>
                element E "Foo"
                next LinkedList$Entry<E>
                    element E "Foo"
                    next LinkedList$Entry<E>
                    previous LinkedList$Entry<E>
                ...
                previous LinkedList$Entry<E>
            previous LinkedList$Entry<E>
        previous LinkedList$Entry<E>
First of all, in C++ linked lists shouldn't be much more trouble to work with than an array. You can use std::list or the Boost pointer list for linked lists. The key issues with linked lists vs. arrays are the extra space required for pointers and terrible random access. You should use a linked list if:
you don't need random access to the data
you will be adding/deleting elements, especially in the middle of the list
For me it is like this:
Access
Linked lists allow only sequential access to elements, so the algorithmic complexity is of the order of O(n).
Arrays allow random access to their elements, so the complexity is of the order of O(1).
Storage
Linked lists require extra storage for references. This makes them impractical for lists of small data items such as characters or boolean values.
Arrays do not need extra storage to point to the next data item. Each element can be accessed via its index.
Size
The size of a linked list is dynamic by nature.
The size of an array is restricted by its declaration.
Insertion/Deletion
Elements can be inserted into and deleted from linked lists indefinitely.
Insertion/deletion of values in an array is very expensive: it requires shifting elements and, possibly, reallocating memory.
Two things:
Coding a linked list is, no doubt, a bit more work than using an array and he wondered what would justify the additional effort.
Never code a linked list when using C++. Just use the STL. How hard it is to implement should never be a reason to choose one data structure over another because most are already implemented out there.
As for the actual differences between an array and a linked list, the big thing for me is how you plan on using the structure. I'll use the term vector since that's the term for a resizable array in C++.
Indexing into a linked list is slow because you have to traverse the list to get to the given index, while a vector is contiguous in memory and you can get there using pointer math.
Appending onto the end or the beginning of a linked list is easy, since you only have to update one link, where in a vector you may have to resize and copy the contents over.
Removing an item from a list is easy, since you just have to break a pair of links and then attach them back together. Removing an item from a vector can be either faster or slower, depending if you care about order. Swapping in the last item over top the item you want to remove is faster, while shifting everything after it down is slower but retains ordering.
Eric Lippert recently had a post on one of the reasons arrays should be used conservatively.
Fast insertion and removal are indeed the best arguments for linked lists. If your structure grows dynamically and constant-time access to any element isn't required (such as dynamic stacks and queues), linked lists are a good choice.
Here's a quick one: Removal of items is quicker.
Linked lists are especially useful when the collection is constantly growing and shrinking. For example, it's hard to imagine trying to implement a queue (add to the end, remove from the front) using an array - you'd be spending all your time shifting things down. On the other hand, it's trivial with a linked list.
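For instance, a minimal FIFO queue over Go's container/list (a sketch; a slice-based ring buffer is another common choice):
package main

import (
    "container/list"
    "fmt"
)

func main() {
    q := list.New()

    // Enqueue at the back, dequeue from the front; both are O(1) pointer updates.
    q.PushBack("a")
    q.PushBack("b")
    q.PushBack("c")

    for q.Len() > 0 {
        front := q.Front()
        fmt.Println(front.Value) // a, b, c in order
        q.Remove(front)
    }
}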
Other than adding to and removing from the middle of the list, I like linked lists more because they can grow and shrink dynamically.
Arrays vs. linked lists:
Array memory allocation will sometimes fail because of fragmented memory.
Caching is better with arrays, as all elements are allocated contiguous memory space.
Coding a linked list is more complex than coding an array.
There is no size constraint on a linked list, unlike arrays.
Insertion/deletion is faster in a linked list, and access is faster in an array.
Linked lists are better from a multi-threading point of view.
No one ever codes their own linked list anymore. That'd be silly. The premise that using a linked list takes more code is just wrong.
These days, building a linked list is just an exercise for students so they can understand the concept. Instead, everyone uses a pre-built list. In C++, based on the description in your question, that'd probably mean an STL vector (#include <vector>).
Therefore, choosing a linked list vs. an array is entirely about weighing the different characteristics of each structure relative to the needs of your app. Overcoming the additional programming burden should have zero impact on the decision.
It's really a matter of efficiency: the overhead to insert, remove or move (where you are not simply swapping) elements inside a linked list is minimal, i.e. the operation itself is O(1), versus O(n) for an array. This can make a significant difference if you are operating heavily on a list of data. You choose your data types based on how you will be operating on them, and choose the most efficient one for the algorithm you are using.
Arrays make sense where the exact number of items will be known, and where searching by index makes sense. For example, if I wanted to store the exact state of my video output at a given moment without compression I would probably use an array of size [1024][768]. This will provide me with exactly what I need, and a list would be much, much slower to get the value of a given pixel. In places where an array does not make sense there are generally better data types than a list to deal with data effectively.
As arrays are static in nature, all operations like memory allocation occur at compile time only, so the processor has to put in less effort at runtime.
Suppose you have an ordered set, which you also want to modify by adding and removing elements. Further, you need ability to retain a reference to an element in such a way that later you can get a previous or next element. For example, a to-do list or set of paragraphs in a book.
First we should note that if you want to retain references to objects outside of the set itself, you will likely end up storing pointers in the array, rather than storing objects themselves. Otherwise you will not be able to insert into array - if objects are embedded into the array they will move during insertions and any pointers to them will become invalid. Same is true for array indexes.
Your first problem, as you have noted yourself, is insertion: a linked list allows inserting in O(1), but an array would generally require O(n). This problem can be partially overcome - it is possible to create a data structure that gives an array-like, by-ordinal access interface where both reading and writing are, at worst, logarithmic.
Your second, and more severe, problem is that given an element, finding the next element is O(n). If the set were not modified, you could retain the index of the element as the reference instead of the pointer, thus making find-next an O(1) operation, but as it is, all you have is a pointer to the object itself and no way to determine its current index in the array other than by scanning the entire "array". This is an insurmountable problem for arrays - even if you can optimize insertions, there is nothing you can do to optimize the find-next type of operation.
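In Go terms, this is what retaining a *list.Element buys you once the collection is being modified (a small sketch of my own):
package main

import (
    "container/list"
    "fmt"
)

func main() {
    l := list.New()
    for _, s := range []string{"intro", "chapter 1", "chapter 2", "appendix"} {
        l.PushBack(s)
    }

    // Hold on to a reference to one element.
    ch1 := l.Front().Next() // "chapter 1"

    // Mutate the list around it; the reference stays valid.
    l.InsertBefore("preface", l.Front())
    l.Remove(l.Back())

    // Finding the neighbours of the retained element is O(1).
    fmt.Println(ch1.Prev().Value, "<-", ch1.Value, "->", ch1.Next().Value)
}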
In an array you have the privilege of accessing any element in O(1) time, so it's suitable for operations like binary search, quicksort, etc. A linked list, on the other hand, is suitable for insertion/deletion, as that takes O(1) time. Both have advantages as well as disadvantages, and preferring one over the other boils down to what you want to implement.
A bigger question is whether we can have a hybrid of both - something like what Python and Perl implement as lists.
Linked List
It's preferable when it comes to insertion! Basically, all it does is deal with pointers:
1 -> 3 -> 4
Insert (2)
1........3......4
.....2
Finally
1 -> 2 -> 3 -> 4
One arrow from the 2 points at 3, and the arrow of 1 points at 2.
Simple!
But with an array:
| 1 | 3 | 4 |
Insert (2)
| 1 | 3 | | 4 |
| 1 | | 3 | 4 |
| 1 | 2 | 3 | 4 |
Well, anyone can visualize the difference!
Just for 4 slots we are performing 3 steps.
What if the array length is one million? Is the array efficient then?
The answer is NO! :)
The same thing goes for deletion!
In a linked list we can simply adjust the pointers and nullify the element and its next reference in the node object.
But for an array, we need to perform shiftLeft().
Hope that helps! :)
Linked lists are more of an overhead to maintain than arrays, and they also require additional memory storage - all these points are agreed. But there are a few things an array can't do. In many cases, suppose you want an array of length 10^9; you may not be able to get it, because a single contiguous memory block of that size has to be available. A linked list could be a saviour here.
Suppose you want to store multiple things alongside the data; a linked list node can easily be extended to hold them.
Some STL containers (std::list, for example) use a linked-list implementation behind the scenes.
1- A linked list is a dynamic data structure, so it can grow and shrink at runtime by allocating and deallocating memory, and there is no need to give it an initial size. Insertion and deletion of nodes are really easy.
2- The size of a linked list can increase or decrease at runtime, so there is no memory wastage. In the case of an array there can be a lot of memory wastage: if we declare an array of size 10 and store only 6 elements in it, then the space of 4 elements is wasted. There is no such problem in a linked list, as memory is allocated only when required.
3- Data structures such as stacks and queues can be easily implemented using a linked list.
The only reason to use a linked list is that inserting an element is easy (removing one, too).
A disadvantage could be that the pointers take a lot of space.
And about the coding being harder: usually you don't need to code a linked list (or you do it only once); they are included in the STL, and it is not so complicated if you still have to do it.
I also think that linked lists are better than arrays, because we do traversal in linked lists but not in arrays.
Depending on your language, some of these disadvantages and advantages could be considered:
C programming language: when using a linked list (typically through struct pointers), special care must be taken to make sure you are not leaking memory. As was mentioned earlier, linked lists are easy to shuffle, because all we're doing is changing pointers, but are we going to remember to free everything?
Java: Java has automatic garbage collection, so leaking memory won't be an issue, but the implementation details of what a linked list is are hidden from the high-level programmer. Methods such as removing a node from the middle of the list are a more complicated procedure than some users of the language would expect.
Why a linked list over an array? Well, as some have already said: greater speed of insertions and deletions.
But maybe we don't have to live with the limits of either, and can get the best of both at the same time... eh?
For array deletions, you can use a 'deleted' byte to represent the fact that a row has been deleted, so reorganizing the array is no longer necessary. To ease the burden of insertions, or of rapidly changing data, use a linked list for that. Then, when referring to them, have your logic search one first, then the other. Thus, using them in combination gives you the best of both.
If you have a really large array, you could combine it with another, much smaller array or linked list where the smaller one holds the 20, 50, or 100 most recently used items. If the one needed is not in the shorter linked list or array, you go to the large array. If found there, you can then add it to the smaller linked list/array on the presumption that 'things most recently used are most likely to be re-used' (and yes, possibly bumping the least recently used item from the list). This is true in many cases and solved a problem I had to tackle in an ASP security-permissions-checking module with ease, elegance, and impressive speed.
While many of you have touched upon major advantages/disadvantages of linked lists vs. arrays, most of the comparisons are about how one is better or worse than the other - e.g. you can do random access in an array but not in a linked list, and so on. However, this assumes linked lists and arrays are going to be applied in the same kind of application. A more complete answer should describe when a linked list would be preferred over an array, and vice versa, in a particular application deployment.
Suppose you want to implement a dictionary application; what would you use?
Array: hmm, it would allow easy retrieval through binary search and other search algorithms... but let's think about how a linked list can be better. Say you want to search for "Blob" in the dictionary. Would it make sense to have a linked list A->B->C->D...->Z where each list element also points to an array, or to another list, of all the words starting with that letter?
A -> B -> C -> ...Z
| | |
| | [Cat, Cave]
| [Banana, Blob]
[Adam, Apple]
Now, is the above approach better, or is a flat array of [Adam, Apple, Banana, Blob, Cat, Cave]? Would it even be possible with an array?
So a major advantage of a linked list is that you can have an element point not just to the next element but also to some other linked list, array, heap, or any other memory location.
An array is one flat, contiguous piece of memory sliced into blocks the size of the element it is going to store. A linked list, on the other hand, is a set of chunks of non-contiguous memory (which can be of any size and can store anything) pointing to each other in whatever way you want.
Similarly, let's say you are making a USB drive. Now, would you like files to be saved as an array or as a linked list? I think you get the idea of what I am pointing at :)

Resources