Why aren't empty arrays and hashes interned?

Recently I found out that Ruby doesn't optimize [] and {} by interning them so they point to a common shared object. Demo:
irb(main):001:0> [].object_id
=> 70284401361960
irb(main):002:0> [].object_id
=> 70284392762340 # different
irb(main):003:0> [].object_id
=> 70284124310100 # different
irb(main):005:0> {}.object_id
=> 70284392857480
irb(main):006:0> {}.object_id
=> 70284392870480 # different
irb(main):007:0> {}.object_id
=> 70284392904360 # different
I understand that often empty hashes and array literals are used to initialize values that will be immediately mutated. But this happens even if you do [].freeze.object_id or {}.freeze.object_id instead.
Contrast this with String, when the env var RUBYOPT is set to --enable-frozen-string-literal:
irb(main):001:0> "".object_id
=> 70284400947400
irb(main):002:0> "".object_id
=> 70284400947400 # same
irb(main):003:0> "".object_id
=> 70284400947400 # same
Even if you don't enable frozen string literals, calling "".freeze.object_id instead yields the same object id each time, though I suspect the initial "" literal still allocates an intermediate string object that freeze is then called on.
In a performance-sensitive codebase (well, as performance-sensitive as you can allow yourself to be while still using MRI lol) I've seen this workaround, which uses the following shared constants instead of [] or {} when the hash or array doesn't need to be mutable:
module LessAllocations
  EMPTY_HASH = {}.freeze
  EMPTY_ARRAY = [].freeze
  # String literals are already frozen; use '' instead
  # EMPTY_STRING = ''
end
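A typical use (illustrative) is returning a default when nothing has been set, without allocating a fresh object on every call:
def metadata
  # Reuses the shared frozen hash instead of allocating a new {} per call
  @metadata || LessAllocations::EMPTY_HASH
end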
So my questions are:
Is this a missed optimization opportunity?
It seems like a peephole optimization could be written to intern frozen empty hash and array literals. It would need to be part of the language's semantics, i.e. would there be any observable differences (obviously besides the behavior of object_id)?
Are empty hash and array literals represented by tagged pointers? Do they even cause any allocation to happen?
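A quick way to check the allocation question empirically (a sketch relying on MRI's GC.stat allocation counter, available since 2.2):
before = GC.stat(:total_allocated_objects)
x = []
after = GC.stat(:total_allocated_objects)
after - before # => 1 on MRI: the empty literal really does allocate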

Is this a missed optimization opportunity?
To answer this question one would have to survey real-world memory usage, but I believe it is unlikely to be worthwhile. You could perform a little experiment...
class Array
  EMPTY_ARRAY = [].freeze
  def freeze
    # Return the one shared frozen instance for empty arrays
    empty? ? EMPTY_ARRAY : super
  end
end
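With that patch applied, every [].freeze returns one shared object (note the receiver itself is left unfrozen, which is an observable difference from stock behavior):
a = [].freeze
b = [].freeze
a.equal?(b) # => true: both are the shared EMPTY_ARRAY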
Empty objects are very small. Having so many of them that they account for significant memory, relative to everything else your program allocates, is an edge case.
I understand that often empty hashes and array literals are used to initialize values that will be immediately mutated.
For that reason, adding copy-on-write to empty hashes and arrays might slow things down.
But this happens even if you do [].freeze.object_id or
{}.freeze.object_id instead.
Freezing them means you know ahead of time that they will remain empty, which is extremely rare. Having so many known-empty hashes and arrays that they become a performance issue is an edge case. The constant workaround seems fine.

Related

Why do val and var behave the same for arrays in Kotlin?

fun sample() {
    val x = 10
    x = 11 // will give an error as it cannot be reassigned
    val arr = arrayOf(1, 2, 3)
    arr[0] = 5 // will not give an error, but why? Aren't they supposed to be final?
}
Why doesn't val work for arrays?
arr itself is immutable and can't be reassigned... the contents of the array are not. For example, you could not do:
val arr1 = arrayOf(1,2,3)
val arr2 = arrayOf(4,5,6)
arr1 = arr2 // error
An alternative if you want a "read-only" list is to use listOf()
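For example (a minimal sketch):
fun main() {
    val nums = listOf(1, 2, 3) // read-only List<Int>
    // nums[0] = 5             // compile error: List has no 'set' operator
    val more = nums + 4        // operations return new lists instead
    println(more)              // [1, 2, 3, 4]
}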
By declaring an array with val, you declare that reassigning the array isn't possible, but the array's items are still reassignable.
vals aren't really immutable data, it's better to think of them as read-only references. You can change the value of a var, but you can't change the thing a val points at. But that doesn't mean the thing itself can't change!
If the object itself is mutable, or has mutable var properties, or has val properties that point to other mutable things... you get the idea. You can't guarantee absolute immutable state, it's just not baked into the language (or Java itself - final just means you can't reassign the variable!)
That's why Kotlin has a bunch of features to try and help you avoid mutable state, or at least to prevent you from changing things. Collections (and the return types of the functions that operate on them) are usually immutable by default, and you have to explicitly ask for a mutableList or whatever. That's enforcing immutability by stopping you from accessing methods like add and remove, but it still doesn't stop you from putting mutable objects in there!
data classes are another tool, and they're sort of aimed at a more functional, data-driven approach. The idea is that ideally, you'll just put vals in them, and any non-primitive data types you use will also consist of vals - so when you drill down, everything ends in immutable variables, and the whole thing is immutable overall. Things like the copy method allow you to "change" the data without actually mutating it, similar to a Copy On Write approach, as the sketch below shows.
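A quick illustration of that copy idea:
data class Point(val x: Int, val y: Int)

fun main() {
    val p = Point(1, 2)
    val q = p.copy(y = 5) // "changes" y without mutating p
    println(p)            // Point(x=1, y=2)
    println(q)            // Point(x=1, y=5)
}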
I'd also say that in my experience, it's not uncommon for arrays specifically to always be treated as a mutable type. There's no immutable version of an array in Kotlin, and even F# (which encourages functional transformations on its types) has explicit setters on its Array types. So if you need a read-only array... use a List!

If var seems to deep copy arrays in Swift. Does if let?

In Swift 3.0, the code below gives different addresses for thisArray[0], suggesting that the array was deep copied. Is this actually the case, or am I missing something in my analysis? Does if let behave the same way? It may be irrelevant for if let, as it is immutable...
var thisArray: [String]? = ["One", "Two"]
withUnsafePointer(to: &thisArray![0]) {
    print("thisArray[0] has address \($0)")
}
if var thisArray = thisArray {
    withUnsafePointer(to: &thisArray[0]) {
        print("thisArray[0] has address \($0)")
    }
}
Relevant: https://developer.apple.com/swift/blog/?id=10.
In Swift, Array, String, and Dictionary are all value types.
So, if you assign an existing value type via var or let then a copy occurs. If you assign an existing reference type (such as a class) via var or let then you'll be assigning a reference.
@CharlieS's answer is mostly correct but glosses over some important details...
Semantically, assigning a value type to a different binding (whether a var variable or let constant) always creates a copy. That is, your program code can always safely assume that modifications to one binding of a value type will never affect others.
Or to put it a different way: if you were building your own version of the Swift compiler / runtime / standard library from scratch, you could make every var a = b allocate new memory for a and copy all the memory contents of b, regardless of which value type a and b are. All other things being equal, your implementation would be compatible with all Swift programs.
The downside to value type reassignment always being a copy is that for large types (like collections or composite types), all that copying wastes time and memory. So...
In practice, value types can be implemented in ways that maintain the semantic always-a-copy guarantee of value types while providing performance optimizations like copy-on-write. The Swift Standard Library collection types (arrays, dictionaries, sets, etc) do this, and it's possible for custom value types (including yours) to implement copy-on-write too. (For details on how, this WWDC 2015 talk provides a good overview.)
To make copy-on-write work, an implementing value type needs to use reference types internally (as noted in that WWDC talk). And it has to do it in such a way that the language guarantee for value types — that assignments are always semantically copies — continues to hold in all cases.
One of the ways that a copy-on-write array implementation could fail that guarantee would be to allow unguarded access to its underlying storage buffer — if you can get a raw pointer into that storage, you could mutate the contents in ways that cause other bindings (that is, semantic copies) to mutate, violating the language guarantee.
To preserve the copy-on-write guarantee, the standard library's collection types make sure that certain operations that could perform unguarded mutation create copies. (Although even then, sometimes the copies created involve enough reference manipulation that the memory and time costs of the copies remain low up until an actual mutation happens.)
You can see a bit of how this works in the Swift compiler & standard library source code - start from a search for isUniquelyReferenced and follow the callers and callees of its various use cases in ArrayBuffer etc.
For an illustration of what's going on here, let's try a variation on your test:
var thisArray: [String] = ["One", "Two"]
withUnsafePointer(to: &thisArray[0]) {
    print("thisArray[0] has address \($0)")
}
var thatArray = thisArray // comment/uncomment here
withUnsafePointer(to: &thisArray[0]) {
    print("thisArray[0] has address \($0)")
}
When you comment out the assignment thatArray = thisArray, both addresses are the same. Once thisArray is no longer uniquely referenced, though, accessing even the original array's underlying buffer requires a copy (or at least some internal indirection).
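If you want to see the uniqueness check in action, here is a minimal copy-on-write wrapper sketch (the type and names are invented for illustration; the standard library's real implementation is far more involved):
final class Box<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

struct CoWArray<Element> {
    private var storage: Box<[Element]>
    init(_ elements: [Element]) { storage = Box(elements) }

    subscript(i: Int) -> Element {
        get { storage.value[i] }
        set {
            // Copy the buffer only if another value currently shares it
            if !isKnownUniquelyReferenced(&storage) {
                storage = Box(storage.value)
            }
            storage.value[i] = newValue
        }
    }
}
After var a = CoWArray([1, 2]); var b = a, writing through a's subscript triggers the copy, so b keeps its original contents.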

How do I use Array#dig and Hash#dig introduced in Ruby 2.3?

Ruby 2.3 introduces a new method on Array and Hash called dig. The examples I've seen in blog posts about the new release are contrived and convoluted:
# Hash#dig
user = {
  user: {
    address: {
      street1: '123 Main street'
    }
  }
}
user.dig(:user, :address, :street1) # => '123 Main street'

# Array#dig
results = [[[1, 2, 3]]]
results.dig(0, 0, 0) # => 1
I'm not using triple-nested flat arrays. What's a realistic example of how this would be useful?
UPDATE
It turns out these methods solve one of the most commonly-asked Ruby questions. The questions below have something like 20 duplicates, all of which are solved by using dig:
How to avoid NoMethodError for missing elements in nested hashes, without repeated nil checks?
Ruby Style: How to check whether a nested hash element exists
In our case, NoMethodErrors due to nil references are by far the most common errors we see in our production environments.
The new Hash#dig allows you to omit nil checks when accessing nested elements. Since hashes are best used when the structure of the data is unknown or volatile, having official support for this makes a lot of sense.
Let's take your example. The following:
user.dig(:user, :address, :street1)
Is not equivalent to:
user[:user][:address][:street1]
In the case where user[:user] or user[:user][:address] is nil, this will result in a runtime error.
Rather, it is equivalent to the following, which is the current idiom:
user[:user] && user[:user][:address] && user[:user][:address][:street1]
Note how it is trivial to pass a list of symbols that was created elsewhere into Hash#dig, whereas it is not very straightforward to recreate the latter construct from such a list. Hash#dig allows you to easily do dynamic access without having to worry about nil references.
Clearly Hash#dig is also a lot shorter.
One important point to take note of is that Hash#dig itself returns nil if any of the keys along the way turns out to be missing, which can lead to the same class of errors one step down the line, so it can be a good idea to provide a sensible default. (This way of providing an object which always responds to the methods expected is called the Null Object Pattern.)
Again, in your example, an empty string or something like "N/A", depending on what makes sense:
user.dig(:user, :address, :street1) || ""
One way to use it is in conjunction with the splat operator when reading from some unknown document model.
some_json = JSON.parse('{"people": {"me": 6, ... } ...}')
# => {"people" => {"me" => 6, ... }, ... }
a_bunch_of_args = response.data[:query]
# => ["people", "me"]
some_json.dig(*a_bunch_of_args)
# => 6
It's useful for working your way through deeply nested Hashes/Arrays, which might be what you'd get back from an API call, for instance.
In theory it saves a ton of code that would otherwise check at each level whether another level exists, without which you risk constant errors. In practice you may still need a lot of that code, as dig will still raise errors in some cases (e.g. if anything in the chain is a non-keyed object), as shown below.
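For example (behavior on MRI):
user = { user: { address: '123 Main street' } }
user.dig(:user, :address, :street1)
# => TypeError: String does not have #dig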
It is for this reason that your question is actually really valid - dig hasn't seen the usage we might expect. This is commented on here for instance: Why nobody speaks about dig.
To make dig avoid these errors, try the KeyDial gem, which I wrote to wrap around dig and force it to return nil/default if any error crops up.

Optional array vs. empty array in Swift

I have a simple Person class in Swift that looks about like this:
class Person {
    var name = "John Doe"
    var age = 18
    var children: [Person]?
    // init function goes here, but does not initialize the children array
}
Instead of declaring children to be an optional array, I could simply declare it and initialize it as an empty array like this:
var children = [Person]()
I am trying to decide which approach is better. Declaring the array as an optional array means that it will not take up any memory at all, whereas an empty array has at least some memory allocated for it, correct? So using the optional array means that there will be at least some memory saving. I guess my first question is: Is there really any actual memory saving involved here, or are my assumptions about this incorrect?
On the other hand, if it is optional then each time I try to use it I will have to check whether it is nil before adding or removing objects from it. So there will be some loss of efficiency there (but not much, I imagine).
I kind of like the optional approach. Not every Person will have children, so why not let children be nil until the Person decides to settle down and raise a family?
At any rate, I would like to know if there are any other specific advantages or disadvantages to one approach or the other. It is a design question that will come up over and over again.
I'm going to make the opposite case from Yordi - an empty array just as clearly says "this Person has no children", and will save you a ton of hassle. children.isEmpty is an easy check for the existence of kids, and you won't ever have to unwrap or worry about an unexpected nil.
Also, as a note, declaring something as optional doesn't mean it takes zero space - it's the .None case of an Optional<Array<Person>>.
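You can check this in a playground (sizes shown for a 64-bit platform; the empty class stands in for the question's Person):
class Person {}
MemoryLayout<[Person]>.size  // 8: an array is a single pointer to its buffer
MemoryLayout<[Person]?>.size // 8: nil reuses the null-pointer representation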
The ability to choose between an empty array and an optional gives us the ability to apply the one that better describes the data from a semantic point of view.
I would choose:
An empty array if the list can be empty, but only as a transient status; in the end it should have at least one element. Being non-optional makes it clear that the array should not stay empty
An optional if it's possible for the list to be empty for the entire life cycle of the container entity. Being optional makes it clear that the list may legitimately have no elements
Let me make some examples:
Purchase order with master and details (one detail per product): a purchase order can have 0 details, but that's a transient status, because it wouldn't make sense having a purchase order with 0 products
Person with children: a person can have no children for their entire life. It is not a transient status (although not necessarily permanent either), and using an optional makes it clear that it's legitimate for a person to have no children.
Note that my opinion is only about making the code more clear and self-explanatory - I don't think there is any significant difference in terms of performance, memory usage, etc. between the two options.
Interestingly enough, we have recently had a few discussions regarding this very same question at work.
Some suggest that there are subtle semantic differences. E.g. nil means a person has no children whatsoever, but then what does 0 mean? Does it mean "has children, the whole 0 of them"? As pure semantics go, "has 0 children" and "has no children" make no difference when working with this model in code. In that case, why not choose the more straightforward and less guard-let-?-y approach?
Some suggest that keeping a nil there may be an indication that, for example, something went wrong when fetching the model from the backend and we got an error instead of children. But I think the model should not try to carry this kind of semantics, and nil should not be used as an indication of some error in the past.
I personally think that the model should be as dumb as possible and the dumbest option in this case is empty array.
Having an optional will make you drag that ? until the end of days and use guard let, if let or ?? over and over again.
You will have to add extra unwrapping logic for your NSCoding implementation, and you will have to write person.children?.count ?? 0 instead of the straightforward person.children.count when you display the model in a view controller.
The final goal of all that manipulation is to display something on the UI.
Would you really say "This person has no children" for nil and "This person has 0 children" for an empty array? I hope you would not :)
Last Straw
Finally, and this is really the strongest argument I have
What is the type of subviews property of UIView: it's var subviews: [UIView] { get }
What is the type of children property of SKNode: it's var children: [SKNode] { get }
There are tons of examples like this in the Cocoa frameworks: UIViewController::childViewControllers and more.
Even from the pure Swift world: Dictionary::keys, though this may be a bit far-fetched.
Why is it OK for a person to have nil children, but not for an SKNode? For me the analogy is perfect. Hey, even SKNode's property is named children :)
My view: there must be an obvious reason for keeping those arrays as optionals, like a really good one, otherwise empty array offers same semantics with less unwrapping.
The Last Last Straw
Finally, some references to very good articles on the subject:
http://www.theswiftlearner.com/2015/05/08/empty-or-optional-arrays/
https://www.natashatherobot.com/ios-optional-vs-empty-data-source-swift/
In Natasha's post, you will find a link to NSHipster's blog post, and in its Swiftification paragraph you can read this:
For example, instead of marking NSArray return values as nullable, many APIs have been modified to return an empty array—semantically these have the same value (i.e., nothing), but a non-optional array is far simpler to work with
Sometimes there's a difference between something not existing and being empty.
Let's say we have an app where a user can modify a list of phone numbers and we save said modifications as modifiedPhoneNumberList. If no modification has ever occurred the array should be nil. If the user has modified the parsed numbers by deleting them all the array should be empty.
Empty means we're going to delete all the existing phone numbers, nil means we keep all the existing phone numbers. The difference matters here.
When we can't differentiate between a property being empty and not existing, or when the difference doesn't matter, empty is the way to go. If a Person were to lose their only child, we should simply remove that child and be left with an empty array, rather than having to check whether the count is 1 and, if so, set the entire array to nil.
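In code, the distinction might look like this (a sketch built around the modifiedPhoneNumberList property described above):
let modifiedPhoneNumberList: [String]? = []

switch modifiedPhoneNumberList {
case .none:
    print("never edited: keep the stored numbers")
case .some(let numbers) where numbers.isEmpty:
    print("edited down to nothing: delete every stored number")
case .some(let numbers):
    print("replace the stored numbers with \(numbers)")
}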
I always use empty arrays.
In my humble opinion, the most important purpose of optionals in Swift is to safely wrap a value that may be nil. An array already acts as this kind of wrapper - you can ask the array whether it has anything inside and access its values safely with for loops, mapping, etc. Do we need to put a wrapper within a wrapper? I don't think so.
Swift is designed to take advantage of optional values and optional unwrapping.
You could also declare the array as nil, as it will save you a very small (almost unnoticeable) amount of memory.
I would go with an optional array instead of an empty array that represents a nil value, to keep Swift's design patterns happy :)
I also think
if let children = children {
}
looks nicer than:
if children != nil {
}

Is there a reason that Swift array assignment is inconsistent (neither a reference nor a deep copy)?

I'm reading the documentation and I am constantly shaking my head at some of the design decisions of the language. But the thing that really got me puzzled is how arrays are handled.
I rushed to the playground and tried these out. You can try them too. So the first example:
var a = [1, 2, 3]
var b = a
a[1] = 42
a
b
Here a and b are both [1, 42, 3], which I can accept. Arrays are referenced - OK!
Now see this example:
var c = [1, 2, 3]
var d = c
c.append(42)
c
d
c is [1, 2, 3, 42] BUT d is [1, 2, 3]. That is, the copy saw the change in the previous example but doesn't see it in this one. The documentation says that's because the length changed.
Now, how about this one:
var e = [1, 2, 3]
var f = e
e[0..2] = [4, 5]
e
f
e is [4, 5, 3], which is cool. It's nice to have a multi-index replacement, but f STILL doesn't see the change even though the length has not changed.
So to sum it up, common references to an array see changes if you change 1 element, but if you change multiple elements or append items, a copy is made.
This seems like a very poor design to me. Am I right in thinking this? Is there a reason I don't see why arrays should act like this?
EDIT: Arrays have changed and now have value semantics. Much more sane!
Note that array semantics and syntax was changed in Xcode beta 3 version (blog post), so the question no longer applies. The following answer applied to beta 2:
It's for performance reasons. Basically, they try to avoid copying arrays as long as they can (and claim "C-like performance"). To quote the language book:
For arrays, copying only takes place when you perform an action that has the potential to modify the length of the array. This includes appending, inserting, or removing items, or using a ranged subscript to replace a range of items in the array.
I agree that this is a bit confusing, but at least there is a clear and simple description of how it works.
That section also includes information on how to make sure an array is uniquely referenced, how to force-copy arrays, and how to check whether two arrays share storage.
From the official documentation of the Swift language:
Note that the array is not copied when you set a new value with subscript syntax, because setting a single value with subscript syntax does not have the potential to change the array’s length. However, if you append a new item to array, you do modify the array’s length. This prompts Swift to create a new copy of the array at the point that you append the new value. Henceforth, a is a separate, independent copy of the array.....
Read the whole section Assignment and Copy Behavior for Arrays in this documentation. You will find that when you replace a range of items in the array, the array makes a copy of all of its items.
The behavior has changed with Xcode 6 beta 3. Arrays are no longer reference types and have a copy-on-write mechanism, meaning as soon as you change an array's content from one or the other variable, the array will be copied and only the one copy will be changed.
Old answer:
As others have pointed out, Swift tries to avoid copying arrays if possible, including when changing values for single indexes at a time.
If you want to be sure that an array variable (!) is unique, i.e. not shared with another variable, you can call the unshare method. This copies the array unless it already only has one reference. Of course you can also call the copy method, which will always make a copy, but unshare is preferred to make sure no other variable holds on to the same array.
var a = [1, 2, 3]
var b = a
b.unshare()
a[1] = 42
a // [1, 42, 3]
b // [1, 2, 3]
The behavior is extremely similar to the Array.Resize method in .NET. To understand what's going on, it may be helpful to look at the history of the . token in C, C++, Java, C#, and Swift.
In C, a structure is nothing more than an aggregation of variables. Applying the . to a variable of structure type will access a variable stored within the structure. Pointers to objects do not hold aggregations of variables, but identify them. If one has a pointer which identifies a structure, the -> operator may be used to access a variable stored within the structure identified by the pointer.
In C++, structures and classes not only aggregate variables, but can also attach code to them. Using . to invoke a method on a variable asks that method to act upon the contents of the variable itself; using -> on a variable which identifies an object asks that method to act upon the object identified by the variable.
In Java, all custom variable types simply identify objects, and invoking a method upon a variable will tell the method what object is identified by the variable. Variables cannot hold any kind of composite data type directly, nor is there any means by which a method can access a variable upon which it is invoked. These restrictions, although semantically limiting, greatly simplify the runtime, and facilitate bytecode validation; such simplifications reduced the resource overhead of Java at a time when the market was sensitive to such issues, and thus helped it gain traction in the marketplace. They also meant that there was no need for a token equivalent to the . used in C or C++. Although Java could have used -> in the same way as C and C++, the creators opted to use single-character . since it was not needed for any other purpose.
In C# and other .NET languages, variables can either identify objects or hold composite data types directly. When used on a variable of a composite data type, . acts upon the contents of the variable; when used on a variable of reference type, . acts upon the object identified by it. For some kinds of operations, the semantic distinction isn't particularly important, but for others it is. The most problematic situations are those in which a composite data type's method that would modify the variable upon which it is invoked is invoked on a read-only variable. If an attempt is made to invoke a method on a read-only value or variable, compilers will generally copy the variable, let the method act upon that copy, and discard it. This is generally safe with methods that only read the variable, but not safe with methods that write to it. Unfortunately, .NET does not as yet have any means of indicating which methods can safely be used with such substitution and which can't.
In Swift, methods on aggregates can expressly indicate whether they will modify the variable upon which they are invoked, and the compiler will forbid the use of mutating methods upon read-only variables (rather than having them mutate temporary copies of the variable which will then get discarded). Because of this distinction, using the . token to call methods that modify the variables upon which they are invoked is much safer in Swift than in .NET. Unfortunately, the fact that the same . token is used for that purpose as to act upon an external object identified by a variable means the possibility for confusion remains.
If one had a time machine and went back to the creation of C# and/or Swift, one could retroactively avoid much of the confusion surrounding such issues by having the languages use the . and -> tokens in a fashion much closer to the C++ usage. Methods of both aggregates and reference types could use . to act upon the variable upon which they were invoked, and -> to act upon a value (for composites) or the thing identified thereby (for reference types). Neither language is designed that way, however.
In C#, the normal practice for a method to modify a variable upon which it is invoked is to pass the variable as a ref parameter. Thus calling Array.Resize(ref someArray, 23); when someArray identifies an array of 20 elements will cause someArray to identify a new array of 23 elements, without affecting the original array. The use of ref makes clear that the method should be expected to modify the variable upon which it is invoked. In many cases, it's advantageous to be able to modify variables without having to use static methods; Swift addresses that need by using . syntax. The disadvantage is that it loses clarity as to which methods act upon variables and which act upon values.
To me this makes more sense if you first replace your constants with variables:
a[i] = 42 // (1)
e[i..j] = [4, 5] // (2)
The first line never needs to change the size of a. In particular, it never needs to do any memory allocation. Regardless of the value of i, this is a lightweight operation. If you imagine that under the hood a is a pointer, it can be a constant pointer.
The second line may be much more complicated. Depending on the values of i and j, you may need to do memory management. If you imagine that e is a pointer that points to the contents of the array, you can no longer assume that it is a constant pointer; you may need to allocate a new block of memory, copy data from the old memory block to the new memory block, and change the pointer.
It seems that the language designers have tried to keep (1) as lightweight as possible. As (2) may involve copying anyway, they have resorted to the solution that it always acts as if you did a copy.
This is complicated, but I am happy that they did not make it even more complicated with e.g. special cases such as "if in (2) i and j are compile-time constants and the compiler can infer that the size of e is not going to change, then we do not copy".
Finally, based on my understanding of the design principles of the Swift language, I think the general rules are these:
Use constants (let) always everywhere by default, and there won't be any major surprises.
Use variables (var) only if it is absolutely necessary, and be very careful in those cases, as there will be surprises [here: strange implicit copies of arrays in some but not all situations].
What I've found is: the array will be a mutable copy of the referenced one if and only if the operation has the potential to change the array's length. In your last example, e[0..2] indexes with a range, and the operation has the potential to change the array's length (the replacement can be shorter or longer than the range), so it gets copied.
var e = [1, 2, 3]
var f = e
e[0..2] = [4, 5]
e // [4, 5, 3]
f // [1, 2, 3]

var e1 = [1, 2, 3]
var f1 = e1
e1[0] = 4
e1[1] = 5
e1 // [4, 5, 3]
f1 // [4, 5, 3]
Delphi's strings and arrays had the exact same "feature". When you looked at the implementation, it made sense.
Each variable is a pointer to dynamic memory. That memory contains a reference count followed by the data in the array. So you can easily change a value in the array without copying the whole array or changing any pointers. If you want to resize the array, you have to allocate more memory. In that case the current variable will point to the newly allocated memory. But you can't easily track down all of the other variables that pointed to the original array, so you leave them alone.
Of course, it wouldn't be hard to make a more consistent implementation. If you wanted all variables to see a resize, do this:
Each variable is a pointer to a container stored in dynamic memory. The container holds exactly two things, a reference count and pointer to the actual array data. The array data is stored in a separate block of dynamic memory. Now there is only one pointer to the array data, so you can easily resize that, and all variables will see the change.
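A minimal sketch of that two-level layout (names invented; Swift classes stand in for the reference-counted blocks):
final class Storage { var data = [Int]() }          // the separate element block
final class SharedArray { var storage = Storage() } // the single shared container

let a = SharedArray()
let b = a                       // both variables share one container
a.storage = Storage()           // a "resize": swap in a new element block
print(a.storage === b.storage)  // true: the change is visible through b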
A lot of Swift early adopters complained about these error-prone array semantics, and Chris Lattner has written that the array semantics have been revised to provide full value semantics (Apple Developer link for those who have an account). We will have to wait at least for the next beta to see what this exactly means.
I use .copy() for this.
var a = [1, 2, 3]
var b = a.copy()
a[1] = 42
Did anything change in array behavior in later Swift versions? I just ran your example:
var a = [1, 2, 3]
var b = a
a[1] = 42
a
b
And my results are [1, 42, 3] and [1, 2, 3]
