Why do I need a '<' overload for an Array class?

I'm trying to add functionality to an Array class, so I attempted to add a sort() similar to Ruby's. For this purpose I chose the name 'ricSort()', in deference to Swift's sort(). But the compiler says it can't find an overload for '<', even though sort({$0 < $1}) by itself works okay. Why?
var myArray:Array = [5,4,3,2,1]
myArray.sort({$0 < $1}) // [1, 2, 3, 4, 5]
myArray.ricSort()       // this doesn't work

Here's a solution that is close to what you are looking for, followed by a discussion.
var a:Int[] = [5,4,3,2,1]

extension Array {
    func ricSort(fn: (lhs: T, rhs: T) -> Bool) -> T[] {
        let tempCopy = self.copy()
        tempCopy.sort(fn)
        return tempCopy
    }
}

var b = a.ricSort(<) // [1, 2, 3, 4, 5]
There are two problems with the original code. The first, a fairly simple mistake, is that Array.sort returns no value at all (represented as (), which is called void or Unit in some other languages). So your function, which ends with return self.sort({$0 < $1}), doesn't actually return anything, which I believe is contrary to your intention. That's why it needs to return tempCopy instead of return self.sort(...).
This version, unlike yours, makes a copy of the array to mutate, and returns that instead. You could easily change it to make it mutate itself (the first version of the post did this if you check the edit history). Some people argue that sort's behavior (mutating the array, instead of returning a new one) is undesirable. This behavior has been debated on some of the Apple developer lists. See http://blog.human-friendly.com/swift-arrays-the-bugs-the-bad-and-the-ugly-incomplete
The other problem is that the compiler does not have enough information to generate the code that would implement ricSort, which is why you are getting the type error. It sounds like you are wondering why it is able to work when you use myArray.sort but not when you try to execute the same code inside a function on the Array.
The reason is that you told the compiler what myArray consists of:
var myArray:Array = [5,4,3,2,1]
This is shorthand for
var myArray: Array<Int> = [5,4,3,2,1]
In other words, the compiler inferred that myArray consists of Ints, and it so happens that Int conforms to the Comparable protocol that supplies the < operator (see https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/SwiftStandardLibraryReference/Comparable.html#//apple_ref/swift/intf/Comparable). From the docs, you can see that < has the following signature:
@infix func < (lhs: Self, rhs: Self) -> Bool
Depending on what languages you have a background in, it may surprise you that < is defined within the language itself, rather than being a built-in operator. But if you think about it, < is just a function that takes two arguments and returns true or false. The @infix means that it can appear between its two arguments, so you don't have to write < 1 2.
(The type "Self" here means, "whatever the type is that this protocol implements," see Protocol Associated Type Declaration in https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/Declarations.html#//apple_ref/doc/uid/TP40014097-CH34-XID_597)
Compare this to the signature of Array.sort: isOrderedBefore: (T, T) -> Bool
That is the generic signature. By the time the compiler is working on this line of code, it knows that the real signature is isOrderedBefore: (Int, Int) -> Bool
The compiler's job is now simple: it just has to figure out whether there is a function named < that matches the expected signature, namely one that takes two values of type Int and returns a Bool. Obviously < does match the signature here, so the compiler allows the function to be used. It has enough information to guarantee that < will work for all values in the array. This is in contrast to a dynamic language, which cannot anticipate this: you have to actually attempt the sort in order to learn whether the types can be sorted. Some dynamic languages, like JavaScript, will make every possible attempt to continue without failing, so that expressions such as 0 < "1" evaluate without complaint, while others, such as Python and Ruby, will throw an exception. Swift does neither: it prevents you from running the program until you fix the bug in your code.
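To see the same machinery from the other side: a custom type becomes sortable with sort(<) only once it supplies < itself. A minimal sketch, in current Swift syntax (which differs from the 2014 beta syntax above); the Temperature type is hypothetical:

struct Temperature: Comparable {
    let celsius: Double

    // Supplying < satisfies Comparable (== is synthesized automatically).
    static func < (lhs: Temperature, rhs: Temperature) -> Bool {
        return lhs.celsius < rhs.celsius
    }
}

let temps = [Temperature(celsius: 30), Temperature(celsius: 10)]
let ordered = temps.sorted(by: <) // [10°C, 30°C]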
So, why doesn't ricSort work? Because there is no type information for it to work with until you have created an instance of a particular type. The compiler cannot infer whether ricSort will be correct or not.
For example, suppose instead of myArray, I had this:
enum Color {
    case Red, Orange, Yellow, Green, Blue, Indigo, Violet
}

var myColors = [Color.Red, Color.Blue, Color.Green]
var sortedColors = myColors.ricSort() // Kaboom!
In that case, myColors.ricSort would fail based on a type error, because < hasn't been defined for the Color enumeration. This can happen in dynamic languages, but is never supposed to happen in languages with sophisticated type systems.
Can I still use myColors.sort? Sure. I just need to define a function that takes two Colors and returns them in some order that makes sense for my domain (EM wavelength? Alphabetical order? Favorite color?):
func colorComesBefore(lhs: Color, rhs: Color) -> Bool { ... }
Then, I can pass that in: myColors.sort(colorComesBefore)
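A minimal sketch of one such comparator, in current Swift syntax; using the enum's Int raw value as a stand-in for EM-wavelength order is an assumption:

enum Color: Int {
    case Red, Orange, Yellow, Green, Blue, Indigo, Violet
}

// Order colors by their position in the rainbow (raw value).
func colorComesBefore(lhs: Color, rhs: Color) -> Bool {
    return lhs.rawValue < rhs.rawValue
}

let sorted = [Color.Green, Color.Red, Color.Blue].sorted(by: colorComesBefore)
// [Red, Green, Blue]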
This shows, hopefully, that in order to make ricSort work, we need to construct it in such a way that its definition guarantees that when it is compiled, it can be shown to be correct, without having to run it or write unit tests.
Hopefully that explains the solution. Some proposed modifications to the Swift language may make this less painful in the future. In particular creating parameterized extensions should help.
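Since this answer was written, Swift gained exactly that ability: an extension can be constrained to element types that are Comparable. A sketch in current syntax, where a zero-argument ricSort compiles only for arrays of comparable elements:

extension Array where Element: Comparable {
    // Only available when the elements support <.
    func ricSort() -> [Element] {
        return sorted(by: <)
    }
}

[5, 4, 3, 2, 1].ricSort() // [1, 2, 3, 4, 5]
// myColors.ricSort()     // compile error: Color is not Comparable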

The reason you are getting an error is that the compiler cannot guarantee that the type stored in the Array can be compared with the < operator.
You can see that the same sort closure works on an array whose element type can be compared using <, like Int:
var list = [3,1,2]
list.sort {$0 < $1}
But you will get an error if you try to use a type that cannot be compared with <:
var URL1 = NSURL()
var URL2 = NSURL()
var list = [URL1, URL2]
list.sort {$0 < $1} // error
Especially with all the syntax you can leave out in Swift, I don't see a reason to define a method for this. The following is valid and works as expected:
list.sort(<)
You can do this because < actually defines a function that takes two Ints and returns a Bool just like the sort method is expecting.
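To make that concrete, a small sketch (current Swift syntax) showing that < really is an ordinary function value of type (Int, Int) -> Bool:

let isOrderedBefore: (Int, Int) -> Bool = (<)
isOrderedBefore(1, 2)    // true
[3, 1, 2].sorted(by: <)  // [1, 2, 3]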

Related

Is it possible to define the size of a Float32Array type in typescript?

I know that array sizes can be fixed with tuple types. That's not applicable to Float32Array, which is a class itself, though.
Can that somehow be done with float32arrays as well?
I tried const foo: Float32Array[4], but that just resolves the type to number.
I also tried to check if types might be compatible:
let foo: [number, number, number, number];
foo = new Float32Array([1, 2, 3, 4]);
But they are not.
Changing all the types in my code to [number, number, number, number] (in my case I need a 4-float array for a point coordinate) is a possibility, although I would need to make changes in quite a lot of places in the code.
However, I was wondering if there might be a 'child type' extending Float32Array where the number of elements of the array can be fixed in the type.
JavaScript typed arrays are, in fact, fixed-length - see the docs for your example. The constructors in particular:
new Float32Array(); // new in ES2017
new Float32Array(length);
new Float32Array(typedArray);
new Float32Array(object);
new Float32Array(buffer [, byteOffset [, length]]);
all have the length deducible on creation (the new parameterless form creates an empty array with 0 elements; I guess it simplifies some edge cases).
I'm not sure how you are determining the type, but as soon as you get an item out of your array it will be converted to a number, the only number type available in JS - so looking at your log is misleading here. Take a look at the following property:
Float32Array.prototype.byteLength
Returns the length (in bytes) of the Float32Array. Fixed at construction time and thus read only.
This is the only thing that counts. If you still don't believe the docs, try logging a cell after you overflow it (easiest with an Int8Array - put in 200 or something). This is relevant to your example - nothing is being converted to a number inside the array itself. The array object is a view over fixed-width numbers.
This is a view into raw data. If you extract a value and do math on it, you are in JS realm and working with Numbers - but once you assign it back, you had better make sure the data fits. You cannot get JS/TS to show you something like a float32 in your console, but each cell of the array does have an exact byte width.
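A quick sketch of that overflow experiment:

const cells = new Int8Array(1); // one signed 8-bit cell, length fixed at 1
cells[0] = 200;                 // 200 does not fit in a signed byte
console.log(cells[0]);          // -56: the value wrapped around
console.log(cells.byteLength);  // 1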
Unfortunately, making the length part of the type is non-trivial within the type system as far as I can tell, since the length is a property determined at construction (even if fixed and read-only) and not part of the type. If you do want something like this, a thin wrapper could do the trick:
class vec4 extends Float32Array {
    constructor(initial_values?: [number, number, number, number]) {
        // Either copy the four given values or allocate four zeroed cells.
        if (initial_values) {
            super(initial_values);
        } else {
            super(4);
        }
    }
}
If you are willing to give up square brackets, you can add out-of-bounds checking through accessor methods (on a fixed-length typed array, setting an out-of-range index silently does nothing and reading one yields undefined, which can be error-prone):
get(index: number): number {
    // Four cells, so valid indices are 0..3.
    if (index >= 4 || index < 0) throw new RangeError("vec4 index out of bounds");
    return this[index];
}

set(index: number, value: number) {
    if (index >= 4 || index < 0) throw new RangeError("vec4 index out of bounds");
    this[index] = value;
}
Of course, since typing in JS/TS is structural rather than nominal, the array and your class are still interchangeable, so enforcement is really only done at construction, and only if you do not try to break your own code (let foo: vec4; foo = new Float32Array([1, 2]); etc.).

How to use an array of functions in Swift

I have read all the posts I can find here about arrays of functions - great you can do it. I figured. But none of the posts show practically how to use them (at least not what I'm trying to do). Here's what I want - they can all take the same args, but that's not a requirement.
This article is close, and will allow me to loop through to execute each function (which meets the first goal).
https://stackoverflow.com/a/24447484/11114752
But... what if I want to execute a single function by reference?
In other words, how to call just the referenced Arity2 function - for example:
// None of these work (with or without the parameter labels)
funcs.Arity2(n: 2, S: "Fred") // value of type [MyFuncs] has no member .Arity2
funcs[Arity2](n: 2, S: "Fred") // no exact matches to call in subscript
funcs[.Arity2](n: 2, S: "Fred") // Cannot call value of non-function type...
let fn = funcs.first(where: { a whole ton of permutations here to try to match Arity2 }) -- a whole lotta frustrating nope...
Help, please! Nothing I've tried works. The pre-compiler just goes in circles making suggestions that don't pan out and it will not compile.
EDIT:
The reason for the array in the first place is that I'm going to have quite a few functions, and I don't know what they all are in advance. Essentially, I want a plugin type of architecture, where I can add to the list of functions (ideally within an extension of the class, but that's another problem...) and not change the processing loop that executes each function in order.
I assume you need something like
_ = funcs.first {
    if case let MyFuncs.Arity2(f) = $0 {
        f(2, "Fred")
        return true
    }
    return false
}
It can be achieved in a much simpler way if you know the position of the function in the array.
Assuming you have:
func someFunc(n: Int, s: String) {
    print("call \(n) \(s)")
}
var funcs = [MyFuncs.Arity2(someFunc)]
you can do:
if case .Arity2(let f)? = funcs.first { // note the ?: funcs.first is Optional
    f(2, "Fred")
}
By replacing funcs.first with funcs[i] you can access the i-th element (first make sure the index exists; since subscripting returns a non-optional, the ? in the pattern is then dropped).

Scala overloading operators with generic types

I am doing a project in Scala and I am struggling with a certain thing. I am making a matrix DSL, so I am overloading some operators like +, - or *, so that I can do:
matrixMult = matrix1*matrix2
The thing is, I made this class where the matrix was represented as an Array[Array[Double]], but I would like to make it generic: Array[Array[T]].
The trouble is, I do not know how to handle this in the class methods for operations like +, - and *. They should work for Doubles or Ints, but Strings should give an error. Here is my current code:
def +(other: Matrix[Double]): Matrix[Double] = {
  var array = new Array[Array[Double]](rows)
  for (i <- 0 to (rows - 1)) {
    var arrayRow = new Array[Double](columns)
    for (j <- 0 to (columns - 1)) {
      arrayRow(j) = this.array(i)(j) + other.array(i)(j)
    }
    array(i) = arrayRow
  }
  return new Matrix(array)
}
I get an error on the arrayRow(j) = ... line, which is normal: once the class is generic, the compiler does not know what type the elements of this are.
What should I do to make this work? I would like this method to be accessible only for Doubles (or Ints), not Strings; if it were invoked on a Matrix[String] object, it should give an error. I tried pattern matching with isInstanceOf but that doesn't remove the error and I can't compile.
I kind of have the same issue with all of my methods in my class, so I'd like a generic answer if possible.
Any help is appreciated,
Thank you very much!
Not sure which version of Scala you are using, but if you're on 2.8 or later, I found this thread on scala-lang.org, and it looks like you may be able to use a T: Numeric context bound to limit T to Int, Long, Float, Double.
A little farther down in the thread, to limit it to just a subset of those (like Int and Double), they say to define your own generic trait.
https://www.scala-lang.org/old/node/4787
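A minimal sketch of the Numeric approach, assuming the rows, columns and array members from the question (the ClassTag bound is only needed so new arrays of T can be allocated). A Matrix[String] then fails at compile time, since there is no Numeric[String]:

import scala.reflect.ClassTag

// T must have a Numeric instance (Int, Long, Float, Double, ...).
class Matrix[T: Numeric : ClassTag](val array: Array[Array[T]]) {
  val rows: Int    = array.length
  val columns: Int = if (rows == 0) 0 else array(0).length

  def +(other: Matrix[T]): Matrix[T] = {
    val num = implicitly[Numeric[T]]
    new Matrix(Array.tabulate(rows, columns) { (i, j) =>
      num.plus(array(i)(j), other.array(i)(j))
    })
  }
}

// new Matrix(Array(Array("a"))) // does not compile: no Numeric[String]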
Answer can be found in comments:
Matrix addition has been asked, and answered, before. Even though that question poses a different matrix implementation, I believe the answer from the redoubtable Rex Kerr is still applicable. – jwvh

Swift: optional array count

In Objective-C, if I had the following property:
@property (strong, nonatomic) NSArray * myArray;
A method to return a number of objects in myArray would look like:
- (NSInteger)numberOfObjectsInMyArray
{
    return [self.myArray count];
}
This would return either the number of objects in the array, or 0 if myArray == nil.
The best equivalent I can think of for doing this in Swift is:
var myArray: Array<String>?

func numberOfObjectsInMyArray() -> Int
{
    return myArray ? myArray!.count : 0
}
So checking the optional array contains a value, and if so unwrap the array and return that value, otherwise return 0.
Is this the correct way to do this? Or is there something simpler?
Try using the nil coalescing operator.
According to the Apple Documentation:
The nil coalescing operator (a ?? b) unwraps an optional a if it contains a value, or returns a default value b if a is nil.
So your function could look like this:
func numberOfObjectsInMyArray() -> Int {
    return myArray?.count ?? 0
}
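For reference, a ?? b is just shorthand for the ternary test from the question, so the two forms behave identically; a quick sketch:

let myArray: [String]? = nil
let viaTernary    = myArray != nil ? myArray!.count : 0 // 0
let viaCoalescing = myArray?.count ?? 0                 // 0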
I agree with others that this could be a bad idea for a number of reasons (like making it look like there is an array with a count of "0" when there isn't actually an array at all) but hey, even bad ideas need an implementation.
EDIT:
So I'm adding this because two minutes after I posted this answer, I came across a reason for doing exactly what the author wants to do.
I am implementing the NSOutlineViewDataSource protocol in Swift. One of the functions required by the protocol is:
optional func outlineView(_ outlineView: NSOutlineView,
                          numberOfChildrenOfItem item: AnyObject?) -> Int
That function requires that you return the number of children of the item parameter. In my code, if the item has any children, they will be stored in an array, var children: [Person]?
I don't initialize that array until I actually add a child to the array.
In other words, at the time that I am providing data to the NSOutlineView, children could be nil, or it could be populated, or it could have once been populated but subsequently had all objects removed from it, in which case it won't be nil but its count will be 0. NSOutlineView doesn't care whether children is nil - all it wants to know is how many rows it will need to display the item's children.
So, it makes perfect sense in this situation to return 0 if children is nil. The only reason for calling the function is to determine how many rows NSOutlineView will need. It doesn't care whether the answer is 0 because children is nil or because it is empty.
return (children?.count ?? 0) will do what I need. If children is nil it will return 0. Otherwise it will return count. Perfect!
That looks like the simpler way.
The Objective-C code is shorter only because nil is also a form of 0 in a C-based language.
Since Swift is strongly typed, you don't have such a shorthand. In this specific case it requires a little more effort, but in general it saves you most of the headaches caused by loose typing.
Concerning the specific case, is there a reason for making the array optional in the first place? You could just have an empty array. Something like this might work for you:
var myArray: Array<String> = []

func numberOfObjectsInMyArray() -> Int {
    return myArray.count
}
How about using an optional for the return value?
var myArray: Array<String>?

func numberOfObjectsInMyArray() -> Int? {
    return myArray?.count
}
I think that this way is safer.

Implementing chained iterators in a Ruby C extension

I see that there's a relatively new feature in Ruby which allows chained iteration -- in other words, instead of each_with_indices { |x,i,j| ... } you might do each.with_indices { |x,i,j| ... }, where #each returns an Enumerator object, and Enumerator#with_indices causes the additional yield parameters to be included.
So, Enumerator has its own method #with_index, presumably for one-dimensional objects, source found here. But I can't figure out the best way to adapt this to other objects.
To be clear, and in response to comments: Ruby doesn't have an #each_with_indices right now -- it's only got an #each_with_index. (That's why I want to create one.)
A series of questions, themselves chained:
How would one adapt chained iteration to a one-dimensional object? Simply do an include Enumerable?
Presumably the above (#1) would not work for an n-dimensional object. Would one create an EnumerableN class, derived from Enumerable, but with #with_index converted into #with_indices?
Can #2 be done for Ruby extensions written in C? For example, I have a matrix class which stores various types of data (floats, doubles, integers, sometimes regular Ruby objects, etc.). Enumeration needs to check the data type (dtype) first as per the example below.
Example:
static VALUE nm_dense_each(VALUE nmatrix) {
  volatile VALUE nm = nmatrix; // Not sure this actually does anything.
  DENSE_STORAGE* s = NM_STORAGE_DENSE(nm); // get the storage pointer

  RETURN_ENUMERATOR(nm, 0, 0);

  if (NM_DTYPE(nm) == nm::RUBYOBJ) { // matrix stores VALUEs
    // matrix of Ruby objects -- yield those objects directly
    for (size_t i = 0; i < nm_storage_count_max_elements(s); ++i)
      rb_yield( reinterpret_cast<VALUE*>(s->elements)[i] );

  } else { // matrix stores non-Ruby data (int, float, etc.)
    // Copy each matrix element into a Ruby VALUE and operate on that, so the
    // user can't accidentally modify the original and cause a seg fault.
    for (size_t i = 0; i < nm_storage_count_max_elements(s); ++i) {
      // rubyobj_from_cval() converts any type of data into a VALUE using macros such as INT2FIX()
      VALUE v = rubyobj_from_cval((char*)(s->elements) + i*DTYPE_SIZES[NM_DTYPE(nm)], NM_DTYPE(nm)).rval;
      rb_yield( v ); // yield the copy we made
    }
  }

  return nm;
}
So, to combine my three questions into one: How would I write, in C, a #with_indices to chain onto the NMatrix#each method above?
I don't particularly want anyone to feel like I'm asking them to code this for me, though if you did want to, we'd love to have you involved in our project. =)
But if you know of some example elsewhere on the web of how this is done, that'd be perfect -- or if you could just explain in words, that'd be lovely too.
#with_index is a method of Enumerator: http://ruby-doc.org/core-1.9.3/Enumerator.html#method-i-with_index
I suppose you could make a subclass of Enumerator that has #with_indices and have your #each return an instance of that class? That's the first thing that comes to mind, although your enumerator might have to be pretty coupled to the originating class...
Since you are saying that you are also interested in the Ruby linguistics, not just the C, let me contribute my 5 cents without claiming to actually answer the question. #each_with_index and #with_index have already become so idiomatic that the majority of people rely on the index being a number. Therefore, if you implement your NMatrix#each_with_index in such a way that in the block { |e, i| ... } it supplies arrays such as [0, 0], [0, 1], [0, 2], [1, 0], [1, 1], ... as the index i, you will surprise people. Also, if others chain your NMatrix#each enumerator with the #with_index method, they will receive just a single number as the index. So, indeed, you are right to conclude that you need a distinct method to take care of the two-index case (or, more generally, n indices for higher-dimensional matrices):
matrix.each_with_indices { |e, indices| ... }
This method should yield a 2-element (n-element) array as indices == [i, j]. You should not go for the version:
matrix.each_with_indices { |e, i, j| ... }
As for the #with_index method, it is not your concern at all. If your NMatrix provides an #each method (which it certainly does), then #with_index will work with it normally, out of your control. And you do not need to ponder introducing a matrix-specific #with_indices, because #each itself is not really specific to matrices, but to one-dimensional ordered collections of any sort. Finally, sorry for not being a skilled enough C programmer to cater to the C-related part of your question.
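A pure-Ruby sketch of that recommendation (the Matrix2D class is hypothetical; returning an Enumerator when no block is given is what makes chaining work):

class Matrix2D
  def initialize(rows)
    @rows = rows
  end

  # Yields each element together with its [i, j] index pair.
  def each_with_indices
    return enum_for(:each_with_indices) unless block_given?
    @rows.each_with_index do |row, i|
      row.each_with_index { |elem, j| yield elem, [i, j] }
    end
  end
end

m = Matrix2D.new([[1, 2], [3, 4]])
m.each_with_indices { |e, ij| puts "#{e} at #{ij.inspect}" }
# 1 at [0, 0], 2 at [0, 1], 3 at [1, 0], 4 at [1, 1]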
