Concise notation for last element of an array

Is there a concise notation to access the last element of an array, similar to std::vector::back() in C++? Do I have to write:
veryLongArrayName.[veryLongArrayName.Length-1]
each time?

Expanding from comment
The built-in option is Seq.last veryLongArrayName, but note that this is O(N) rather than O(1), so for all but the smallest arrays it is probably too inefficient for practical use.
That said, there's no harm in abstracting this functionality yourself:
[<CompilationRepresentation(CompilationRepresentationFlags.ModuleSuffix)>]
[<RequireQualifiedAccess>]
module Array =
    let inline last (arr: _[]) = arr.[arr.Length - 1]
Now you can do Array.last veryLongArrayName with no overhead whatsoever, while keeping the code very idiomatic and readable.

I cannot find it in the official documentation, but F# 4 seems to have Array.last implemented out of the box:
/// Returns the last element of the array.
/// array: The input array.
val inline last : array:'T [] -> 'T
Link to the implementation on GitHub.

As an alternative to writing a function for _[], you can also write an extension property for IList<'T>:
open System.Collections.Generic

[<AutoOpen>]
module IListExtensions =
    type IList<'T> with
        member self.Last = self.[self.Count - 1]

let lastValue = [|1; 5; 13|].Last // 13

Related

What's the fastest way of finding the index of the maximum value in an array?

I have a 2D array of type f32 (from ndarray::ArrayView2) and I want to find the index of the maximum value in each row, and put the index value into another array.
The equivalent in Python is something like:
import numpy as np

for i in range(0, max_val, batch_size):
    sims = xp.dot(batch, vectors.T)
    # sims is the dot product of batch and vectors.T
    # the shape is, for example, (1024, 10000)
    best_rows[i : i + batch_size] = sims.argmax(axis=1)
In Python, the function .argmax is very fast, but I don't see any function like that in Rust. What's the fastest way of doing so?
Consider the easy case of a general Ord type: The answer will differ slightly depending on whether you know the values are Copy or not, but here's the code:
fn position_max_copy<T: Ord + Copy>(slice: &[T]) -> Option<usize> {
    slice.iter()
        .enumerate()
        .max_by_key(|(_, &value)| value)
        .map(|(idx, _)| idx)
}

fn position_max<T: Ord>(slice: &[T]) -> Option<usize> {
    slice.iter()
        .enumerate()
        .max_by(|(_, value0), (_, value1)| value0.cmp(value1))
        .map(|(idx, _)| idx)
}
The basic idea is that we pair [a reference to] each item in the array (really, a slice - it doesn't matter if it's a Vec or an array or something more exotic) with its index, use std::iter::Iterator functions to find the maximum according to the value only (not the index), then return just the index. If the slice is empty, None will be returned. Per the documentation, the rightmost index will be returned; if you need the leftmost, do rev() after enumerate().
rev(), enumerate(), max_by_key(), and max_by() are documented here; slice::iter() is documented here (but that one needs to be on your shortlist of things to recall without documentation as a Rust dev); map is Option::map(), documented here (ditto). Oh, and cmp is Ord::cmp, but most of the time you can use the Copy version, which doesn't need it (e.g. if you're comparing integers).
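For concreteness, here is a small usage sketch of the two helpers above on an integer slice; the assertions just restate the behaviour described (rightmost maximum, None for an empty slice):
fn main() {
    let data = [3, 7, 2, 7, 5];
    // Two equal maxima, at indices 1 and 3; the rightmost one wins.
    assert_eq!(position_max_copy(&data), Some(3));
    assert_eq!(position_max(&data), Some(3));
    // An empty slice has no maximum.
    assert_eq!(position_max::<i32>(&[]), None);
}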
Now here's the catch: f32 isn't Ord because of the way IEEE floats work. Most languages ignore this and have subtly wrong algorithms. The most popular crate providing a total order (Ord) on floats (by declaring all NaNs to be equal to each other, and greater than all numbers) seems to be ordered-float. Assuming it's implemented correctly, it should be very lightweight. It does pull in num_traits, but that is part of the most popular numerics library, so it might well be pulled in by other dependencies already.
You'd use it in this case by mapping ordered_float::OrderedFloat (the "constructor" of the tuple struct) over the slice iter (slice.iter().map(ordered_float::OrderedFloat)). Since you only want the position of the maximum element, there's no need to extract the f32 afterward.
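Putting that together, a sketch of an f32 version using the ordered-float crate might look like this (assuming the crate is added as a dependency; the helper name is mine):
fn position_max_f32(slice: &[f32]) -> Option<usize> {
    slice.iter()
        .copied()
        // OrderedFloat supplies the total order that f32 itself lacks
        // (all NaNs compare equal to each other and greater than every number).
        .map(ordered_float::OrderedFloat)
        .enumerate()
        .max_by_key(|&(_, value)| value)
        .map(|(idx, _)| idx)
}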
The approach from @David A is cool, but as mentioned, there's a catch: f32 and f64 do not implement Ord::cmp (which is really a pain, you know where).
There are multiple ways of solving that: you can implement cmp yourself, you can use ordered-float, etc.
In my case, this is part of a bigger project and we are very careful about using external packages. Besides, I am pretty sure we don't have any NaN values. Therefore I would prefer using fold, which, if you take a close look at the max_by_key source code, is what it uses internally too:
for (i, row) in matrix.axis_iter(Axis(1)).enumerate() {
    let (max_idx, max_val) = row
        .iter()
        .enumerate()
        .fold((0, row[0]), |(idx_max, val_max), (idx, val)| {
            if &val_max > val {
                (idx_max, val_max)
            } else {
                (idx, *val)
            }
        });
    // max_idx now holds the position of the largest value in this lane;
    // store it wherever it is needed (e.g. best_rows[i] = max_idx).
}
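If it helps, the same fold can be pulled out into a small slice-based helper that returns just the index (a sketch; it assumes a non-empty slice and, as above, no NaN values):
fn argmax_f32(row: &[f32]) -> usize {
    row.iter()
        .enumerate()
        .fold((0usize, row[0]), |(idx_max, val_max), (idx, &val)| {
            if val_max > val {
                (idx_max, val_max)
            } else {
                (idx, val)
            }
        })
        .0
}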

Actionscript/Animate - Fill in next array spot if this one is already filled

So I am working on a graphical calculator (a bit more of a challenge than the basic Windows one), and I want to be able to do the entire "math" in one text field: you type in "5+3-5*11/3" and it gives you the solution when you press '='.
I decided to make it with arrays of numbers and symbols, but I have no idea how to make it fill the next array slot if this one is already used:
var numbers:Array = new Array("","","","","","","","","","","","","","","","");
var actions:Array = new Array("","","","","","","","","","","","","","","","");
I am using split to separate the numbers I input from the symbols, and I want the numbers to be placed in the arrays. Example: I type in 555+666 and then I need something like
if (numbers[0] == "") { numbers[0] = 555; }
else if (numbers[1] == "") { numbers[1] = 555; }
else if ...
Know what I mean? It's pretty hard to describe...
Something like: when I type in a number, if numbers[0] is already filled, fill in numbers[1]; if numbers[1] is filled, go to numbers[2], etc.
Even if I agree with @Nbooo about the Reverse Polish Notation, note that Vectors may have a fixed length.
This is not an answer, just an example (for the case where the length of your Array must be defined):
// Just for information...
var numbs:Vector.<Number> = new Vector.<Number>(10, true);
var count:uint = 1;
for (var i in numbs) {
    numbs[i] = count++;
}
trace(numbs);

// If you try to add an element to a fixed Vector,
// you will get the following error at runtime:
/*
RangeError: Error #1126: Cannot change the length of a fixed Vector.
    at Vector$double/http://adobe.com/AS3/2006/builtin::push()
    at Untitled_fla::MainTimeline/frame1()
*/
numbs.push(11); // Will throw Error #1126
trace(numbs);
If you use this code to update the fixed Vector, it will not throw an error:
numbs[4]=11;
trace(numbs);
Output :
1,2,3,4,5,6,7,8,9,10
1,2,3,4,11,6,7,8,9,10
// length is 10, so no issue...
If you are concerned about the performance of Arrays versus Vectors, check this reference: Vector class versus Array class.
I hope this may be helpful.
[EDIT]
I suggest you check these links too:
ActionScript 3 fundamentals: Arrays
ActionScript 3 fundamentals: Associative arrays, maps, and dictionaries
ActionScript 3 fundamentals: Vectors and ByteArrays
[/EDIT]
Best regards.
Nicolas.
What you want to implement is Reverse Polish Notation. In ActionScript 3, arrays are dynamic, not fixed-size, which means you can add elements to an array without worrying about capacity (at least in your case).
const array:Array = new Array();
trace(array.length); // prints 0
array.push(1);
array.push(2);
trace(array.length); // prints 2
I suggest using the "push" and "pop" methods of Array/Vector, since they are much more natural for such a task. Using them will simplify your implementation, since you'll get rid of unnecessary checks like
if (numbers[1] == "") {...}
and replace it just with:
numbers.push(value);
and then to take a value from the top:
const value:String = numbers.pop();

What is the difference between ArrayBuffer and Array

I'm new to Scala/Java and I have trouble understanding the difference between these two.
From reading the Scala docs I understood that ArrayBuffer is made to be interactive (append, insert, prepend, etc.).
1) What are the fundamental implementation differences?
2) Are there performance differences between the two?
Both Array and ArrayBuffer are mutable, which means that you can modify elements at particular indexes: a(i) = e
ArrayBuffer is resizable, Array isn't. If you append an element to an ArrayBuffer, it gets larger. If you try to append an element to an Array, you get a new array. Therefore, to use Arrays efficiently, you must know their size beforehand.
Arrays are implemented at the JVM level and are the only non-erased generic type. This means that they are the most efficient way to store sequences of objects – no extra memory overhead, and some operations are implemented as single JVM opcodes.
ArrayBuffer is implemented by having an Array internally and allocating a new one if needed. Appending is usually fast, unless it hits a limit and resizes the array – but it does it in such a way that the overall effect is negligible, so don't worry. Prepending is implemented as moving all elements to the right and setting the new one as the 0th element, so it is slow. Appending n elements in a loop is efficient (O(n)); prepending them is not (O(n²)).
Arrays are specialized for built-in value types (except Unit), so Array[Int] is going to be much more optimal than ArrayBuffer[Int] – the values won't have to be boxed, therefore using less memory and less indirection. Note that the specialization, as always, works only if the type is monomorphic – Array[T] will be always boxed.
One other difference: an Array's elements are created as soon as it is declared, but an ArrayBuffer's elements are not created until you assign values for the first time.
For example, you can write Array1(0) = "Stackoverflow" but not ArrayBuffer1(0) = "Stackoverflow" as a first-time value assignment.
(Here Array1 is an Array variable and ArrayBuffer1 is an ArrayBuffer variable.)
Because ArrayBuffers are resizable, elements are created when you insert values for the first time; after that you can modify/reassign the value at a particular element.
Array:
Declaring and assigning values to an Int Array.
val favNums = new Array[Int](20)
for (i <- 0 to 19) {
  favNums(i) = i * 2
}
favNums.foreach(println)
ArrayBuffer:
Declaring and assigning values to an Int ArrayBuffer.
import scala.collection.mutable.ArrayBuffer

val favNumsArrayBuffer = new ArrayBuffer[Int]
for (j <- 0 to 19) {
  favNumsArrayBuffer.insert(j, j * 2)
  //favNumsArrayBuffer ++= Array(j * 3)
}
favNumsArrayBuffer.foreach(println)
If you put favNumsArrayBuffer(j) = j * 2 as the first line in the for loop, it doesn't work. But it works fine on the 2nd or 3rd line of the loop, because by then a value has already been assigned by the first line and you can modify it by element index.
This simple one-hour video tutorial explains a lot.
https://youtu.be/DzFt0YkZo8M?t=2005
Use an Array if the length of Array is fixed, and an ArrayBuffer if the length can vary.
Another difference is in terms of reference and value equality:
Array(1,2) == Array(1,2) // res0: Boolean = false
ArrayBuffer(1, 2) == ArrayBuffer(1,2) // res1: Boolean = true
The reason for the difference is that == routes to .equals, where Array.equals is implemented using Java's ==, which compares references:
public boolean equals(Object obj) {
    return (this == obj);
}
whilst ArrayBuffer.equals compares the elements contained in the ArrayBuffer using the sameElements method:
override def equals(o: scala.Any): Boolean = this.eq(o.asInstanceOf[AnyRef]) || (
  o match {
    case it: Seq[A] => (it eq this) || (it canEqual this) && sameElements(it)
    case _ => false
  }
)
Similarly, contains behaves differently
Array(Array(1,2)).contains(Array(1,2)) // res0: Boolean = false
ArrayBuffer(ArrayBuffer(1,2)).contains(ArrayBuffer(1,2)) // res1: Boolean = true

Proper way to build an array from a slice? [duplicate]

This question already has answers here:
How to get a slice as an array in Rust?
(7 answers)
Ok, this seems a bit silly, but I'm having trouble finding a function to return a statically sized array from the contents of a slice.
The Rust Book sections on arrays and slices say nothing about it. (It does show how to take a slice from an array, but I want to go the other way.) I also checked the documentation for std::slice and std::array, but if it's there, I'm not seeing it.
There is of course the option of writing out each element one by one, but that seems ridiculous. For now, I ended up writing a Python one-liner to do it for me:
", ".join(["k[{}]".format(i) for i in range(32)])
So I ended up with this:
use db_key::Key;

#[derive(Clone)]
pub struct Sha256 {
    bits: [u8; 32],
}

impl Key for Sha256 {
    fn from_u8(k: &[u8]) -> Self {
        Sha256 {
            // FIXME: This is dumb.
            bits: [
                k[0], k[1], k[2], k[3], k[4], k[5], k[6], k[7],
                k[8], k[9], k[10], k[11], k[12], k[13], k[14], k[15],
                k[16], k[17], k[18], k[19], k[20], k[21], k[22], k[23],
                k[24], k[25], k[26], k[27], k[28], k[29], k[30], k[31],
            ],
        }
    }

    fn as_slice<T, F: Fn(&[u8]) -> T>(&self, f: F) -> T {
        f(&self.bits)
    }
}
I'd like to know if there's a proper way, like k.to_array(32) or something along those lines.
And, yes, I realize the above code could fail with out-of-bounds access. I'm not sure what db_key::Key expects on invalid input.
Edit:
Is there a good way to convert a Vec to an array? is similar but less general. A good answer to this will probably also be a good answer to that question with the addition of taking a slice from the vec, which can be done efficiently and concisely. I also don't consider "write a separate conversion function for each size you care about" to be a proper solution.
How to get a slice as a static array in rust? is also similar, but the accepted answer is the hack I had already come up with independently.
You can use a loop to solve it the straightforward (but maybe disappointing) way:
let input = b"abcdef";
let mut array = [0u8; 32];
for (x, y) in input.iter().zip(array.iter_mut()) {
    *y = *x;
}
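As a side note, the same copy can be written with the standard copy_from_slice method on slices; a small sketch (copy_from_slice panics if the two lengths differ, hence copying into a prefix of matching length):
let input = b"abcdef";
let mut array = [0u8; 32];
// Copy into the sub-slice whose length matches the input.
array[..input.len()].copy_from_slice(input);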
We can use a function to do a runtime size check and turn a slice into a reference to a fixed size array.
Libstd doesn't provide enough traits to reliably check that the input and output types match here, but we could in theory develop that ourselves (for a finite number of array types). Either way, the cast looks like this, where U is the array type you specify:
/// Return a reference to a fixed size array from a slice.
///
/// Return **Some(array)** if the dimensions match, **None** otherwise.
///
/// **Note:** Unsafe because we can't check if the **U** type is really an array.
pub unsafe fn as_array<T, U>(xs: &[T]) -> Option<&U>
    where U: AsRef<[T]>,
{
    let sz = std::mem::size_of::<U>();
    let input_sz = xs.len() * std::mem::size_of::<T>();
    // The size check could be relaxed to sz <= input_sz
    if sz == input_sz {
        Some(&*(xs.as_ptr() as *const U))
    } else {
        None
    }
}
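For illustration, a call site in the questioner's from_u8 could then look something like this (a sketch only; the length is checked at runtime, and how you handle a mismatch is up to you):
fn from_u8(k: &[u8]) -> Sha256 {
    // as_array checks that k is exactly 32 bytes long before reinterpreting it.
    let bits: &[u8; 32] = unsafe { as_array(k) }.expect("expected a 32-byte slice");
    Sha256 { bits: *bits }
}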

Why do I need a '<' overload for an Array class?

I'm trying to add functionality to an Array class.
So I attempted to add a sort() similar to Ruby's lexicon.
For this purpose I chose the name 'ricSort()', in deference to Swift's sort().
But the compiler says it can't find an overload for '<', although 'sort({$0 < $1})' by itself works okay.
Why?
Why?
var myArray:Array = [5,4,3,2,1]
myArray.sort({$0 < $1}) // [1, 2, 3, 4, 5]
myArray.ricSort()       // this doesn't work
Here's a solution that is close to what you are looking for, followed by a discussion.
var a: Int[] = [5,4,3,2,1]

extension Array {
    func ricSort(fn: (lhs: T, rhs: T) -> Bool) -> T[] {
        let tempCopy = self.copy()
        tempCopy.sort(fn)
        return tempCopy
    }
}

var b = a.ricSort(<) // [1, 2, 3, 4, 5]
There are two problems with the original code. The first, a fairly simple mistake, is that Array.sort returns no value whatsoever (represented as (), which is called void or Unit in some other languages). So your function, which ends with return self.sort({$0 < $1}), doesn't actually return anything, which I believe is contrary to your intention. That's why it needs to return tempCopy instead of return self.sort(...).
This version, unlike yours, makes a copy of the array to mutate, and returns that instead. You could easily change it to make it mutate itself (the first version of the post did this if you check the edit history). Some people argue that sort's behavior (mutating the array, instead of returning a new one) is undesirable. This behavior has been debated on some of the Apple developer lists. See http://blog.human-friendly.com/swift-arrays-the-bugs-the-bad-and-the-ugly-incomplete
The other problem is that the compiler does not have enough information to generate the code that would implement ricSort, which is why you are getting the type error. It sounds like you are wondering why it is able to work when you use myArray.sort but not when you try to execute the same code inside a function on the Array.
The reason is that you told the compiler what myArray consists of:
var myArray:Array = [5,4,3,2,1]
This is shorthand for
var myArray: Array<Int> = [5,4,3,2,1]
In other words, the compiler inferred that myArray consists of Int, and it so happens that Int conforms to the Comparable protocol that supplies the < operator (see: https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/SwiftStandardLibraryReference/Comparable.html#//apple_ref/swift/intf/Comparable). From the docs, you can see that < has the following signature:
#infix func < (lhs: Self, rhs: Self) -> Bool
Depending on what languages you have a background in, it may surprise you that < is defined in the language's own library, rather than just being a built-in operator. But if you think about it, < is just a function that takes two arguments and returns true or false. The #infix means that it can appear between its two arguments, so you don't have to write < 1 2.
(The type "Self" here means, "whatever the type is that this protocol implements," see Protocol Associated Type Declaration in https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/Declarations.html#//apple_ref/doc/uid/TP40014097-CH34-XID_597)
Compare this to the signature of Array.sort: isOrderedBefore: (T, T) -> Bool
That is the generic signature. By the time the compiler is working on this line of code, it knows that the real signature is isOrderedBefore: (Int, Int) -> Bool
The compiler's job is now simple: it just has to figure out whether there is a function named < that matches the expected signature, namely, one that takes two values of type Int and returns a Bool. Obviously < matches that signature, so the compiler allows the function to be used here. It has enough information to guarantee that < will work for all values in the array. This is in contrast to a dynamic language, which cannot anticipate this. You have to actually attempt to perform the sort in order to learn whether the types can be sorted. Some dynamic languages, like JavaScript, will make every possible attempt to continue without failing, so that expressions such as 0 < "1" evaluate correctly, while others, such as Python and Ruby, will throw an exception. Swift does neither: it prevents you from running the program until you fix the bug in your code.
So, why doesn't ricSort work? Because there is no type information for it to work with until you have created an instance of a particular type. It cannot infer whether ricSort will be correct or not.
For example, suppose instead of myArray, I had this:
enum Color {
    case Red, Orange, Yellow, Green, Blue, Indigo, Violet
}
var myColors = [Color.Red, Color.Blue, Color.Green]
var sortedColors = myColors.ricSort() // Kaboom!
In that case, myColors.ricSort would fail based on a type error, because < hasn't been defined for the Color enumeration. This can happen in dynamic languages, but is never supposed to happen in languages with sophisticated type systems.
Can I still use myColors.sort? Sure. I just need to define a function that takes two colors and orders them in some way that makes sense for my domain (EM wavelength? Alphabetical order? Favorite color?):
func colorComesBefore(lhs: Color, rhs: Color) -> Bool { ... }
Then, I can pass that in: myColors.sort(colorComesBefore)
This shows, hopefully, that in order to make ricSort work, we need to construct it in such a way that its definition guarantees that when it is compiled, it can be shown to be correct, without having to run it or write unit tests.
Hopefully that explains the solution. Some proposed modifications to the Swift language may make this less painful in the future. In particular, the ability to create parameterized extensions should help.
The reason you are getting an error is that the compiler cannot guarantee that the type stored in the Array can be compared with the < operator.
You can see the same sort closure work on an array whose element type can be compared using <, such as Int:
var list = [3,1,2]
list.sort {$0 < $1}
But you will get an error if you try to use a type that cannot be compared with <:
var URL1 = NSURL()
var URL2 = NSURL()
var list = [URL1, URL2]
list.sort {$0 < $1} // error
Especially with all the syntax you can leave out in Swift, I don't see a reason to define a method for this. The following is valid and works as expected:
list.sort(<)
You can do this because < actually defines a function that takes two Ints and returns a Bool just like the sort method is expecting.

Resources