Tile a small Array in a large Array multiple times in Scala

I want to tile a small array multiple times in a large array. I'm looking for an "official" way of doing this. A naive solution follows:
import scala.collection.mutable.ArrayBuffer

val arr = Array[Int](1, 2, 3)
val array = {
  val arrBuf = ArrayBuffer[Int]()
  for (_ <- 0 until 10) {
    arrBuf ++= arr
  }
  arrBuf.toArray
}

If you do not know why Arrays are good for performance (meaning you do not really need raw performance in this case), I would recommend not using them; stick with List or Vector instead.
Arrays are not proper Scala collections; they are just plain JVM arrays. That means they are mutable, very efficient (especially for unboxed primitives), fixed in size, and very limited in their API. They only behave like normal Scala collections thanks to implicit conversions and extension methods. Due to their mutability and invariance, you really should avoid them unless you have good reasons for using them.
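For illustration, a minimal sketch of those points (the values here are just examples):
val nums: Array[Int] = Array(1, 2, 3)
nums(0) = 99                  // mutable: elements can be reassigned in place
val doubled = nums.map(_ * 2) // map comes from an extension (ArrayOps), not Array itself

val strings: Array[String] = Array("a", "b")
// val anys: Array[Any] = strings // does not compile: Array is invariant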
The proposed solution by Andronicus is not ideal for arrays (although it would be a very good solution for any real collection) because, given that arrays are fixed in size, this flattening will end in constant reallocations and memory copying under the hood.
Anyway, here is a slight variation on that solution using lists instead, which is a little more efficient.
implicit class ListOps[A](private val list: List[A]) extends AnyVal {
  def times[B >: A](n: Int): List[B] =
    Iterator.fill(n)(list).flatten.toList
}
List(1, 2, 3).times(3)
// res: List[Int] = List(1, 2, 3, 1, 2, 3, 1, 2, 3)
And here is also an efficient version using the new ArraySeq introduced in 2.13, which is an immutable array.
(Note: you can do this using plain Arrays too; a sketch of that follows after this example.)
import scala.collection.immutable.ArraySeq

implicit class ArraySeqOps[A](private val arr: ArraySeq[A]) extends AnyVal {
  def times[B >: A](n: Int): ArraySeq[B] =
    ArraySeq.tabulate(n * arr.length) { i => arr(i % arr.length) }
}
ArraySeq(1, 2, 3).times(3)
// res: ArraySeq[Int] = ArraySeq(1, 2, 3, 1, 2, 3, 1, 2, 3)
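For reference, a sketch of the same idea using plain Arrays (the times helper here is hypothetical; the ClassTag is needed so tabulate can allocate the right kind of JVM array):
import scala.reflect.ClassTag

def times[A: ClassTag](arr: Array[A], n: Int): Array[A] =
  Array.tabulate(n * arr.length)(i => arr(i % arr.length))

times(Array(1, 2, 3), 3)
// res: Array[Int] = Array(1, 2, 3, 1, 2, 3, 1, 2, 3)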

You can use Array.fill:
Array.fill(10)(Array(1, 2, 3)).flatten

Related

Scala - Efficient element wise sum of two arrays

I have two arrays which I would like to reduce to one array in which at each index you have the sum of the two elements in the original arrays. For example:
val arr1: Array[Int] = Array(1, 1, 3, 3, 5)
val arr2: Array[Int] = Array(2, 1, 2, 2, 1)
val arr3: Array[Int] = sum(arr1, arr2)
// This should result in:
// arr3 = Array(3, 2, 5, 5, 6)
I've seen this post: Element-wise sum of arrays in Scala, and I currently use this approach (zip/map). However, using this for a big data application I am concerned about its performance. Using this approach one has to traverse the array(s) at least twice. Is there a better approach in terms of efficiency?
The most efficient way might well be to do it lazily.
As with anything collection-oriented, Scala 2.12 and 2.13 are going to be different (this code is for 2.13; 2.12 will be similar, though it might need to extend IndexedSeqLike, I don't know for sure).
import scala.collection.IndexedSeq
import scala.math.Numeric
import scala.math.Numeric.Implicits._ // provides the + syntax on T

case class SumIndexedSeq[+T: Numeric](seq1: IndexedSeq[T], seq2: IndexedSeq[T]) extends IndexedSeq[T] {
  override val length: Int = seq1.length.min(seq2.length)
  override def apply(i: Int) =
    if (i >= length) throw new IndexOutOfBoundsException
    else seq1(i) + seq2(i)
}
Arrays are implicitly convertible to a subtype of collection.IndexedSeq. This will compute the sum of the corresponding elements on every access (which may be generally desirable as it's possible to use a mutable IndexedSeq).
If you need an Array, you can get one with only a single traversal via
val arr3: Array[Int] = SumIndexedSeq(arr1, arr2).toArray
but SumIndexedSeq can be used anywhere a Seq can be used without a traversal.
As a further optimization, especially if you're sure that the underlying collections/arrays won't mutate, you can add a cache so you don't add the same elements together twice. It can also be generalized, if you so care, to any binary operations on T (in which case the Numeric constraint can be removed).
As Luis noted, for a performance question: experiment and benchmark. It's worth keeping in mind that a cache implementation may well entail boxing every element to put in the cache, so you might need to be accessing the same elements many times in order for the cache to be a win (and a sufficiently large cache may have implications for the stability of a distributed system).
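For concreteness, here is one possible sketch of such a cached variant (an illustration only, not the answerer's actual code; it drops covariance to keep the cache simple and, as noted above, boxes every cached element):
import scala.collection.IndexedSeq
import scala.math.Numeric
import scala.math.Numeric.Implicits._

case class CachedSumIndexedSeq[T: Numeric](seq1: IndexedSeq[T], seq2: IndexedSeq[T]) extends IndexedSeq[T] {
  override val length: Int = seq1.length.min(seq2.length)

  // One slot per index; null means "not yet computed". Every cached value is boxed.
  private val cache = new Array[AnyRef](length)

  override def apply(i: Int): T = {
    if (i >= length) throw new IndexOutOfBoundsException
    if (cache(i) == null) cache(i) = (seq1(i) + seq2(i)).asInstanceOf[AnyRef]
    cache(i).asInstanceOf[T]
  }
}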
Well, first of all, as with all things related to performance, the only answer is to benchmark.
Second, are you sure you need plain mutable, invariant, weird Arrays? Can't you use something like Vector or ArraySeq?
Third, you can just do something like this, or use a while loop, which would amount to the same thing (a while-loop sketch follows after the snippet).
import scala.collection.immutable.ArraySeq

val result = ArraySeq.tabulate(math.min(arr1.length, arr2.length)) { i =>
  arr1(i) + arr2(i)
}
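A minimal sketch of that while-loop variant, writing into a preallocated array (the sum name simply mirrors the question's usage; it assumes Int arrays):
def sum(arr1: Array[Int], arr2: Array[Int]): Array[Int] = {
  val n = math.min(arr1.length, arr2.length)
  val out = new Array[Int](n)
  var i = 0
  while (i < n) { // single traversal, no intermediate allocations
    out(i) = arr1(i) + arr2(i)
    i += 1
  }
  out
}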

Does joined() or flatMap(_:) perform better in Swift 3?

I'm curious about the performance characteristics of joined() and .flatMap(_:) in flattening a multidimensional array:
let array = [[1,2,3],[4,5,6],[7,8,9]]
let j = Array(array.joined())
let f = array.flatMap{$0}
They both flatten the nested array into [1, 2, 3, 4, 5, 6, 7, 8, 9]. Should I prefer one over the other for performance? Also, is there a more readable way to write the calls?
TL;DR
When it comes just to flattening 2D arrays (without any transformations or separators applied, see #dfri's answer for more info about that aspect), array.flatMap{$0} and Array(array.joined()) are both conceptually the same and have similar performance.
The main difference between flatMap(_:) and joined() (note that this isn't a new method, it has just been renamed from flatten()) is that joined() is always lazily applied (for arrays, it returns a special FlattenBidirectionalCollection<Base>).
Therefore in terms of performance, it makes sense to use joined() over flatMap(_:) in situations where you only want to iterate over part of a flattened sequence (without applying any transformations). For example:
let array2D = [[2, 3], [8, 10], [9, 5], [4, 8]]
if array2D.joined().contains(8) {
print("contains 8")
} else {
print("doesn't contain 8")
}
Because joined() is lazily applied & contains(_:) will stop iterating upon finding a match, only the first two inner arrays will have to be 'flattened' to find the element 8 from the 2D array. Although, as #dfri correctly notes below, you are also able to lazily apply flatMap(_:) through the use of a LazySequence/LazyCollection – which can be created through the lazy property. This would be ideal for lazily applying both a transformation & flattening a given 2D sequence.
In cases where joined() is iterated fully through, it is conceptually no different from using flatMap{$0}. Therefore, these are all valid (and conceptually identical) ways of flattening a 2D array:
array2D.joined().map{$0}
Array(array2D.joined())
array2D.flatMap{$0}
In terms of performance, flatMap(_:) is documented as having a time-complexity of:
O(m + n), where m is the length of this sequence and n is the length of the result
This is because its implementation is simply:
public func flatMap<SegmentOfResult : Sequence>(
  _ transform: (${GElement}) throws -> SegmentOfResult
) rethrows -> [SegmentOfResult.${GElement}] {
  var result: [SegmentOfResult.${GElement}] = []
  for element in self {
    result.append(contentsOf: try transform(element))
  }
  return result
}
(The ${GElement} placeholders come from the gyb template the standard library source is generated from.)
As append(contentsOf:) has a time-complexity of O(n), where n is the length of sequence to append, we get an overall time-complexity of O(m + n), where m will be total length of all sequences appended, and n is the length of the 2D sequence.
When it comes to joined(), there is no documented time-complexity, as it is lazily applied. However, the main bit of source code to consider is the implementation of FlattenIterator, which is used to iterate over the flattened contents of a 2D sequence (which will occur upon using map(_:) or the Array(_:) initialiser with joined()).
public mutating func next() -> Base.Element.Iterator.Element? {
  repeat {
    if _fastPath(_inner != nil) {
      let ret = _inner!.next()
      if _fastPath(ret != nil) {
        return ret
      }
    }
    let s = _base.next()
    if _slowPath(s == nil) {
      return nil
    }
    _inner = s!.makeIterator()
  } while true
}
Here _base is the base 2D sequence, _inner is the current iterator from one of the inner sequences, and _fastPath & _slowPath are hints to the compiler to aid with branch prediction.
Assuming I'm interpreting this code correctly & the full sequence is iterated through, this also has a time complexity of O(m + n), where m is the length of the sequence, and n is the length of the result. This is because it goes through each outer iterator and each inner iterator to get the flattened elements.
So, performance wise, Array(array.joined()) and array.flatMap{$0} both have the same time complexity.
If we run a quick benchmark in a debug build (Swift 3.1):
import QuartzCore

func benchmark(repeatCount: Int = 1, name: String? = nil, closure: () -> ()) {
  let d = CACurrentMediaTime()
  for _ in 0..<repeatCount {
    closure()
  }
  let d1 = CACurrentMediaTime() - d
  print("Benchmark of \(name ?? "closure") took \(d1) seconds")
}

let arr = [[Int]](repeating: [Int](repeating: 0, count: 1000), count: 1000)

benchmark {
  _ = arr.flatMap{$0} // 0.00744s
}
benchmark {
  _ = Array(arr.joined()) // 0.525s
}
benchmark {
  _ = arr.joined().map{$0} // 1.421s
}
flatMap(_:) appears to be the fastest. I suspect that joined() being slower could be due to the branching that occurs within the FlattenIterator (although the hints to the compiler minimise this cost); just why map(_:) is so slow, I'm not too sure. I would certainly be interested to know if anyone else knows more about this.
However, in an optimised build, the compiler is able to optimise away this big performance difference; giving all three options comparable speed, although flatMap(_:) is still fastest by a fraction of a second:
let arr = [[Int]](repeating: [Int](repeating: 0, count: 10000), count: 1000)

benchmark {
  let result = arr.flatMap{$0} // 0.0910s
  print(result.count)
}
benchmark {
  let result = Array(arr.joined()) // 0.118s
  print(result.count)
}
benchmark {
  let result = arr.joined().map{$0} // 0.149s
  print(result.count)
}
(Note that the order in which the tests are performed can affect the results; both of the above results are averages from performing the tests in various different orders.)
From the Swiftdoc.org documentation of Array (Swift 3.0/dev) we read [emphasis mine]:
func flatMap<SegmentOfResult : Sequence>(_: #noescape (Element) throws -> SegmentOfResult)
Returns an array containing the concatenated results of calling the
given transformation with each element of this sequence.
...
In fact, s.flatMap(transform) is equivalent to Array(s.map(transform).flatten()).
We may also take a look at the actual implementations of the two in the Swift source code (from which Swiftdoc is generated ...)
swift/stdlib/public/core/Join.swift
swift/stdlib/public/core/FlatMap.swift
Most notably the latter source file, where the flatMap implementations whose closure (transform) does not yield an optional value (as is the case here) are all described as
/// Returns the concatenated results of mapping `transform` over
/// `self`. Equivalent to
///
/// self.map(transform).joined()
From the above (assuming the compiler can be clever w.r.t. a simple { $0 } transform over self), it would seem as if, performance-wise, the two alternatives should be equivalent, but joined does, imo, better show the intent of the operation.
In addition to the intent in semantics, there is one apparent use case where joined is preferable over (and not entirely comparable to) flatMap: using joined(separator:) to join sequences with a separator:
let array = [[1,2,3],[4,5,6],[7,8,9]]
let j = Array(array.joined(separator: [42]))
print(j) // [1, 2, 3, 42, 4, 5, 6, 42, 7, 8, 9]
The corresponding result using flatMap is not really as neat, as we explicitly need to remove the final additional separator after the flatMap operation (two different use cases, with or without a trailing separator):
let f = Array(array.flatMap{ $0 + [42] }.dropLast())
print(f) // [1, 2, 3, 42, 4, 5, 6, 42, 7, 8, 9]
See also a somewhat outdated post by Erica Sadun discussing flatMap vs. flatten() (note: joined() was named flatten() in Swift < 3).
Erica Sadun- Beta 6: flatten #swiftlang

How to randomly sample from a Scala list or array?

I want to randomly sample from a Scala list or array (not an RDD), the sample size can be much longer than the length of the list or array, how can I do this efficiently? Because the sample size can be very big and the sampling (on different lists/arrays) needs to be done a large number of times.
I know for a Spark RDD we can use takeSample() to do it, is there an equivalent for Scala list/array?
Thank you very much.
An easy-to-understand version would look like this:
import scala.util.Random
Random.shuffle(list).take(n)
Random.shuffle(array.toList).take(n)
// Seeded version
val r = new Random(seed)
r.shuffle(...)
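For concreteness, a minimal sketch of the seeded version applied to an example list (the values here are illustrative):
import scala.util.Random

val list = List(1, 2, 3, 4, 5)
val r = new Random(42L) // fixed seed, so the result is reproducible
val sampled = r.shuffle(list).take(3)
// Note: shuffle-and-take samples WITHOUT replacement, so the sample size
// is capped at list.length; it cannot produce samples longer than the list.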
For arrays:
import scala.util.Random
import scala.reflect.ClassTag
def takeSample[T: ClassTag](a: Array[T], n: Int, seed: Long) = {
  val rnd = new Random(seed)
  Array.fill(n)(a(rnd.nextInt(a.size)))
}
Make a random number generator (rnd) based on your seed. Then fill a new array of length n: for each slot, draw a random index between 0 (inclusive) and the size of your input array (exclusive), and take the element at that index. Using it in the REPL could look as follows:
scala> val myArray = Array(1,3,5,7,8,9,10)
myArray: Array[Int] = Array(1, 3, 5, 7, 8, 9, 10)
scala> takeSample(myArray,20,System.currentTimeMillis)
res0: scala.collection.mutable.ArraySeq[Int] = ArraySeq(7, 8, 7, 3, 8, 3, 9, 1, 7, 10, 7, 10, 1, 1, 3, 1, 7, 1, 3, 7)
For lists, I would simply convert the list to an Array and use the same function; I doubt you can get much more efficient for lists anyway.
It is important to note that the same function applied directly to a List would take O(n^2) time (random indexing into a List is linear), whereas converting the list to an array first takes O(n) time. A sketch of that conversion follows.
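For instance, a one-line wrapper reusing the takeSample defined above (the wrapper name here is made up):
import scala.reflect.ClassTag

def takeSampleFromList[T: ClassTag](l: List[T], n: Int, seed: Long): List[T] =
  takeSample(l.toArray, n, seed).toList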
If you want to sample without replacement: zip with randoms, sort (O(n log n)), discard the randoms, and take:
import scala.util.Random
val l = Seq("a", "b", "c", "d", "e")
val ran = l.map(x => (Random.nextFloat(), x))
  .sortBy(_._1)
  .map(_._2)
  .take(3)
Using a for comprehension, for a given array xs, as follows:
for (i <- 1 to sampleSize; r = (Math.random * xs.size).toInt) yield xs(r)
Note the random generator here produces values within the unit interval, which are scaled to range over the size of the array, and converted to Int for indexing over the array.
Note: for a pure functional random generator, consider for instance the State monad approach from Functional Programming in Scala, discussed here.
Note: consider also NICTA, another pure functional random value generator; its use is illustrated for instance here.
Using classical recursion.
import scala.util.Random
def takeSample[T](a: List[T], n: Int): List[T] =
  n match {
    case n if n <= 0 => List.empty[T]
    case n           => a(Random.nextInt(a.size)) :: takeSample(a, n - 1)
  }
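Hypothetical usage, assuming the definition above (this samples with replacement, so n may exceed the list length; note that each random access into the List is linear, and the recursion is not tail-recursive):
takeSample(List("a", "b", "c"), 5)
// e.g. List(c, a, c, b, a) -- the output is random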
package your.pkg

import your.pkg.SeqHelpers.SampleOps

import scala.collection.generic.CanBuildFrom
import scala.collection.mutable
import scala.language.{higherKinds, implicitConversions}
import scala.util.Random

trait SeqHelpers {
  implicit def withSampleOps[E, CC[_] <: Seq[_]](cc: CC[E]): SampleOps[E, CC] = SampleOps(cc)
}

object SeqHelpers extends SeqHelpers {

  case class SampleOps[E, CC[_] <: Seq[_]](cc: CC[_]) {

    private def recurse(n: Int, builder: mutable.Builder[E, CC[E]]): CC[E] = n match {
      case 0 => builder.result
      case _ =>
        val element = cc(Random.nextInt(cc.size)).asInstanceOf[E]
        recurse(n - 1, builder += element)
    }

    def sample(n: Int)(implicit cbf: CanBuildFrom[CC[_], E, CC[E]]): CC[E] = {
      require(n >= 0, "Cannot take less than 0 samples")
      recurse(n, cbf.apply)
    }
  }
}
Either:
- mix in SeqHelpers (for example, into a ScalaTest spec), or
- include import your.pkg.SeqHelpers._
Then the following should work:
Seq(1 to 100: _*) sample 10 foreach { println }
Edits to remove the cast are welcome.
Also if there is a way to create an empty instance of the collection for the accumulator, without knowing the concrete type ahead of time, please comment. That said, the builder is probably more efficient.
I did not test for performance, but the following code is a simple and elegant way to do the sampling, and I believe it can help many who come here just to get sampling code. Just change the range according to the size of your end sample. If pseudo-randomness is not enough for your needs, you can use take(1) in the inner list and increase the range.
Random.shuffle((1 to 100).toList.flatMap(x => (Random.shuffle(yourList))))

Can we improve on my code to swap adjacent array elements in Scala?

I'm learning Scala by working the exercises from the book "Scala for the Impatient". One exercise asks that:
Write a loop that swaps adjacent elements of an array of integers. For
example, Array(1, 2, 3, 4, 5) becomes Array(2, 1, 4, 3, 5)
I did it in 3 different ways, one of which is as follows. I'm curious whether this can be improved, as per the comments I've put inline.
def swapWithGrouped(a: Array[Int]) = {
  a.grouped(2).map {
    // TODO: Can we use reverse here?
    case Array(x, y) => Array(y, x)
    // TODO: Can we use identity function here?
    case Array(x) => Array(x)
  }.flatten.toArray
}
You could use .flatMap instead of .map{..}.flatten.
You also don't really need to match a single-element array, so you could simply use a variable (although I feel that this really depends on the problem; sometimes showing the symmetry in the patterns is nice and makes the intent more explicit).
So :
scala> def swapWithGrouped(a: Array[Int]) = {
         a.grouped(2).flatMap {
           case Array(x, y) => Array(y, x)
           case single => single
         }.toArray
       }
swapWithGrouped: (a: Array[Int])Array[Int]
scala> swapWithGrouped(a) // a is Array(1,2,3,4,5)
res0: Array[Int] = Array(2, 1, 4, 3, 5)
The Array(x, y) => Array(y, x) is also pretty easy to read wrong; .reverse makes the intention more explicit and has the added benefit that you can remove the single-element case.
scala> def swapWithGrouped(a: Array[Int]) = a.grouped(2).flatMap(_.reverse).toArray
swapWithGrouped: (a: Array[Int])Array[Int]
No, AFAIK there's no way to bind the patterns to names. The closest thing to doing that is the @ operator, but it doesn't work on the pattern as a whole - only on parameters.
On the other hand:
def swapWithGrouped(a: Array[Int]) = {
  a.grouped(2).map { _.reverse }.flatten.toArray
}
should do fine. I'm assuming you don't care about performance; if you do, the code will need to be very different.
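For illustration, here is a minimal sketch of the kind of rewrite that last remark points at: a hypothetical in-place version that mutates the array with a plain while loop instead of building intermediate collections.
def swapInPlace(a: Array[Int]): Array[Int] = {
  var i = 0
  while (i + 1 < a.length) { // a trailing odd element is left untouched
    val tmp = a(i)
    a(i) = a(i + 1)
    a(i + 1) = tmp
    i += 2
  }
  a
}

swapInPlace(Array(1, 2, 3, 4, 5))
// res: Array[Int] = Array(2, 1, 4, 3, 5)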

How do I create a heterogeneous Array in Scala?

In javascript, we can do:
["a string", 10, {x : 1}, function() {}].push("another value");
What is the Scala equivalent?
Arrays in Scala are very much homogeneous. This is because Scala is a statically typed language. If you really need pseudo-heterogeneous features, you need to use an immutable data structure that is parametrized covariantly (most immutable data structures are). List is the canonical example there, but Vector is also an option. Then you can do something like this:
Vector("a string", 10, Map("x" -> 1), () => ()) :+ "another value"
The result will be of type Vector[Any]. Not very useful in terms of static typing, but everything will be in there as promised.
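To get values back out with useful types, you end up pattern matching at runtime; a minimal sketch (the cases here are just examples):
val xs: Vector[Any] = Vector("a string", 10, Map("x" -> 1), () => ()) :+ "another value"

xs.foreach {
  case s: String => println(s"String: $s")
  case i: Int    => println(s"Int: $i")
  case other     => println(s"something else: $other")
}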
Incidentally, the "literal syntax" for arrays in Scala is as follows:
Array(1, 2, 3, 4) // => Array[Int] containing [1, 2, 3, 4]
See also: More info on persistent vectors
Scala will choose the most specific Array element type which can hold all values; in this case it needs the most general type Any, which is a supertype of every other type:
Array("a string", 10, new { val x = 1 }, () => ()) :+ "another value"
The resulting array will be of type Array[Any].
Scala might get the ability for a "heterogeneous" list soon:
HList in Scala
Personally, I would probably use tuples, as herom mentions in a comment.
scala> ("a string", 10, (1), () => {})
res1: (java.lang.String, Int, Int, () => Unit) = (a string,10,1,<function0>)
But you cannot append to such structures easily.
The HList mentioned by ePharaoh is "made for this", but I would probably stay clear of it myself. It's heavy on type-level programming and therefore may carry surprising costs with it (e.g. creating a lot of classes when compiled). Just be careful. An HList of the above (needs the MetaScala library) would be (not proven, since I don't use MetaScala):
scala> "a string" :: 10 :: (1) :: () => {} :: HNil
You can append to (well, at least prepend to) such a list, and it will know the types. Prepending creates a new type that has the old type as its tail.
Then there's one approach not mentioned yet. Classes (especially case classes) are very lightweight in Scala, and you can define them as one-liners:
scala> case class MyThing( str: String, int: Int, x: Int, f: () => Unit )
defined class MyThing
scala> MyThing( "a string", 10, 1, ()=>{} )
res2: MyThing = MyThing(a string,10,1,<function0>)
Of course, this will not handle appending either.
