Count elements of array A in array B with Scala - arrays

I have two arrays of strings, say
A = ('abc', 'joia', 'abas8', '09ma09', 'oiam0')
and
B = ('gfdg', '89jkjj', '09ma09', 'asda', '45645ghf', 'dgfdg', 'yui345gd', '6456ds', '456dfs3', 'abas8', 'sfgds').
What I want to do is simply count, for every string in A, how many times it appears in B (if at all). For example, the resulting array here should be: C = (0, 0, 1, 1, 0). How can I do that?

Try this:
A.map(x => B.count(y => y == x))
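With the sample data from the question, this gives:
val A = Array("abc", "joia", "abas8", "09ma09", "oiam0")
val B = Array("gfdg", "89jkjj", "09ma09", "asda", "45645ghf", "dgfdg", "yui345gd", "6456ds", "456dfs3", "abas8", "sfgds")
val C = A.map(x => B.count(y => y == x)) // Array(0, 0, 1, 1, 0)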

You can do it the way idursun suggested and it will work, but it may not be efficient compared with preparing the intersection first. If B is much bigger than A this gives a massive speedup: the intersect method has better big-O complexity than doing a linear search in B for each element of A.
val A = Array("abc", "joia", "abas8", "09ma09", "oiam0")
val B = Array("gfdg", "89jkjj", "09ma09", "asda", "45645ghf", "dgfdg", "yui345gd", "6456ds", "456dfs3", "abas8", "sfgds")
val intersectCounts: Map[String, Int] =
  A.intersect(B).map(s => s -> B.count(_ == s)).toMap
val count = A.map(intersectCounts.getOrElse(_, 0))
println(count.toSeq)
Result
(0, 0, 1, 1, 0)
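Another way to avoid repeatedly scanning B is to build a frequency map of B once and then look up each element of A in it; a minimal sketch of that idea with the same data:
val counts: Map[String, Int] = B.groupBy(identity).map { case (s, occ) => s -> occ.length }
val C = A.map(counts.getOrElse(_, 0))
println(C.mkString(", ")) // 0, 0, 1, 1, 0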

Use a foldLeft inside a for-comprehension, yielding one count for each element of A:
val A = List("a","b")
val B = List("b","b")
val C = for (a <- A)
  yield B.foldLeft(0) { case (totalc: Int, w: String) =>
    totalc + (if (w == a) 1 else 0)
  }
And the result:
C: List[Int] = List(0, 2)

Related

Find out frequency of certain integers appearing in an array in Scala

Say I have two arrays: an array A of integers, all distinct, and an array B of integers, all of which appear in array A but are not necessarily distinct. For example:
A could be Array(123, 456, 789)
B could be Array(123, 123, 456, 123, 789, 456)
I want to create an array C, which tells us the frequency of each element (from array A) appearing in array B. In this case, C would be Array(3, 2, 1) because 123 appears 3 times, 456 appears 2 times, and 789 appears 1 time.
What is an efficient way to do this in Scala?
My attempt is
val C: Array[Int] = Array.fill(3)(0)
var idx = 0
for (i <- A) {
  for (j <- B) {
    if (j == i) { C(idx) += 1 }
  }
  idx += 1
}
for (i <- C) { println(i) }
But I understand that this is probably inefficient, and would take a long time if I am dealing with a much larger array A and array B. But I am restricted to for loops and if statements since I am only a beginner with Scala. Is there a more efficient way to do this?
Let's say that n is the length of array A and m is the length of array B.
As it stands, your solution is O(n * m).
You can improve this to O(n + m) by using a mutable HashMap and O(n) extra space.
import scala.collection.mutable
val a = Array(123, 456, 789)
val b = Array(123, 123, 456, 123, 789, 456)
val countMap = mutable.HashMap.empty[Int, Int]
// add all integers in `a` with count 0
for (i <- a) {
  countMap.put(i, 0)
}
// iterate over b and update the count in countMap (if the key exists)
for (i <- b) {
  countMap.get(i).foreach(c => countMap.put(i, c + 1))
}
// fill your array `c`
val c = Array.ofDim[Int](a.length)
for ((i, index) <- a.zipWithIndex) {
  c(index) = countMap.getOrElse(i, 0)
}
println(c.mkString(", "))
// 3, 2, 1
Keep in mind that for-comprehensions over Scala collections have their own overhead; you can improve this further by using while loops.
import scala.collection.mutable
val a = Array(123, 456, 789)
val b = Array(123, 123, 456, 123, 789, 456)
val countMap = mutable.HashMap.empty[Int, Int]
// index variable for our while loops
var i = 0
// add all integers in `a` with count 0
i = 0
while (i < a.length) {
  countMap.put(a(i), 0)
  i = i + 1
}
// iterate over b and update the count in countMap (if the key exists)
i = 0
while (i < b.length) {
  if (countMap.contains(b(i))) {
    countMap.put(b(i), countMap(b(i)) + 1)
  }
  i = i + 1
}
// fill your array `c`
val c = Array.ofDim[Int](a.length)
i = 0
while (i < a.length) {
  c(i) = countMap.getOrElse(a(i), 0)
  i = i + 1
}
println(c.mkString(", "))
// 3, 2, 1

Spark Scala apply function on array of arrays element-wise

Disclaimer: I'm VERY new to Spark and Scala. I am working on a document similarity project in Scala with Spark. I have a DataFrame which looks like this:
+--------+--------------------+------------------+
| text| shingles| hashed_shingles|
+--------+--------------------+------------------+
| qwerty|[qwe, wer, ert, rty]| [-4, -6, -1, -9]|
|qwerasfg|[qwe, wer, era, r...|[-4, -6, 6, -2, 2]|
+--------+--------------------+------------------+
Here I have split each document's text into shingles and computed a hash value for each shingle.
Imagine I have a hash_function(integer, seed) -> integer.
Now I want to apply n different hash functions of this form to the hashed_shingles arrays. I.e. obtain an array of n arrays such that each array is hash_function(hashed_shingles, seed) with seed from 1 to n.
I'm trying something like this, but I cannot get it to work:
val n = 3
df = df.withColumn("tmp", array_repeat($"hashed_shingles", n)) // Repeat minhashes
val minhash_expr = "transform(tmp,(x,i) -> hash_function(x, i))"
df = df.withColumn("tmp", expr(minhash_expr)) // Apply hash to each array
I know how to do it with a UDF, but as I understand it UDFs are not optimized and I should try to avoid them, so I am trying to do everything with org.apache.spark.sql.functions.
Any ideas on how to approach it without udf?
The udf which achieves the same goal is this:
// Family of hashing functions
class Hasher(seed: Int, max_val: Int, p: Int = 104729) {
  private val random_generator = new scala.util.Random(seed)
  val a = 1 + 2 * random_generator.nextInt((p - 2) / 2) // a odd in [1, p-1]
  val b = 1 + random_generator.nextInt(p - 2)           // b in [1, p-1]
  def getHash(x: Int): Int = ((a * x + b) % p) % max_val
}
// Compute a list of minhashes from a list of hashers given a set of ids
class MinHasher(hashes: List[Hasher]) {
  def getMinHash(set: Seq[Int])(hasher: Hasher): Int = set.map(hasher.getHash).min
  def getMinHashes(set: Seq[Int]): Seq[Int] = hashes.map(getMinHash(set))
}
// Minhasher (shingle_bins is defined elsewhere)
val minhash_len = 100
val hashes = List.tabulate(minhash_len)(n => new Hasher(n, shingle_bins))
val minhasher = new MinHasher(hashes)
// Compute minhashes
val minhasherUDF = udf[Seq[Int], Seq[Int]](minhasher.getMinHashes)
df = df.withColumn("minhashes", minhasherUDF('hashed_shingles))

How do I take slice from an array position to end of the array?

How do I get an array of arrays with elements like this? Is there a built-in Scala API that can produce this value (without using combinations)?
e.g
val inp = Array(1,2,3,4)
Output
Vector(
Vector((1,2), (1,3), (1,4)),
Vector((2,3), (2,4)),
Vector((3,4))
)
My answer is below. I feel that there should be a more elegant answer than this in Scala.
val inp = Array(1,2,3,4)
val mp = (0 until inp.length - 1).map { x =>
  (x + 1 until inp.length).map { y =>
    (inp(x), inp(y))
  }
}
print(mp)
Edit: added the constraint about not using combinations.
Using combinations(2) and groupBy() on the first element (index 0) of each combination will give you the values and structure you want. Getting the result as a Vector[Vector[...]] will require some conversion using toVector.
scala> inp.combinations(2).toList.groupBy(a => a(0)).values
res11: Iterable[List[Array[Int]]] = MapLike.DefaultValuesIterable
(
List(Array(2, 3), Array(2, 4)),
List(Array(1, 2), Array(1, 3), Array(1, 4)),
List(Array(3, 4))
)
ORIGINAL ANSWER
Note: This answer is OK only if the elements in the Seq are unique and sorted (according to <). See the edit for the more general case.
With
val v = a.toVector
and by foregoing combinations, I can choose tuples instead and not have to cast at the end
for (i <- v.init) yield { for (j <- v if i < j) yield (i, j) }
or
v.init.map(i => v.filter(i < _).map((i, _)))
Not sure if there's a performance hit for using init on vector
EDIT
For non-unique elements, we can use the indices
val v = a.toVector.zipWithIndex
for ((i, idx) <- v.init) yield { for ((j, jdx) <- v if idx < jdx) yield (i, j) }
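For example, with a made-up input containing a duplicate, the index-based version still pairs each element with everything to its right:
val a = Array(1, 2, 2, 3)
val v = a.toVector.zipWithIndex
val res = for ((i, idx) <- v.init) yield { for ((j, jdx) <- v if idx < jdx) yield (i, j) }
// Vector(Vector((1,2), (1,2), (1,3)), Vector((2,2), (2,3)), Vector((2,3)))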

How to insert an element into an RDD array in Spark

Hi, I've tried to insert an element into an RDD[Array[String]] using Scala in Spark.
Here is an example.
val data: RDD[Array[String]] = Array(Array(1,2,3), Array(1,2,3,4), Array(1,2))
I want to make all arrays in this data have length 4.
If an array's length is less than 4, I want to pad the array with the value NULL.
here is my code that I tried to solve.
val newData = data.map(x =>
  if (x.length < 4) {
    for (i <- x.length until 4) {
      x.union("NULL")
    }
  } else {
    x
  }
)
But The result is Array[Any] = Array((), Array(1, 2, 3, 4), ()).
So I tried another ways. I used yield on for loop.
val newData = data.map(x =>
  if (x.length < 4) {
    for (i <- x.length until 4) yield {
      x.union("NULL")
    }
  } else {
    x
  }
)
The result is Array[Object] = Array(Vector(Array(1, 2, 3, N, U, L, L)), Array(1, 2, 3, 4), Vector(Array(1, 2, N, U, L, L), Array(1, 2, N, U, L, L)))
These are not what I want. I want to return something like this:
RDD[Array[String]] = Array(Array(1,2,3,NULL), Array(1,2,3,4), Array(1,2,NULL,NULL)).
What should I do?
Is there a method to solve it?
union is a functional operation: it returns a new collection and doesn't change the array x. You don't need to do this with a loop, though, and any loop implementation will probably be slower -- it's much better to create one new collection with all the NULL values than to mutate something every time you add a null. Here's a function that should work for you:
def fillNull(x: Array[Int], desiredLength: Int): Array[String] = {
  x.map(_.toString) ++ Array.fill(desiredLength - x.length)("NULL")
}
val newData = data.map(fillNull(_, 4))
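The same padding can also be written with the standard padTo, which appends the fill value until the desired length is reached and leaves longer arrays untouched; a one-line sketch under the same Array[Int] assumption:
val newData = data.map(x => x.map(_.toString).padTo(4, "NULL"))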
I solved your use case with the following code:
// Note: the element type must be Any (not AnyVal), since the arrays mix Ints and the String "NULL"
val initialRDD = sparkContext.parallelize(Array(Array[Any](1, 2, 3), Array[Any](1, 2, 3, 4), Array[Any](1, 2, 3)))
val transformedRDD = initialRDD.map(array =>
  if (array.length < 4) {
    val transformedArray = Array.fill[Any](4)("NULL")
    Array.copy(array, 0, transformedArray, 0, array.length)
    transformedArray
  } else {
    array
  }
)
val result = transformedRDD.collect()
val result = transformedRDD.collect()

Scala logical indexing with for comprehension

I'm trying to translate the following Matlab logical-indexing pattern into Scala code:
% x is an [Nx1] array of Int32
% y is an [Nx1] array of Int32
% myExpensiveFunction() processes batches of unique x.
ux = unique(x);
z = nan(size(x));
for i = 1:length(ux)
    idx = x == ux(i);
    z(idx) = myExpensiveFunction(x(idx), y(idx));
end
Assume I'm working with val x: Array[Int] in Scala. What is the best way to do this?
Edit: To clarify, I'm looking to process batches of (x,y) at a time, grouped by unique x, and return a result (z) with an order corresponding to the initial input. I'm open to sorting x, but eventually need to get back to the original unsorted order. My primary requirement is to handle all the indexing/mapping/sorting in a clear and reasonably efficient way.
Most of this is pretty straightforward in Scala; the only thing that's a bit out of the ordinary is the unique x indices. In Scala you'd do that with a groupBy. Since this is a really index-heavy method, I'm just going to give in and go with indices all the way:
val z = Array.fill(x.length)(Double.NaN)
x.indices.groupBy(i => x(i)).foreach{ case (xi, is) =>
  is.foreach(i => z(i) = myExpensiveFunction(xi, y(i)))
}
z
assuming you can live with a lack of vectors going to myExpensiveFunction. If not,
val z = Array.fill(x.length)(Double.NaN)
x.indices.groupBy(i => x(i)).foreach{ case (xi, is) =>
  val xs = Array.fill(is.length)(xi)
  val ys = is.map(i => y(i)).toArray
  val zs = myExpensiveFunction(xs, ys)
  // zs is indexed by position within the group, not by the original index
  is.zipWithIndex.foreach{ case (i, j) => z(i) = zs(j) }
}
z
This isn't the most natural way to do the computation in Scala, or the most efficient, but you don't care about efficiency if your expensive function is expensive, and it's the closest I can come to a literal translation.
(Translating your matlab-algorithms into almost everything else involves a certain amount of pain or rethinking, since the "natural" computations in matlab are not like those in most other languages.)
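To make the index bookkeeping concrete, here is the scalar variant on a small made-up input, with a stand-in division playing the role of myExpensiveFunction:
def myExpensiveFunction(xi: Int, yi: Int): Double = xi.toDouble / yi // stand-in only
val x = Array(1, 2, 1, 2, 3)
val y = Array(1, 2, 3, 4, 5)
val z = Array.fill(x.length)(Double.NaN)
x.indices.groupBy(i => x(i)).foreach { case (xi, is) =>
  is.foreach(i => z(i) = myExpensiveFunction(xi, y(i)))
}
println(z.mkString(", ")) // 1.0, 1.0, 0.3333333333333333, 0.5, 0.6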
The important point is to get Matlab's unique right. A simple solution would be to use a Set to determine the unique values:
// z must be initialized first, e.g. with NaN as in the other answers
val z = Array.fill(x.length)(Double.NaN)
val occurringValues = x.toSet
occurringValues.foreach{ value =>
  val indices = x.indices.filter(i => x(i) == value)
  for (i <- indices) {
    z(i) = myExpensiveFunction(x(i), y(i))
  }
}
Note: I assume that it is possible to change myExpensiveFunction to an element-wise operation...
scala> def process(xs: Array[Int], ys: Array[Int], f: (Seq[Int], Seq[Int]) => Double): Array[Double] = {
| val ux = xs.distinct
| val zs = Array.fill(xs.size)(Double.NaN)
| for(x <- ux) {
| val idx = xs.indices.filter{ i => xs(i) == x }
| val res = f(idx.map(xs), idx.map(ys))
| idx foreach { i => zs(i) = res }
| }
| zs
| }
process: (xs: Array[Int], ys: Array[Int], f: (Seq[Int], Seq[Int]) => Double)Array[Double]
scala> val xs = Array(1,2,1,2,3)
xs: Array[Int] = Array(1, 2, 1, 2, 3)
scala> val ys = Array(1,2,3,4,5)
ys: Array[Int] = Array(1, 2, 3, 4, 5)
scala> val f = (a: Seq[Int], b: Seq[Int]) => a.sum/b.sum.toDouble
f: (Seq[Int], Seq[Int]) => Double = <function2>
scala> process(xs, ys, f)
res0: Array[Double] = Array(0.5, 0.6666666666666666, 0.5, 0.6666666666666666, 0.6)
