I have this code in Scala:
val graph = new Array[Set[Int]](n)

def addedge(i: Int, j: Int) {
  graph(i) += j
}
What does graph(i) += j mean?
Can anybody translate it into another language, like C, C++, or Java?
graph is an Array, just like in C or Java. graph(i) means "access the ith element of graph". Each element of graph is a Set of Ints. Since these Sets are immutable, they have no += method; instead the compiler desugars graph(i) += j into graph(i) = graph(i) + j. In other words, it builds a new Set containing j and everything already in graph(i), and stores that new Set back at index i of the array.
Trying things out in the REPL shows the behavior:
scala> val graph = Array(Set(1,2), Set(2,3), Set(1))
graph: Array[scala.collection.immutable.Set[Int]] = Array(Set(1, 2), Set(2, 3), Set(1))
scala> graph(1) += 4
scala> graph
res0: Array[scala.collection.immutable.Set[Int]] = Array(Set(1, 2), Set(2, 3, 4), Set(1))
I would like to efficiently generate a numpy array of tuples whose size is the product of the dimensions of each axis, using numpy.arange() and exclusively numpy functions. For example, the size of a_list below is max_i*max_j*max_k.
Moreover, for the example below, the array I would like to obtain looks like this: [(0,0,0), (0,0,1), ..., (0, 0, 9), (0, 1, 0), (0, 1, 1), ..., (9, 4, 14)]
a_list = list()
max_i = 10
max_j = 5
max_k = 15
for i in range(0, max_i):
    for j in range(0, max_j):
        for k in range(0, max_k):
            a_list.append((i, j, k))
The loop above, relying on lists and for loops, runs in O(max_i*max_j*max_k) time. I would like a vectorized way to generate an equivalent array of tuples in numpy. Is it possible?
I like Divakar's solution in the comments better, but here's another.
What you're describing is a cartesian product. With some help from this post, you can achieve this as follows
import numpy as np
# Input
max_i, max_j, max_k = (10, 5, 15)
# Build sequence arrays 0, 1, ... N
arr_i = np.arange(0, max_i)
arr_j = np.arange(0, max_j)
arr_k = np.arange(0, max_k)
# Build cartesian product of sequence arrays
grid = np.meshgrid(arr_i, arr_j, arr_k, indexing='ij')  # 'ij' keeps (i, j, k) order
cartprod = np.stack(grid, axis=-1).reshape(-1, 3)
# Convert to list of tuples
result = list(map(tuple, cartprod))
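A caveat worth noting: with meshgrid's default indexing='xy', the rows come out with the first two axes swapped relative to the nested loops, so passing indexing='ij' is needed to preserve the lexicographic (i, j, k) order. Here is a self-contained sketch that checks this against itertools.product (the comparison is my addition, not part of the original answer):

```python
import itertools

import numpy as np

# Input sizes, as in the question
max_i, max_j, max_k = (10, 5, 15)

# Cartesian product via meshgrid; indexing='ij' keeps lexicographic order
grid = np.meshgrid(np.arange(max_i), np.arange(max_j), np.arange(max_k),
                   indexing='ij')
cartprod = np.stack(grid, axis=-1).reshape(-1, 3)
result = list(map(tuple, cartprod))

# Reference: itertools.product is equivalent to the question's triple loop
expected = list(itertools.product(range(max_i), range(max_j), range(max_k)))
assert result == expected
assert len(result) == max_i * max_j * max_k
```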
I have a flat array like this and another flat array that describes the dimensions:
val elems = Array(0,1,2,3)
val dimensions = Array(2,2)
So now I must be able to unflatten that and return a 2*2 array like this:
val unflattened = {{0,1},{2,3}}
There could be any number of dimensions. The only condition is that the length of the flat array equals the product of the dimensions. So, for example, if the dimensions are
Array(3,3)
then I expect that the elems flat array will have to have 9 elements in it! The preconditions will be checked elsewhere so I do not have to worry about it here! All I need to do is to return an unflattened array.
Since this has to work on any dimension size, I think I probably have to define a recursive structure to put my results! Something like this?
case class Elem(elem: Array[Elem])
Could this work?
Any clue on how to go about implementing this function?
Although you should be able to do this with a simple recursive structure, I went with a structure more suited to the problem.
case class Row(elems: List[Int])
trait Matrix
case class SimpleMatrix(rows: List[Row]) extends Matrix
case class HigherMatrix(matrices: List[Matrix]) extends Matrix
// since your flat arrays are always of proper sizes... we are not handling error cases
// so we are dealing with higher N-dimension matrices with size List(s1, s2, ...,sN)
// I have chosen List for the example (as its easy to print), you should choose Array
def arrayToMatrix(flat: List[Int], dimension: Int, sizes: List[Int]): Matrix = dimension match {
  case 1 | 2 =>
    // since your flat arrays are always of proper sizes... there should not be any problems here
    SimpleMatrix(
      flat
        .grouped(sizes.head)
        .map(Row)
        .toList
    )
  case _ =>
    HigherMatrix(
      flat
        .grouped(sizes.tail.reduce(_ * _))
        .map(g => arrayToMatrix(g, dimension - 1, sizes.tail))
        .toList
    )
}
def arrayToSquareMatrix(flat: List[Int], dimension: Int, size: Int): Matrix =
arrayToMatrix(flat, dimension, Range.inclusive(1, dimension).map(_ => size).toList)
Here are the examples
val sm_2__2_2 = arrayToSquareMatrix(Range.inclusive(1, 4).toList, 2, 2)
// sm_2__2_2: Matrix = SimpleMatrix(List(Row(List(1, 2)), Row(List(3, 4))))
val m_2__3_2 = arrayToMatrix(Range.inclusive(1, 6).toList, 2, List(3, 2))
// m_2__3_2: Matrix = SimpleMatrix(List(Row(List(1, 2, 3)), Row(List(4, 5, 6))))
val sm_3__2_2_2 = arrayToSquareMatrix(Range.inclusive(1, 8).toList, 3, 2)
// sm_3__2_2_2: Matrix = HigherMatrix(List(SimpleMatrix(List(Row(List(1, 2)), Row(List(3, 4)))), SimpleMatrix(List(Row(List(5, 6)), Row(List(7, 8))))))
val m_3__3_2_2 = arrayToMatrix(Range.inclusive(1, 12).toList, 3, List(3, 2, 2))
// m_3__3_2_2: Matrix = HigherMatrix(List(SimpleMatrix(List(Row(List(1, 2)), Row(List(3, 4)))), SimpleMatrix(List(Row(List(5, 6)), Row(List(7, 8)))), SimpleMatrix(List(Row(List(9, 10)), Row(List(11, 12))))))
Here is a solution:
def unflatten(flat: Vector[Any], dims: Vector[Int]): Vector[Any] =
  if (dims.length <= 1) {
    flat
  } else {
    val (Vector(dim), rest) = dims.splitAt(1)
    flat.grouped(flat.length / dim).map(a => unflatten(a, rest)).toVector
  }
I have used Vector because Array isn't really a Scala type and doesn't allow conversion from Array[Int] to Array[Any].
Note that this implements only one of the possible partitions with the given dimensions, so it may or may not be what is required.
This is a version using types based on the Matrix trait in another answer:
trait Matrix
case class SimpleMatrix(rows: Vector[Int]) extends Matrix
case class HigherMatrix(matrices: Vector[Matrix]) extends Matrix
def unflatten(flat: Vector[Int], dims: Vector[Int]): Matrix =
  if (dims.length <= 1) {
    SimpleMatrix(flat)
  } else {
    val (Vector(dim), rest) = dims.splitAt(1)
    val subs = flat.grouped(flat.length / dim).map(a => unflatten(a, rest)).toVector
    HigherMatrix(subs)
  }
There is a function grouped on Arrays which does what you want.
scala> Array(0,1,2,3).grouped(2).toArray
res2: Array[Array[Int]] = Array(Array(0, 1), Array(2, 3))
How do I get an array of arrays with elements like this? Is there a built-in Scala API that can produce this value (without using combinations)?
e.g
val inp = Array(1,2,3,4)
Output
Vector(
Vector((1,2), (1,3), (1,4)),
Vector((2,3), (2,4)),
Vector((3,4))
)
My answer is below. I feel there should be a more elegant answer than this in Scala.
val inp = Array(1,2,3,4)
val mp = (0 until inp.length - 1).map( x => {
  (x + 1 until inp.length).map( y => {
    (inp(x), inp(y))
  })
})
print(mp)
EDIT
Added the combinations constraint.
Using combinations(2) and groupBy() on the first element (a(0)) of each combination will give you the values and structure you want. Getting the result as a Vector[Vector[...]] will require some conversion using toVector.
scala> inp.combinations(2).toList.groupBy(a => a(0)).values
res11: Iterable[List[Array[Int]]] = MapLike.DefaultValuesIterable
(
List(Array(2, 3), Array(2, 4)),
List(Array(1, 2), Array(1, 3), Array(1, 4)),
List(Array(3, 4))
)
ORIGINAL ANSWER
Note This answer is OK only if the elements in the Seq are unique and sorted (according to <). See edit for the more general case.
With
val v = a.toVector
and by foregoing combinations, I can choose tuples instead and not have to cast at the end
for (i <- v.init) yield { for (j <- v if i < j) yield (i, j) }
or
v.init.map(i => v.filter(i < _).map((i, _)))
I'm not sure if there's a performance hit for using init on a Vector.
EDIT
For non-unique elements, we can use the indices
val v = a.toVector.zipWithIndex
for ((i, idx) <- v.init) yield { for ((j, jdx) <- v if idx < jdx) yield (i, j) }
What's an efficient way to sum up every n elements of an array in Scala? For example, if my array is like below:
val arr = Array(3,1,9,2,5,8,...)
and I want to sum up every 3 elements of this array and get a new array like below:
newArr = Array(13, 15, ...)
How can I do this efficiently in Spark Scala? Thank you very much.
grouped followed by map should do the trick:
scala> val arr = Array(3,1,9,2,5,8)
arr: Array[Int] = Array(3, 1, 9, 2, 5, 8)
scala> arr.grouped(3).map(_.sum).toArray
res0: Array[Int] = Array(13, 15)
Calling the toIterator method on the array before calling grouped should speed things up a bit, i.e.
arr.toIterator.grouped(3).map(_.sum).toArray
For example, using
val xs = Array.range(0, 10000)
10000 iterations of
xs.toIterator.grouped(3).map(_.sum).toArray
takes about 16.93 seconds, while 10000 iterations of
xs.grouped(3).map(_.sum).toArray
requires approximately 21.49 seconds.
I have two 2D Theano tensors, call them x_1 and x_2, and suppose for the sake of example, both x_1 and x_2 have shape (1, 50). Now, to compute their mean squared error, I simply run:
T.sqr(x_1 - x_2).mean(axis = -1).
However, what I wanted to do was construct a new tensor that consists of their mean squared error in chunks of 10. In other words, since I'm more familiar with NumPy, what I had in mind was to create the following tensor M in Theano:
M = [theano.tensor.sqr(x_1[:, i:i+10] - x_2[:, i:i+10]).mean(axis = -1) for i in xrange(0, 50, 10)]
Now, since Theano doesn't have for loops, but instead uses scan (which map is a special case of), I thought I would try the following:
sequence = T.arange(0, 50, 10)
M = theano.map(lambda i: theano.tensor.sqr(x_1[:, i:i+10] - x_2[:, i:i+10]).mean(axis = -1), sequence)
However, this does not seem to work, as I get the error:
only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
Is there a way to loop through the slices using theano.scan (or map)? Thanks in advance, as I'm new to Theano!
Similar to what can be done in numpy, a solution would be to reshape your (1, 50) tensor to a (1, 5, 10) tensor (or even a (5, 10) tensor), and then compute the mean along the last axis, so each mean runs over one chunk of 10 consecutive elements.
To illustrate this with numpy, suppose I want to compute means by slices of 2
x = np.array([0, 2, 0, 4, 0, 6])
x = x.reshape([3, 2])
np.mean(x, axis=1)
outputs
array([ 1., 2., 3.])
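Applied to the shapes in the question (the stand-in data below is mine, just for illustration), the chunk count comes before the chunk size in the reshape, because reshape fills the last axis fastest:

```python
import numpy as np

# Stand-in data for the two (1, 50) tensors from the question
x_1 = np.arange(50, dtype=float).reshape(1, 50)
x_2 = np.zeros((1, 50))

# (1, 50) -> (1, 5, 10): 5 chunks of 10 consecutive elements each
sq_err = np.square(x_1 - x_2).reshape(1, 5, 10)
M = sq_err.mean(axis=-1)  # shape (1, 5): one mean squared error per chunk

# The explicit slice-by-slice version from the question, for comparison
M_loop = np.array([np.square(x_1[:, i:i + 10] - x_2[:, i:i + 10]).mean(axis=-1)
                   for i in range(0, 50, 10)]).T

assert np.allclose(M, M_loop)
```

In Theano the same pattern should be expressible as T.sqr(x_1 - x_2).reshape((1, 5, 10)).mean(axis=-1), avoiding scan entirely.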