Go: nested loop causes scope issue

I have some code in Go which is supposed to discover the next possibilities in a Tic-Tac-Toe board.
This is the buggy part:
var next []State
for row := 0; row < len(board); row++ {
    for place := 0; place < len(board[row]); place++ {
        if board[row][place] == 0 {
            nPos := board
            fmt.Print(nPos)
            nPos[row][place] = play
            fmt.Print(nPos, row, place, play, "\n")
            next = append(next, nPos)
        }
    }
}
State is a type defined as [][]int.
board is a State, play is an int, and next is a []State.
The output is as follows:
[[0 0 0] [0 0 0] [0 0 0]][[1 0 0] [1 0 0] [1 0 0]] 0 0 1
[[1 0 0] [1 0 0] [1 0 0]][[1 1 0] [1 1 0] [1 1 0]] 0 1 1
[[1 1 0] [1 1 0] [1 1 0]][[1 1 1] [1 1 1] [1 1 1]] 0 2 1
[[[1 1 1] [1 1 1] [1 1 1]] [[1 1 1] [1 1 1] [1 1 1]] [[1 1 1] [1 1 1] [1 1 1]]]
You can clearly see two things:
One iteration changes the whole column (I guess it has to do with the outer loop, row).
For some reason the changes are saved (nPos is not reinitialized between iterations).
I am somewhat new to Go; am I wrong to expect nPos to be a new variable in every iteration?
I have already looked for issues in the line nPos[row][place] = play, but apparently no specific line causes the issue. I guess it is just the scope.

As @zerkms pointed out:
nPos := board <--- here both nPos and board refer to the same underlying slice; Go does not implicitly make a deep copy of a slice. If you want a duplicate of a slice, you must clone it manually.
One option is:
cpy := make([]T, len(orig))
copy(cpy, orig)
Note that since State is a [][]int, this clones only the outer slice; the inner row slices are still shared and must be cloned the same way.
Other answers are here:
Concisely deep copy a slice?

Related

How to calculate cumulative sums of ones with a reset each time a zero is encountered

I have an array made of 0s and 1s. I want to calculate a cumulative sum of all consecutive 1s, with a reset each time a 0 is met, using numpy, as I have thousands of arrays of thousands of lines and columns.
I can do it with loops, but I suspect it will not be efficient.
Would you have a smarter and quicker way to run it on the array?
Here is a short example of the input and the expected output:
import numpy as np
arr_in = np.array([[1,1,1,1,1,1], [0,0,0,0,0,0], [1,0,1,0,1,1], [0,1,1,1,0,0]])
print(arr_in)
print("expected result:")
arr_out = np.array([[1,2,3,4,5,6], [0,0,0,0,0,0], [1,0,1,0,1,2], [0,1,2,3,0,0]])
print(arr_out)
When you run it:
[[1 1 1 1 1 1]
 [0 0 0 0 0 0]
 [1 0 1 0 1 1]
 [0 1 1 1 0 0]]
expected result:
[[1 2 3 4 5 6]
 [0 0 0 0 0 0]
 [1 0 1 0 1 2]
 [0 1 2 3 0 0]]
With numba.vectorize you can define a custom numpy ufunc to use for accumulation.
import numba as nb  # v0.56.4, no support for numpy >= 1.22.0
import numpy as np  # v1.21.6

@nb.vectorize([nb.int64(nb.int64, nb.int64)])
def reset_cumsum(x, y):
    # running total resets to 0 whenever the next element is 0
    return x + y if y else 0

arr_in = np.array([[1,1,1,1,1,1],
                   [0,0,0,0,0,0],
                   [1,0,1,0,1,1],
                   [0,1,1,1,0,0]])
reset_cumsum.accumulate(arr_in, axis=1)
Output:
array([[1, 2, 3, 4, 5, 6],
       [0, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 1, 2],
       [0, 1, 2, 3, 0, 0]])
You can compute the cumsum of the 1s, then identify the 0s and forward-fill the cumulated sum to subtract it:
# identify 0s
mask = arr_in == 0
# classical cumulative sum
cs = arr_in.cumsum(axis=1)
# forward-fill the cumsum value found at each 0, then subtract it from the cumsum
out = cs - np.maximum.accumulate(np.where(mask, cs, 0), axis=1)
Output:
[[1 2 3 4 5 6]
 [0 0 0 0 0 0]
 [1 0 1 0 1 2]
 [0 1 2 3 0 0]]
Output on a second example (input [[1,1,1,1,1,1,0,1], [0,1,1,0,0,0,1,0]]):
[[1 2 3 4 5 6 0 1]
 [0 1 2 0 0 0 1 0]]
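For reference, here is the same masked-cumsum idea packaged as a self-contained function you can test directly (a minimal sketch; the name reset_cumsum_np is mine, not from the answer above):

import numpy as np

def reset_cumsum_np(a):
    # row-wise cumulative sum
    cs = a.cumsum(axis=1)
    # running maximum of the cumsum values seen at 0 positions,
    # i.e. the cumsum at the most recent 0 (0 if no 0 yet in the row)
    offset = np.maximum.accumulate(np.where(a == 0, cs, 0), axis=1)
    # ones counted since the last reset
    return cs - offset

arr_in = np.array([[1,1,1,1,1,1],
                   [0,0,0,0,0,0],
                   [1,0,1,0,1,1],
                   [0,1,1,1,0,0]])
print(reset_cumsum_np(arr_in))  # matches arr_out above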

Method to convolve a 4D array with a 2D array (different kernel for each element)?

I would like to do a convolution using a different kernel for each element. My signal array has the shape n x m and the kernels have the shape [i, i]. I have the kernels in a 4D array of shape n x m x i x i, such that the value at [x, y] is the i x i kernel to apply to the element at [x, y] in the signal array. For example:
2D signal array:
[[0 1]
 [5 9]]
4D kernel array, having a different 3x3 kernel for each signal array element:
[[[[1 0 1] | [[0 0 0]
[0 1 0] | [0 0 0]
[1 0 1]] | [0 0 0]]
------------------------
[[1 1 1] | [[0 1 0]
[1 1 1] | [1 0 1]
[1 1 1]] | [0 1 0]]]]
Desired "convolution" process:
[[1 0 1] [[- - -] [[0 0 0] [[- - -]
[0 1 0] • [- 0 1] = 9 [0 0 0] • [0 1 -] = 0
[1 0 1]] [- 5 9]] [0 0 0]] [5 9 -]]
[[1 1 1] [[- 0 1] [[0 1 0] [[0 1 -]
[1 1 1] • [- 5 9] = 15 [1 0 1] • [5 9 -] = 6
[1 1 1]] [- - -]] [0 1 0]] [- - -]]
Desired result:
[[9 0]
 [15 6]]
I can do this by looping over each convolution window, but that's slow for large arrays:
import numpy

def fancy_convolve(signal, kernels):
    kernel_size = kernels[0][0][0].shape[0]
    pad_width = int(kernel_size / 2)
    padded_signal = numpy.pad(signal, pad_width, 'constant',
                              constant_values=0)
    output = numpy.empty(signal.shape)
    for x in range(signal.shape[0]):
        for y in range(signal.shape[1]):
            signal_window = padded_signal[x:x+kernel_size, y:y+kernel_size]
            kernel = kernels[x, y]
            output[x, y] = numpy.dot(
                signal_window.flatten(), kernel.flatten())
    return output
Is there a function to do this efficiently in numpy, scipy, or another library? Is convolution the right word for it? I've looked at scipy.ndimage.convolve and scipy.signal.convolve, which allow higher dimensions but still only one kernel, and numpy.tensordot, which doesn't do the sliding window part of convolution. Thanks!
To make this straightforward, without additional consideration of the layout of the arrays, I would combine a stride trick with an Einstein sum.
import numpy as np

def efficient_fancy_convolve(signal, kernels):
    kernel_size = kernels[0][0][0].shape[0]
    pad_width = int(kernel_size / 2)
    padded_signal = np.pad(signal, pad_width, 'constant',
                           constant_values=0)
    # view of every kernel-sized window; repeating the strides twice
    # makes the last two axes step through the window elements
    p1 = np.lib.stride_tricks.as_strided(
        padded_signal, kernels.shape, 2 * padded_signal.strides)
    return np.einsum('xyjk,xyjk->xy', p1, kernels)
Then a quick test:
x = np.random.randn(1000, 1000)
kernels = np.random.randn(1000, 1000, 10, 10)
x1 = fancy_convolve(x, kernels) # 3.7s
x2 = efficient_fancy_convolve(x, kernels) # 139ms
assert np.allclose(x1, x2)
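If you would rather avoid raw as_strided (which will happily read out of bounds when the shape/stride arithmetic is off), the same contraction can be written with sliding_window_view; a sketch, assuming numpy >= 1.20 (the name windowed_fancy_convolve is mine):

import numpy as np

def windowed_fancy_convolve(signal, kernels):
    kernel_size = kernels.shape[-1]
    pad_width = kernel_size // 2
    padded = np.pad(signal, pad_width, 'constant', constant_values=0)
    # read-only view of every kernel_size x kernel_size window; the slice
    # keeps exactly one window per signal element (matters for even kernels)
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (kernel_size, kernel_size))[:signal.shape[0], :signal.shape[1]]
    return np.einsum('xyjk,xyjk->xy', windows, kernels)

This should give the same result as efficient_fancy_convolve on the same inputs.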

Numpy: How to replace a column in a numpy array with another column? Why doesn't this example work?

I would like to conditionally replace a column of a numpy array with another array; a minimal example is shown here:
import numpy as np
t = np.asarray([[1,0,0],[1,0,0],[2,0,0],[3,0,0]])
print(t):
[[1 0 0]
 [1 0 0]
 [2 0 0]
 [3 0 0]]
# replace
t[t[:,0]==1][:,1] = np.asarray([2,3])
print(t) again:
[[1 0 0]
 [1 0 0]
 [2 0 0]
 [3 0 0]]
what I want to get is:
t =
[[1 2 0]
 [1 3 0]
 [2 0 0]
 [3 0 0]]
After assigning the new value [2,3], t doesn't change. Does anyone know how to get the new t?
Thanks in advance!
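A note on why this fails, and one way around it (a sketch, not from the original thread): boolean indexing such as t[t[:,0]==1] always returns a copy, so the chained assignment writes into a temporary array and t is left untouched. Doing the row and column selection in a single indexing expression assigns into t itself:

import numpy as np

t = np.asarray([[1,0,0],[1,0,0],[2,0,0],[3,0,0]])
# one indexing expression: rows whose first column is 1, column 1
t[t[:, 0] == 1, 1] = np.asarray([2, 3])
print(t)
# [[1 2 0]
#  [1 3 0]
#  [2 0 0]
#  [3 0 0]]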

Iterate Clojure vectors

I am implementing a Clojure function (gol [coll]) that receives a vector of vectors of the same size containing 1s and 0s, iterates over it checking the neighboring positions of each index, and returns a new board; something like Conway’s Game of Life.
Input:
(gol [[0 0 0 0 0]
      [0 0 0 0 0]
      [0 1 1 1 0]
      [0 0 0 0 0]
      [0 0 0 0 0]])
Output:
[[0 0 0 0 0]
 [0 0 1 0 0]
 [0 0 1 0 0]
 [0 0 1 0 0]
 [0 0 0 0 0]]
How can I iterate the vectors and change the values at the same time?
Use assoc-in:
(assoc-in v [0 0] 1)
The above will set the top left value to 1.
To set many at once you can reduce over assoc-in.
(def new-values [[[0 0] 1]
                 [[0 1] 2]
                 [[0 2] 3]])
(reduce
 (fn [acc ele]
   (apply assoc-in acc ele))
 v
 new-values)
;;=> [[1 2 3 0 0] ...]
To go from your input to your output the transform would be:
[[[2 1] 0]
 [[2 3] 0]
 [[1 2] 1]
 [[3 2] 1]]

Tensorflow: Get indices of array rows which are zero

For a tensor
[[1 2 3 1]
 [0 0 0 0]
 [1 3 5 7]
 [0 0 0 0]
 [3 5 7 8]]
how can I get the indices of the 0 rows? I.e. the list [1,3], in Tensorflow?
As far as I know, you can't really do that in one command like you would with a more advanced library like NumPy.
If you really want to use TF functions I could suggest a few like:
x = tf.Variable([[1,2,3,1],
                 [0,0,0,0],
                 [1,3,5,7],
                 [0,0,0,0],
                 [3,5,7,8]])
y = tf.Variable([0,0,0,0])
# broadcasting compares every row of x against y elementwise
condition = tf.equal(x, y)
indices = tf.where(condition)
This would result in the following:
[[1 0]
 [1 1]
 [1 2]
 [1 3]
 [3 0]
 [3 1]
 [3 2]
 [3 3]]
Or you could use the following if you just want the indices of the zero rows:
row_wise_sum = tf.reduce_sum(tf.abs(x), 1)
select_zero_sum = tf.where(tf.equal(row_wise_sum, 0))
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(select_zero_sum))
The result being:
[[1]
 [3]]
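If you want the flat list [1, 3] rather than a column of indices, you can flatten that result with tf.reshape (a small sketch in the same TF 1.x session style as above; the variable name flat_indices is mine):

import tensorflow as tf

x = tf.Variable([[1,2,3,1],
                 [0,0,0,0],
                 [1,3,5,7],
                 [0,0,0,0],
                 [3,5,7,8]])
# rows whose sum of absolute values is zero are all-zero rows
row_wise_sum = tf.reduce_sum(tf.abs(x), 1)
flat_indices = tf.reshape(tf.where(tf.equal(row_wise_sum, 0)), [-1])
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(flat_indices))  # [1 3]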
It can be done in an easier way too:
import numpy as np
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.placeholder(dtype=tf.float32, shape=[3, 4])
    b = tf.placeholder(dtype=tf.float32, shape=[1, 4])
    # count, per row of a, how many entries differ from b
    res = tf.not_equal(a, b)
    res = tf.reduce_sum(tf.cast(res, tf.float32), 1)
    # index of the first row with zero mismatches
    res = tf.where(tf.equal(res, [0.0]))[0]
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    dict_ = {
        a: np.array([[2.0,6.0,3.0,2.0],
                     [1.0,8.0,32.0,1.0],
                     [1.0,8.0,3.0,11.0]]),
        b: np.array([[1.0,8.0,3.0,11.0]])
    }
    print(sess.run(res, feed_dict=dict_))
Output:
[2]
