I am implementing a Clojure function (gol [coll]) that receives a vector of equally sized vectors of 1s and 0s, checks the neighbouring positions of each cell, and returns a new board; something like Conway’s Game of Life.
Input:
`(gol [[0 0 0 0 0]
      [0 0 0 0 0]
      [0 1 1 1 0]
      [0 0 0 0 0]
      [0 0 0 0 0]])`
Output:
`[[0 0 0 0 0]
 [0 0 1 0 0]
 [0 0 1 0 0]
 [0 0 1 0 0]
 [0 0 0 0 0]]`
How can I iterate the vectors and change the values at the same time?
Use assoc-in:
(assoc-in v [0 0] 1)
The above will set the top left value to 1.
To set many at once you can reduce over assoc-in.
(def new-values [[[0 0] 1]
                 [[0 1] 2]
                 [[0 2] 3]])

(reduce
 (fn [acc ele]
   (apply assoc-in acc ele))
 v
 new-values)
;;=> [[1 2 3 0 0] ...]
To go from your input to your output the transform would be:
[[[2 1] 0]
 [[2 3] 0]
 [[1 2] 1]
 [[3 2] 1]]
Related
I have an array made of 0s and 1s. I want to calculate a cumulative sum of all consecutive 1s, with a reset each time a 0 is met, using NumPy, as I have thousands of arrays with thousands of rows and columns.
I can do it with loops, but I suspect that won't be efficient.
Would you have a smarter, quicker way to run it on the array?
Here is a short example of the input and the expected output:
import numpy as np
arr_in = np.array([[1,1,1,1,1,1], [0,0,0,0,0,0], [1,0,1,0,1,1], [0,1,1,1,0,0]])
print(arr_in)
print("expected result:")
arr_out = np.array([[1,2,3,4,5,6], [0,0,0,0,0,0], [1,0,1,0,1,2], [0,1,2,3,0,0]])
print(arr_out)
When you run it:
[[1 1 1 1 1 1]
[0 0 0 0 0 0]
[1 0 1 0 1 1]
[0 1 1 1 0 0]]
expected result:
[[1 2 3 4 5 6]
[0 0 0 0 0 0]
[1 0 1 0 1 2]
[0 1 2 3 0 0]]
With numba.vectorize you can define a custom numpy ufunc to use for accumulation.
import numba as nb  # v0.56.4, no support for numpy >= 1.22.0
import numpy as np  # v1.21.6

@nb.vectorize([nb.int64(nb.int64, nb.int64)])
def reset_cumsum(x, y):
    # keep adding while y is nonzero, reset to 0 on a 0
    return x + y if y else 0

arr_in = np.array([[1, 1, 1, 1, 1, 1],
                   [0, 0, 0, 0, 0, 0],
                   [1, 0, 1, 0, 1, 1],
                   [0, 1, 1, 1, 0, 0]])
reset_cumsum.accumulate(arr_in, axis=1)
Output:
array([[1, 2, 3, 4, 5, 6],
       [0, 0, 0, 0, 0, 0],
       [1, 0, 1, 0, 1, 2],
       [0, 1, 2, 3, 0, 0]])
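As a quick sanity check (my addition, not part of the original answer), the same reset-on-zero rule can be written with plain itertools.accumulate; it is slow, Python-level code, but handy for verifying the ufunc on small inputs:

from itertools import accumulate

import numpy as np

def reset_cumsum_row(row):
    # running sum that restarts at 0 whenever a 0 is met
    return list(accumulate(row, lambda x, y: x + y if y else 0))

arr_in = np.array([[1, 1, 1, 1, 1, 1],
                   [0, 0, 0, 0, 0, 0],
                   [1, 0, 1, 0, 1, 1],
                   [0, 1, 1, 1, 0, 0]])
print(np.array([reset_cumsum_row(r) for r in arr_in]))
# [[1 2 3 4 5 6]
#  [0 0 0 0 0 0]
#  [1 0 1 0 1 2]
#  [0 1 2 3 0 0]]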
You can compute the classical cumsum, identify the 0s, forward-fill the cumsum value at each 0, and subtract it:
# identify 0s
mask = arr_in == 0
# classical cumsum
cs = arr_in.cumsum(axis=1)
# forward-fill the cumsum value seen at each 0,
# then subtract it from the cumsum
out = cs - np.maximum.accumulate(np.where(mask, cs, 0), axis=1)
Output:
[[1 2 3 4 5 6]
[0 0 0 0 0 0]
[1 0 1 0 1 2]
[0 1 2 3 0 0]]
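To see what the intermediates look like, here is a trace of the third row (the variable names are mine, for illustration only):

import numpy as np

row = np.array([1, 0, 1, 0, 1, 1])
cs = row.cumsum()                           # [1 1 2 2 3 4]
at_zeros = np.where(row == 0, cs, 0)        # [0 1 0 2 0 0]
baseline = np.maximum.accumulate(at_zeros)  # [0 1 1 2 2 2]
print(cs - baseline)                        # [1 0 1 0 1 2]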
Output on second example:
[[1 2 3 4 5 6 0 1]
[0 1 2 0 0 0 1 0]]
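The input for that second example isn't shown above, but since the output counts run lengths of 1s, it determines the input exactly (nonzero output means a 1, zero output means a 0); reconstructed, it would have been:

arr_in2 = np.array([[1, 1, 1, 1, 1, 1, 0, 1],
                    [0, 1, 1, 0, 0, 0, 1, 0]])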
I have some code in Golang which is supposed to discover the next possibilities in a Tic-Tac-Toe board.
This is the buggy part:
var next []State
for row := 0; row < len(board); row++ {
    for place := 0; place < len(board[row]); place++ {
        if board[row][place] == 0 {
            nPos := board
            fmt.Print(nPos)
            nPos[row][place] = play
            fmt.Print(nPos, row, place, play, "\n")
            next = append(next, nPos)
        }
    }
}
State is defined as [][]int.
board is a State, play is an int, and next is a []State.
The output is as follows:
[[0 0 0] [0 0 0] [0 0 0]][[1 0 0] [1 0 0] [1 0 0]] 0 0 1
[[1 0 0] [1 0 0] [1 0 0]][[1 1 0] [1 1 0] [1 1 0]] 0 1 1
[[1 1 0] [1 1 0] [1 1 0]][[1 1 1] [1 1 1] [1 1 1]] 0 2 1
[[[1 1 1] [1 1 1] [1 1 1]] [[1 1 1] [1 1 1] [1 1 1]] [[1 1 1] [1 1 1] [1 1 1]]]
You can clearly see two things:
One iteration changes the whole column (I guess it has to do with the outer loop, row)
For some reason the changes are saved (nPos is not reinitialized through iterations)
I am somewhat new to Go; am I wrong to expect nPos to be a new variable in every iteration?
I have already looked for issues in the line nPos[row][place] = play, but apparently no specific line causes the issue. I guess it is just the scope.
As @zerkms pointed out:
nPos := board // <-- here both nPos and board refer to the same underlying slice data; Go does not implicitly do a deep slice copy. If you want a duplicate of a slice, you have to clone it manually.
One option is:
cpy := make([]T, len(orig))
copy(cpy, orig)
Note that board is a [][]int, so this clones only the outer slice; each inner row has to be copied the same way. (The fact that one assignment changes a whole column also suggests the rows of board were built sharing a single inner slice.)
Other answers are here:
Concisely deep copy a slice?
I would like to do a convolution using a different kernel for each element. My signal array has the shape n x m and the kernels have the shape [i, i]. I have the kernels in a 4D array of shape n x m x i x i, such that the value at [x, y] is the i x i kernel to apply to the element at [x, y] in the signal array. For example:
2D signal array:
[[0 1]
[5 9]]
4D kernel array, having a different 3x3 kernel for each signal array element:
[[[[1 0 1]   |  [[0 0 0]
   [0 1 0]   |   [0 0 0]
   [1 0 1]]  |   [0 0 0]]
 ------------------------
  [[1 1 1]   |  [[0 1 0]
   [1 1 1]   |   [1 0 1]
   [1 1 1]]  |   [0 1 0]]]]
Desired "convolution" process:
[[1 0 1]   [[- - -]        [[0 0 0]   [[- - -]
 [0 1 0] • [- 0 1] = 9      [0 0 0] • [0 1 -] = 0
 [1 0 1]]  [- 5 9]]         [0 0 0]]  [5 9 -]]

[[1 1 1]   [[- 0 1]        [[0 1 0]   [[0 1 -]
 [1 1 1] • [- 5 9] = 15     [1 0 1] • [5 9 -] = 6
 [1 1 1]]  [- - -]]         [0 1 0]]  [- - -]]
Desired result:
[[9 0]
[15 6]]
I can do this by looping over each convolution window, but that's slow for large arrays:
import numpy

def fancy_convolve(signal, kernels):
    kernel_size = kernels[0][0][0].shape[0]
    pad_width = int(kernel_size / 2)
    padded_signal = numpy.pad(signal, pad_width, 'constant',
                              constant_values=0)
    output = numpy.empty(signal.shape)
    for x in range(signal.shape[0]):
        for y in range(signal.shape[1]):
            signal_window = padded_signal[x:x+kernel_size, y:y+kernel_size]
            kernel = kernels[x, y]
            output[x, y] = numpy.dot(
                signal_window.flatten(), kernel.flatten())
    return output
Is there a function to do this efficiently in numpy, scipy, or another library? Is convolution the right word for it? I've looked at scipy.ndimage.convolve and scipy.signal.convolve, which allow higher dimensions but still only one kernel, and numpy.tensordot, which doesn't do the sliding window part of convolution. Thanks!
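As a quick check (added here; not in the original post), running the loop implementation above on the small 2x2 example does reproduce the desired result:

import numpy

signal = numpy.array([[0, 1],
                      [5, 9]])
kernels = numpy.array([[[[1, 0, 1], [0, 1, 0], [1, 0, 1]],
                        [[0, 0, 0], [0, 0, 0], [0, 0, 0]]],
                       [[[1, 1, 1], [1, 1, 1], [1, 1, 1]],
                        [[0, 1, 0], [1, 0, 1], [0, 1, 0]]]])
print(fancy_convolve(signal, kernels))
# [[ 9.  0.]
#  [15.  6.]]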
To keep it straightforward, without additional consideration of the arrays' memory layout, I would combine a stride trick and an Einstein summation:
import numpy as np

def efficient_fancy_convolve(signal, kernels):
    kernel_size = kernels[0][0][0].shape[0]
    pad_width = int(kernel_size / 2)
    padded_signal = np.pad(signal, pad_width, 'constant',
                           constant_values=0)
    # zero-copy view of every kernel-sized window of the padded signal
    p1 = np.lib.stride_tricks.as_strided(
        padded_signal, kernels.shape, 2 * padded_signal.strides)
    return np.einsum('xyjk,xyjk->xy', p1, kernels)
Then a quick test:
x = np.random.randn(1000, 1000)
kernels = np.random.randn(1000, 1000, 10, 10)

x1 = fancy_convolve(x, kernels)            # 3.7 s
x2 = efficient_fancy_convolve(x, kernels)  # 139 ms
assert np.allclose(x1, x2)
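If you'd rather avoid raw as_strided (which will silently read out of bounds if the shape/stride arithmetic is off), NumPy 1.20+ provides sliding_window_view. A sketch of the same approach using it (windowed_fancy_convolve is my name, not from the original answer):

import numpy as np

def windowed_fancy_convolve(signal, kernels):
    k = kernels.shape[-1]
    padded = np.pad(signal, k // 2, mode='constant')
    # zero-copy view of all k x k windows; slice to (n, m, k, k) so it
    # matches the as_strided version for even kernel sizes as well
    n, m = signal.shape
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))[:n, :m]
    return np.einsum('xyjk,xyjk->xy', windows, kernels)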
I have the following array:
[[1 2 1 0 2 0]
[1 2 1 0 2 0]
[1 2 1 0 2 0]
[1 2 1 0 2 0]
[0 1 2 1 0 0]
[0 1 2 1 0 0]
[0 0 1 0 1 0]
[0 0 0 1 1 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]]
I need to add a column to this array whose value starts at 3 and increments whenever a row differs from the row above it. So the result would look like this:
[[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[0 1 2 1 0 0 4]
[0 1 2 1 0 0 4]
[0 0 1 0 1 0 5]
[0 0 0 1 1 0 6]
[0 0 0 0 1 0 7]
[0 0 0 0 0 1 8]]
Thank you
If a is your array:
a = np.array([[1, 2, 1, 0, 2, 0], [1, 2, 1, 0, 2, 0], [1, 2, 1, 0, 2, 0], [1, 2, 1, 0, 2, 0],
[0, 1, 2, 1, 0, 0], [0, 1, 2, 1, 0, 0], [0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1]])
the following code will get you the result:
n = 3
a = a.tolist()
for i, j in enumerate(a):
    if i == 0:
        j.append(n)
    elif i > 0 and j == a[i-1][:-1]:
        j.append(n)
    else:
        n += 1
        j.append(n)
# a = np.array(a)
which will give:
[[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[1 2 1 0 2 0 3]
[0 1 2 1 0 0 4]
[0 1 2 1 0 0 4]
[0 0 1 0 1 0 5]
[0 0 0 1 1 0 6]
[0 0 0 0 1 0 7]
[0 0 0 0 0 1 8]]
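Since the data starts out as a NumPy array, the same labeling (start at 3, bump whenever a row differs from the one above) can also be vectorized; a sketch of that alternative, assuming a is still the array defined above:

import numpy as np

# True for each row that differs from the previous row
changed = np.any(np.diff(a, axis=0) != 0, axis=1)
# label = 3 + number of changes seen so far (the first row gets 3)
labels = 3 + np.concatenate(([0], np.cumsum(changed)))
out = np.column_stack((a, labels))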
How can I reset to zero all values in a column, from a negative number up to the top of the array?
data = np.array([[1, 1, 1, 2], [0, 1, 0, -1], [-1, 0, 1, 0], [1, 1, 1, 1]])
resetneg_data = np.where(data < 0, 0, data)
print(resetneg_data)
This gives me:
[[1 1 1 2]
[0 1 0 0]
[0 0 1 0]
[1 1 1 1]]
But what I want is:
[[0 1 1 0]
[0 1 0 0]
[0 0 1 0]
[1 1 1 1]]
That is: zero where negative, and zero everywhere above the negative, but not above other zeros. In other words, if a column drops below zero in some row, all the rows above that reset to zero.
Can I mask the values somehow by finding the specific ranges:
mask_end = np.where(data < 0)
print(mask_end)
gives:
(array([1, 2]), array([3, 0]))
maybe... use those values to replace everything up to that row in each column with zeros?
# a running minimum from the bottom up marks each negative value,
# together with everything above it in its column
mask = np.minimum.accumulate(data[::-1])[::-1] < 0
# set the masked positions to 0
data[mask] = 0
data
#[[0 1 1 0]
# [0 1 0 0]
# [0 0 1 0]
# [1 1 1 1]]
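If you'd rather not modify data in place, the same bottom-up running minimum works with np.where; a minimal non-destructive variant (assuming data still holds the original values):

import numpy as np

mask = np.minimum.accumulate(data[::-1], axis=0)[::-1] < 0
out = np.where(mask, 0, data)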