Fully Convolutional Network - tensorflow.js

TensorFlow.js version
tfjs-node-gpu 0.2.1
Describe the problem or feature request
I'm trying to build a supervised fully convolutional network and am not able to generate appropriate outputs. The net structure is based on several FCN examples, specifically this one: http://deeplearning.net/tutorial/fcn_2D_segm.html
I've put the mask into a one-hot 4D boolean tensor ordered as [batch, height, width, class], with only a single class. The input data is converted to a float32 tensor of shape [batch, height, width, 1] (no RGB channels) with a range of 0 to 1.
Data is here, and from the same tutorial above: https://drive.google.com/file/d/0B_60jvsCt1hhZWNfcW4wbHE5N3M/view
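A minimal sketch of what that formatting might look like (hypothetical; rawImages, rawMasks, and the dimension variables are illustrative assumptions, not from the original post):
const xs = tf.tensor4d(rawImages, [numExamples, height, width, 1], 'float32'); // pixel values already scaled to the range 0-1
const ys = tf.tensor4d(rawMasks, [numExamples, height, width, numClasses], 'bool'); // one-hot mask; here numClasses = 1
The model is then defined as: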
const input = tf.input({ shape: [this._dims[1], this._dims[2], this._dims[3]], name: 'Input', });
const batchNorm_0 = tf.layers.batchNormalization().apply(input);
// Begin A-Scan Net
const fcn_1_0 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64, } ).apply(input);
const fcn_2 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_1_0);
const fcn_3_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_2);
const fcn_3_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_0);
const fcn_3_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_1);
const fcn_4 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_3_2);
const fcn_5_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_4);
const fcn_5_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_0);
const fcn_5_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_1);
const fcn_6 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_5_2);
const fcn_7_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_6);
const fcn_7_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_0);
const fcn_7_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_1);
const fcn_8 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_7_2);
const fcn_9_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_8);
const fcn_9_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_0);
const fcn_9_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_1);
const fcn_10 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_9_2);
const fcn_11 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: 2048 }).apply(fcn_10);
const fcn_12 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: this._classes }).apply(fcn_11);
const upsample_5 = tf.layers.conv2dTranspose( { kernelSize: [32, 32], strides: [32, 32], filters: this._classes, activation: 'relu', padding: 'same' } ).apply(fcn_12);
const upsample_6 = tf.layers.conv2d( { kernelSize: [1, 1], strides: [1, 1], filters: this._classes, activation: 'softmax', padding: 'same' } ).apply(upsample_5);
var model = tf.model( { name: 'AdvancedCNN', inputs: [input], outputs: [upsample_6] } );
The loss / metric / optimizer setup is:
const LEARNING_RATE = .00001;
const optimizer = tf.train.adam(LEARNING_RATE);
model.compile({
optimizer,
loss: tf.losses.logLoss,
metrics: tf.metrics.categoricalCrossentropy,
});
The issue is that the network isn't learning: the output class is either all 0s or all 1s, even after multiple epochs. I've tried with and without batch normalization and with different learning rates. The data seems sound, so either I'm formatting the data wrong or there is an issue with the loss function, label structure, etc.
Has anyone else built an FCN using TensorFlow.js?

In convolutional neural networks used for classification, the last layer is a dense (fully connected) layer, and it is from that dense layer that the softmax activation is computed. Currently, your network architecture is missing such a layer, which is why you're unable to get your classification right.
It is this last dense layer that performs the classification, using the features learned by the convolutional layers.
The only thing to point out is that you might need a flatten layer before the dense layer, just to make the dimensions match.
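A minimal sketch of what this suggestion amounts to (hypothetical, not the original poster's code), attached to the feature maps from the convolutional stack:
const flat = tf.layers.flatten().apply(fcn_12);
const classified = tf.layers.dense({ units: numClasses, activation: 'softmax' }).apply(flat); // numClasses is an assumed variable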
Update:
Using an upsampling layer as the last layer will likely cause your loss to decrease. I think the issue has to do with the transpose layer. This article explains what upsampling is.
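A rough sketch of that swap (hypothetical; a single upSampling2d + conv2d stage in place of the stride-32 conv2dTranspose):
const up = tf.layers.upSampling2d({ size: [2, 2] }).apply(fcn_12);
const conv = tf.layers.conv2d({ kernelSize: 3, strides: 1, padding: 'same', activation: 'relu', filters: 64 }).apply(up);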

I was able to solve the issue. I replaced the conv2dTranspose with upSampling2d and conv2d layers. The one-hot encoding of the mask is sufficient, as is tf.losses.softmaxCrossEntropy for the loss function.
Finally, resizing my images down to 256x512 helped speed up training. The final net structure that worked (it's a super primitive network, so use it as you will) is:
const fcn_1_0 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64, } ).apply(input);
const fcn_1_1 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64, } ).apply(fcn_1_0);
const fcn_1_2 = tf.layers.conv2d( { name: '', kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64, } ).apply(fcn_1_1);
const fcn_2 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_1_2);
const fcn_3_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_2);
const fcn_3_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_0);
const fcn_3_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_3_1);
const fcn_4 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_3_2);
const fcn_5_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_4);
const fcn_5_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_0);
const fcn_5_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_5_1);
const fcn_6 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_5_2);
const fcn_7_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_6);
const fcn_7_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_0);
const fcn_7_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_7_1);
const fcn_8 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_7_2);
const fcn_9_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_8);
const fcn_9_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_0);
const fcn_9_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_9_1);
const fcn_10 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_9_2);
// const fcn_11_0 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_10);
// const fcn_11_1 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_11_0);
// const fcn_11_2 = tf.layers.conv2d( { kernelSize: [3, 3], strides: [1, 1], activation: 'relu', padding: 'same', filters: 64 } ).apply(fcn_11_1);
// const fcn_12 = tf.layers.maxPool2d( { kernelSize: [2, 2], strides: [2, 2] } ).apply(fcn_11_2);
//const fcn_13 = tf.layers.conv2d({ kernelSize: [7, 7], strides: [1, 1], activation: 'relu', padding: 'same', filters: 4096 }).apply(fcn_12);
const fcn_13_0 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: 4096 }).apply(fcn_10);
//const drop_0 = tf.layers.dropout( { rate: .5 } ).apply(fcn_13);
//const fcn_15 = tf.layers.conv2d({ kernelSize: [1, 1], strides: [1, 1], activation: 'relu', padding: 'same', filters: this._classes }).apply(fcn_13_0);
const upsample_1 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(fcn_13_0);
const conv_upsample1 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_1);
const upsample_2 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample1);
const conv_upsample2 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_2);
const upsample_3 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample2);
const conv_upsample3 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_3);
const upsample_4 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample3);
const conv_upsample4 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_4);
const upsample_5 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(conv_upsample4);
const conv_upsample5 = tf.layers.conv2d( { kernelSize: 3, strides: 1, activation: 'relu', padding: 'same', filters: 64 }).apply(upsample_5);
//const upsample_6 = tf.layers.upSampling2d( { size: [2, 2], padding: 'same' } ).apply(fcn_15);
const conv_upsample = tf.layers.conv2dTranspose( { kernelSize: 1, strides: 1, activation: 'softmax', padding: 'same', filters: this._classes }).apply(conv_upsample5);
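For reference, a compile step consistent with the fix described above might look like the following. This is a hedged sketch, not the original poster's exact code; the learning rate and the resize preprocessing line are illustrative assumptions.
const model = tf.model({ name: 'AdvancedCNN', inputs: [input], outputs: [conv_upsample] });
model.compile({
optimizer: tf.train.adam(1e-5),
loss: tf.losses.softmaxCrossEntropy, // the loss named in the fix above
});
// Illustrative preprocessing: resizing an input image tensor down to 256x512.
// const resized = tf.image.resizeBilinear(image, [256, 512]);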

Related

Optimised loop for array using ruby

I have an (m = rows-1, n = cols-1) dimensional matrix.
I pass i to a method which will return an array in the following manner (provided i <= m, n).
Suppose i = 0; for a 4x4 matrix, it will return the positions of the boundary elements.
Don't read the following as Ruby syntax; it only shows the flow.
square = [[i,i] -> [i, m-i] -> [n-i, m-1] -> [n-i, i] -> [i,i]]
(no data is repeated in the above)
I achieved the above recursively by adjusting parameters, but I need an easier/optimised trick.
Update - for user sawa
arr = [*1..16].each_slice(4).to_a
m,n = arr.length-1, arr[0].length-1
loop_count = 0
output = [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [4, 1], [4, 2], [3, 2], [2, 2], [1, 2], [0, 2], [0, 1]]
loop_count = 1
output = [[1, 1], [2, 1], [2, 2], [1, 2]]
I ended up with this solution, but I think there is a better way out there.
First define a method to print the matrix mapped by indexes, just to check if the result is correct:
def print_matrix(n,m)
range_n, range_m = (0..n-1), (0..m-1)
mapp = range_m.map { |y| range_n.map { |x| [x, y] } }
mapp.each { |e| p e }
puts "-" * 8 * n
end
Then define a method that returns the frame starting from the loop s (where 0 is the external frame):
def frame (n, m, s = 0)
res = []
return res if (s >= n/2 and s >= m/2) and (n.even? or m.even?)
(s..n-s-1).each { |x| res << [x,s] }
(s..m-s-1).each { |y| res << [res.last[0], y] }
(res.last[0].downto s).each { |x| res << [x, res.last[1]] }
(res.last[1].downto s).each { |y| res << [res.last[0], y] }
res.uniq
end
Now, call the methods and check the output:
n, m, loops = 4, 4, 1
print_matrix(n,m)
frame(n, m, loops)
# [[0, 0], [1, 0], [2, 0], [3, 0]]
# [[0, 1], [1, 1], [2, 1], [3, 1]]
# [[0, 2], [1, 2], [2, 2], [3, 2]]
# [[0, 3], [1, 3], [2, 3], [3, 3]]
# --------------------------------
# [[1, 1], [2, 1], [2, 2], [1, 2]]
Here we can use Matrix methods to advantage, specifically Matrix::build, Matrix#minor and Matrix#[].
Code
require 'matrix'
def border_indices(nrows, ncols, i)
m = Matrix.build(nrows, ncols) { |r,c| [r,c] }.minor(i..nrows-1-i, i..ncols-1-i)
[[1,0,m.row_count-1], [0,1,m.column_count-1],
[-1,0,m.row_count-1], [0,-1,m.column_count-2]].
each_with_object([[0,0]]) do |(x,y,n),a|
n.times { a << [a.last.first+x, a.last.last+y] }
end.map { |i,j| m[i,j] }
end
Examples
nrows = 5
ncols = 6
border_indices(nrows, ncols, 0)
#=> [[0, 0], [1, 0], [2, 0], [3, 0],
# [4, 0], [4, 1], [4, 2], [4, 3], [4, 4],
# [4, 5], [3, 5], [2, 5], [1, 5],
# [0, 5], [0, 4], [0, 3], [0, 2], [0, 1]]
border_indices(nrows, ncols, 1)
#=> [[1, 1], [2, 1],
# [3, 1], [3, 2], [3, 3],
# [3, 4], [2, 4],
# [1, 4], [1, 3], [1, 2]]
border_indices(nrows, ncols, 2)
#=> [[2, 2], [2, 3]]
Explanation
Consider the calculation of border_indices(5, 6, 1).
nrows = 5
ncols = 6
i = 1
mat = Matrix.build(nrows, ncols) { |r,c| [r,c] }
#=> Matrix[[[0, 0], [0, 1], [0, 2], [0, 3], [0, 4], [0, 5]],
# [[1, 0], [1, 1], [1, 2], [1, 3], [1, 4], [1, 5]],
# [[2, 0], [2, 1], [2, 2], [2, 3], [2, 4], [2, 5]],
# [[3, 0], [3, 1], [3, 2], [3, 3], [3, 4], [3, 5]],
# [[4, 0], [4, 1], [4, 2], [4, 3], [4, 4], [4, 5]]]
m = mat.minor(i..nrows-1-i, i..ncols-1-i)
#=> mat.minor(1..3, 1..4)
#=> Matrix[[[1, 1], [1, 2], [1, 3], [1, 4]],
# [[2, 1], [2, 2], [2, 3], [2, 4]],
# [[3, 1], [3, 2], [3, 3], [3, 4]]]
b = [[1,0,m.row_count-1], [0,1,m.column_count-1],
[-1,0,m.row_count-1], [0,-1,m.column_count-2]]
#=> [[1, 0, 2], [0, 1, 3], [-1, 0, 2], [0, -1, 2]]
c = b.each_with_object([[0,0]]) do |(x,y,n),a|
n.times { a << [a.last.first+x, a.last.last+y] }
end
#=> [[0, 0], [1, 0],
# [2, 0], [2, 1], [2, 2],
# [2, 3], [1, 3],
# [0, 3], [0, 2], [0, 1]]
c.map { |i,j| m[i,j] }
#=> [[1, 1], [2, 1],
# [3, 1], [3, 2], [3, 3],
# [3, 4], [2, 4],
# [1, 4], [1, 3], [1, 2]]
Note that in the calculation of c, a.last is the last pair of indices added to the array being constructed (a.last = [a.last.first, a.last.last]).
The following will work for both the m == n and m != n cases.
Note that the matrix variable below stands for a 2D array.
def matrixRotation(matrix)
m,n = matrix.length-1, matrix[0].length-1
loop_count = [m,n].min/2
0.upto(loop_count) do |i|
indices = []
i.upto(m-i) { |j| indices << [j, i] }
i.upto(n-i) { |j| indices << [m-i, j] }
i.upto(m-i) { |j| indices << [m-j, n-i] }
i.upto(n-i) { |j| indices << [i, n-j] }
puts "-------------- For Layer #{i+1} ---------------", nil
indices = indices.uniq
values = indices.map { |x| matrix[x[0]][x[1]] }
puts 'indices:', indices.inspect, nil, 'values:', values.inspect
end
end

Transform array by cutting in half but extend elements

I want to transform an existing array in order to display it. To do this, I cut the array in half and add the content of the cut-off elements to the remaining elements.
# source structure
s = [[1, 'blue'],
[2, 'red'],
[3, 'yellow'],
[4, 'green'],
[5, 'orange'],
[6, 'black']]
# result structure
format_array(s)
# [[1, 'blue', 4, 'green'],
# [2, 'red', 5, 'orange'],
# [3, 'yellow', 6, 'black']]
How would you achieve it?
a = [[1, "blue"], [2, "red"], [3, "yellow"], [4, "green"], [5, "orange"], [6, "black"]]
first, last = a.first(a.size / 2), a.last(a.size / 2)
#=> [[[1, "blue"], [2, "red"], [3, "yellow"]], [[4, "green"], [5, "orange"], [6, "black"]]]
first.zip(last).map(&:flatten)
# [
# [1, "blue", 4, "green"],
# [2, "red", 5, "orange"],
# [3, "yellow", 6, "black"]
# ]
Just one more solution:
a.each_slice(a.size / 2).to_a.transpose.map(&:flatten)
#=> [[1, "blue", 4, "green"], [2, "red", 5, "orange"], [3, "yellow", 6, "black"]]
s.each_slice((s.size + 1) / 2).reduce(&:zip).map(&:flatten)
Step 1: Divide the array into two halves using Array#each_slice (see the each_slice documentation).
Step 2: Use Array#zip to pair each element with the corresponding element of the other half (see the zip documentation).
Step 3: Use flatten to flatten each resulting pair (see the flatten documentation).
s #=> [[1, "blue"], [2, "red"], [3, "yellow"], [4, "green"], [5, "orange"], [6, "black"]]
s1, s2 = s.each_slice((s.length)/2).to_a
#=> [[[1, "blue"], [2, "red"], [3, "yellow"]], [[4, "green"], [5, "orange"], [6, "black"]]]
s1.zip(s2).map(&:flatten)
=> [[1, "blue", 4, "green"], [2, "red", 5, "orange"], [3, "yellow", 6, "black"]]
s = [[1, 'blue'], [2, 'red'], [3, 'yellow'], [4, 'green'], [5, 'orange'], [6, 'black']]
# Split into two sections
s1 = s[0...s.length/2]
s2 = s[s.length/2..-1]
# Compile
p s1.each_with_index.map { |x, i| x + s2[i] }
#[[1, "blue", 4, "green"], [2, "red", 5, "orange"], [3, "yellow", 6, "black"]]
A maths trick :-) Group by the first element modulo half the array's length (this relies on the first elements being the consecutive integers 1..6):
s.group_by {|a| a[0]%((s.length)/2) }.values.map {|e| e.flatten }
# [
# [1, "blue", 4, "green"],
# [2, "red", 5, "orange"],
# [3, "yellow", 6, "black"]
# ]

Joining two ranges into 2d array Ruby

How do I join two ranges into a 2D array like this in Ruby? Using zip doesn't give the result I need.
(0..2) and (0..2)
# should become => [[0,0],[0,1],[0,2], [1,0],[1,1],[1,2], [2,0],[2,1],[2,2]]
Ruby has a built-in method for this: repeated_permutation.
(0..2).to_a.repeated_permutation(2).to_a
I'm puzzled. Here it is a day after the question was posted and nobody has suggested the obvious: Array#product:
[*0..2].product [*1..3]
#=> [[0, 1], [0, 2], [0, 3], [1, 1], [1, 2], [1, 3], [2, 1], [2, 2], [2, 3]]
range_a = (0..2)
range_b = (5..8)
def custom_join(a, b)
a.inject([]){|carry, a_val| carry += b.collect{|b_val| [a_val, b_val]}}
end
p custom_join(range_a, range_b)
Output:
[[0, 5], [0, 6], [0, 7], [0, 8], [1, 5], [1, 6], [1, 7], [1, 8], [2, 5], [2, 6], [2, 7], [2, 8]]
A straightforward solution:
range_a = (0..2)
range_b = (5..8)
def custom_join(a, b)
[].tap{|result| a.map{|i| b.map{|j| result << [i, j]; } } }
end
p custom_join(range_a, range_b)
Output:
[[0, 5], [0, 6], [0, 7], [0, 8], [1, 5], [1, 6], [1, 7], [1, 8], [2, 5], [2, 6], [2, 7], [2, 8]]
Simply, this will do it:
a = (0...2).to_a
b = (0..2).to_a
result = []
a.each { |ae| b.each { |be| result << [ae, be] } }
p result
# => [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]]

Sorting array by more than one condition

I have an array of points:
arr = [[2,0], [1,0], [2,1], [1,1]]
How would I sort the elements in descending and ascending order, by x first and then by y within equal x values?
max = [[2,1], [2,0], [1,1], [1,0]]
min = [[1,0], [1,1], [2,0], [2,1]]
min = arr.sort
# => [[1, 0], [1, 1], [2, 0], [2, 1]]
max = min.reverse
# => [[2, 1], [2, 0], [1, 1], [1, 0]]
If performance is an issue rather than simplicity, then the following can be used.
min = arr.sort_by(&:itself)
This is a good use case for Enumerable#sort_by.
For max:
arr.sort_by { |el| [-el[0], -el[1]] }
=> [[2, 1], [2, 0], [1, 1], [1, 0]]
For min:
arr.sort_by { |el| [el[0], el[1]] }
=> [[1, 0], [1, 1], [2, 0], [2, 1]]

Ruby grouping elements

I have an array:
a = [1, 3, 1, 3, 2, 1, 2]
And I want to group by value but keep the indexes, so the result should look like this:
[[0, 2, 5], [1, 3], [4, 6]]
or as a hash:
{1=>[0, 2, 5], 3=>[1, 3], 2=>[4, 6]}
Right now I'm using this pretty ugly and verbose code:
struc = Struct.new(:index, :value)
array = array.map.with_index{ |v, i| struc.new(i, v) }.group_by {|s| s[1]}.map { |h| h[1].map { |e| e[0]}}
You can use a hash with a default value to avoid iterating over the elements twice:
a = [1, 3, 1, 3, 2, 1, 2]
Hash.new { |h, k| h[k] = [] }.tap do |result|
a.each_with_index { |i, n| result[i] << n }
end
#=> { 1 => [0, 2, 5], 3 => [1, 3], 2 => [4, 6] }
a = [1, 3, 1, 3, 2, 1, 2]
a.each_with_index.group_by(&:first).values.map { |h| h.map &:last }
First we get an Enumerator in the form [val, idx], ... (each_with_index), then group_by the value (first value in pair), then take the index (last element) of each pair.
You can use Enumerable#each_with_index, Enumerable#group_by, and Array#transpose:
a = [1, 3, 1, 3, 2, 1, 2]
a.each_with_index.group_by(&:first).values.map { |b| b.transpose.last }
#=> [[0, 2, 5], [1, 3], [4, 6]]
