Error: dot support for x of rank 4 is not yet implemented - tensorflow.js

I am using Tensorflow.js to predict on a model I trained in Keras. However, when I feed in my 4-dimensional tensor I get the following error:
UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: dot support for x of rank 4 is not yet implemented: x shape = 32,1,1,100
I can't find anything on the web about this error- I suspect it has something to do with Tensorflow.js not having this capability yet but I'm not sure. Any idea where I can get more information?
Edit 1
Here is my code, the line throwing the error is model.predict(noise_tensor). Most of the code proceeding it is irrelevant:
noise_tensor.print(true)
generated_images = model.predict(noise_tensor) //error occours here
Here is the print output of my 4d tensor:
Tensor
dtype: float32
rank: 4
shape: [64,1,1,100]
values:
[ [ [[0.3799773 , -0.0252707, 0.0118336 , ..., 0.1703698 , -0.0649208, 0.2152225 ],]],
[ [[0.219656 , 0.2850143 , -0.1078744, ..., 0.1627689 , -0.0838831, -0.1112608],]],
[ [[-0.1295149, -0.08308 , 0.1872116 , ..., -0.2033772, -0.4184959, -0.3357461],]],
...
[ [[0.0029674 , 0.0422036 , 0.067896 , ..., 0.1368463 , 0.1122015 , -0.0395375],]],
[ [[0.043546 , -0.0281712, 0.0898769 , ..., 0.205565 , 0.1444133 , 0.0067788 ],]],
[ [[-0.1089588, -0.0161969, -0.0724337, ..., 0.1427118 , -0.2577117, 0.0013836 ],]]]
Here is the summary of the Keras model:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1, 1, 32768) 3309568
_________________________________________________________________
reshape_1 (Reshape) (None, 8, 8, 512) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 8, 8, 512) 2048
_________________________________________________________________
activation_1 (Activation) (None, 8, 8, 512) 0
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 16, 16, 256) 3277056
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 256) 1024
_________________________________________________________________
activation_2 (Activation) (None, 16, 16, 256) 0
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 32, 32, 128) 819328
_________________________________________________________________
batch_normalization_3 (Batch (None, 32, 32, 128) 512
_________________________________________________________________
activation_3 (Activation) (None, 32, 32, 128) 0
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 64, 64, 64) 204864
_________________________________________________________________
batch_normalization_4 (Batch (None, 64, 64, 64) 256
_________________________________________________________________
activation_4 (Activation) (None, 64, 64, 64) 0
_________________________________________________________________
conv2d_transpose_4 (Conv2DTr (None, 128, 128, 1) 1601
_________________________________________________________________
activation_5 (Activation) (None, 128, 128, 1) 0
=================================================================
Total params: 7,616,257
Trainable params: 7,614,337
Non-trainable params: 1,920
_________________________________________________________________
and the corresponding code in Python:
def construct_generator():
generator = Sequential()
generator.add(Dense(units=8 * 8 * 512,
kernel_initializer='glorot_uniform',
input_shape=(1, 1, 100)))
generator.add(Reshape(target_shape=(8, 8, 512)))
generator.add(BatchNormalization(momentum=0.5))
generator.add(Activation('relu'))
generator.add(Conv2DTranspose(filters=256, kernel_size=(5, 5),
strides=(2, 2), padding='same',
data_format='channels_last',
kernel_initializer='glorot_uniform'))
generator.add(BatchNormalization(momentum=0.5))
generator.add(Activation('relu'))
generator.add(Conv2DTranspose(filters=128, kernel_size=(5, 5),
strides=(2, 2), padding='same',
data_format='channels_last',
kernel_initializer='glorot_uniform'))
generator.add(BatchNormalization(momentum=0.5))
generator.add(Activation('relu'))
generator.add(Conv2DTranspose(filters=64, kernel_size=(5, 5),
strides=(2, 2), padding='same',
data_format='channels_last',
kernel_initializer='glorot_uniform'))
generator.add(BatchNormalization(momentum=0.5))
generator.add(Activation('relu'))
generator.add(Conv2DTranspose(filters=1, kernel_size=(5, 5),
strides=(2, 2), padding='same',
data_format='channels_last',
kernel_initializer='glorot_uniform'))
generator.add(Activation('tanh'))
optimizer = Adam(lr=0.00015, beta_1=0.5)
generator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=None)
print('generator')
generator.summary()
return generator
Edit 2
This is a bug in tensorflow.js. For future visitors, please take a look a the GitHub thread here.

For now, the inputs should be of rank 1 or 2 for tf.dot to work

Related

ValueError: Error when checking input: expected conv2d_5_input to have shape (6705, 20, 130) but got array with shape (20, 130, 1)

I am using a dataset of 11 classes of audio files and by using Convolutional Neural Network I tried to classify those audio files.
My model:
train_data = np.array(X)
train_labels = np.array(y)
model = Sequential()
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=train_data.shape))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Conv2D(32, (3,3), activation='relu'))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dense(34, activation="relu"))
model.add(layers.Dense(NUM_LABELS))
model.summary()
train_data is audio loaded using librosa with the shape of (6705, 20, 130)
train_label is an array of one-hot vectors with the shape of (6705, 11)
Whether I expand dimensions: reshaped_train_data = np.expand_dims(train_data, axis=3)
or reshape it: reshaped_train_data = train_data.reshape(-1, train_data.shape[1], train_data.shape[2], 1)
and tried to train it: history = model.fit(reshaped_train_data , train_labels, epochs=50, validation_split=0.1)
It gives me the following error: ValueError: Error when checking input: expected conv2d_5_input to have a shape (6705, 20, 130) but got an array with shape (20, 130, 1)
How to reshape it or expand it in a way so that I could train my model?
There are 2 mistakes:
training data shape
conv2d input_shape parameter
training data should be 4dimensional(batch, rows, cols, channels) so use train_data = np.expand_dims(train_data, axis=3)
input_shape is a tuple of integers that does not include the sample axis so use model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=train_data.shape[1:]))
Here's a sample code using random input:
import numpy as np
import tensorflow.keras.layers as layers
from tensorflow import keras
NUM_LABELS = 11
train_data = np.random.random(size=(6705, 20, 130))
###############expand shape################
train_data = np.expand_dims(train_data, axis=3)
# generate one-hot random vector
train_labels = np.eye(11)[np.random.choice(1, 6705)]
model = keras.Sequential()
###############input_shape################
model.add(layers.Conv2D(32, (3,3), activation='relu', input_shape=train_data.shape[1:]))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Conv2D(32, (3,3), activation='relu'))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dense(34, activation="relu"))
model.add(layers.Dense(NUM_LABELS))
model.summary()
model.compile(
loss = 'categorical_crossentropy', optimizer = 'sgd', metrics = ['accuracy']
)
history = model.fit(train_data , train_labels, epochs=1, validation_split=0.1)
Results:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 18, 128, 32) 320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 9, 64, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 7, 62, 32) 9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 3, 31, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 2976) 0
_________________________________________________________________
dense (Dense) (None, 128) 381056
_________________________________________________________________
dense_1 (Dense) (None, 34) 4386
_________________________________________________________________
dense_2 (Dense) (None, 11) 385
=================================================================
Total params: 395,395
Trainable params: 395,395
Non-trainable params: 0
_________________________________________________________________
189/189 [==============================] - 8s 42ms/step - loss: 16.0358 - accuracy: 0.0000e+00 - val_loss: 16.1181 - val_accuracy: 0.0000e+00

How to fix a mismatch in array shape between layers

I'm building a reinforcement DNN (DQN) but got the following error as my data was sent to the model:
ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (64, 4, 1)
I'm using this a input of (1,139) with a minibatch size of 64, making it: (64,1,139).
def create_model(self):
model = Sequential()
model.add(LSTM(128, input_shape=(1,139), return_sequences=True, stateful=False))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(LSTM(128, return_sequences=True, stateful=False))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(4, activation='softmax'))
#Model compile settings:
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
# Compile model
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
I ran a summary on the model:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 1, 128) 137216
_________________________________________________________________
dropout_1 (Dropout) (None, 1, 128) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 1, 128) 512
_________________________________________________________________
lstm_2 (LSTM) (None, 1, 128) 131584
_________________________________________________________________
dropout_2 (Dropout) (None, 1, 128) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 1, 128) 512
_________________________________________________________________
dense_1 (Dense) (None, 1, 32) 4128
_________________________________________________________________
dropout_3 (Dropout) (None, 1, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32) 0
_________________________________________________________________
dense_2 (Dense) (None, 4) 132
=================================================================
Total params: 274,084
Trainable params: 273,572
Non-trainable params: 512
_________________________________________________________________
None
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_3 (LSTM) (None, 1, 128) 137216
_________________________________________________________________
dropout_4 (Dropout) (None, 1, 128) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 1, 128) 512
_________________________________________________________________
lstm_4 (LSTM) (None, 1, 128) 131584
_________________________________________________________________
dropout_5 (Dropout) (None, 1, 128) 0
_________________________________________________________________
batch_normalization_4 (Batch (None, 1, 128) 512
_________________________________________________________________
dense_3 (Dense) (None, 1, 32) 4128
_________________________________________________________________
dropout_6 (Dropout) (None, 1, 32) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 4) 132
=================================================================
Total params: 274,084
Trainable params: 273,572
Non-trainable params: 512
_________________________________________________________________
None
Shouldn't the flatten layer make it a 2D array? Any ideas? :-/
It does not make any sense have this line
model.add(Flatten())
After a Dense layer. I believe you were supposed to put it after your second LSTM, right?

Is there a way to create subArray chunks of n elements from a given array in Scala?

Given Scala arrayBuffer:
ArrayBuffer(200, 13, 1, 200, 15, 1, 201, 13, 0, 202, 14, 3, 199, 10, 2, 199, 11, 3, 199, 96, 2)
Expected output:
ArrayBuffer((200, 13, 1), (200, 15, 1), (201, 13, 0), (202, 14, 3), (199, 10, 2), (199, 11, 3), (199, 96, 2))
Is there any simple way of achieving this form of chunking in Scala without for loops? The required chunk_size is 3. And the order of these elements must be the same.
I've tried:
def chunkArray(myArray){
val chunk_size = 3
var index = 0
var arrayLength = arrayToInsert.length
var tempArray = ArrayBuffer[Int](2)
val numChunks = arrayToInsert.length / 3
for (i <- 0 to numChunks-1) {
var myChunk = arrayToInsert.slice(i*chunk_size, (i+1)*chunk_size)
tempArray += (myChunk(0), myChunk(1), myChunk(2))
}
}
Expected result:
((200, 13, 1), (200, 15, 1), (201, 13, 0), (202, 14, 3), (199, 10, 2), (199, 11, 3), (199, 96, 2))
You want to use .grouped(3)
( the collections API examples )
collection.mutable.ArrayBuffer(200, 13, 1, 200, 15, 1, 201, 13, 0, 202, 14, 3, 199, 10, 2, 199, 11, 3, 199, 96, 2).grouped(3).toArray
res2: Array[collection.mutable.ArrayBuffer[Int]] = Array(ArrayBuffer(200, 13, 1), ArrayBuffer(200, 15, 1), ArrayBuffer(201, 13, 0), ArrayBuffer(202, 14, 3), ArrayBuffer(199, 10, 2), ArrayBuffer(199, 11, 3), ArrayBuffer(199, 96, 2))
This will create a Buffer of tuples, which is what the original code appears to attempt.
import collection.mutable.ArrayBuffer
val data =
ArrayBuffer(200, 13, 1, 200, 15, 1, 201, 13, 0 /*etc.*/)
data.grouped(3).collect{case Seq(a,b,c) => (a,b,c)}.toBuffer
//res0: Buffer[(Int, Int, Int)] = ArrayBuffer((200,13,1), (200,15,1), (201,13,0) /*etc.*/)
Note that if the final group is not 3 elements then it will be ignored.
This could also be achieved using sliding:
myArray.sliding(3, 3).toArray
Anyway, .grouped is better suited for this use case as discussed here Scala: sliding(N,N) vs grouped(N)

Outer product with R arrays

I have an array of dimension 3x1000. In truth each column is what's interesting. I want to use this to compute an array of dimension 3x3x1000, where a slab i is the outer product of the column i of the original array (in other words, v %*% t(v)). Is there a clean way to do this?
Below is a sample input matrix and output array, in the case of a 2x4 matrix.
mat_in <- cbind(c(1, 2), c(3, 4), c(5, 6), c(7, 8))
arr_out <- array(c(1, 2, 2, 4, 9, 12, 12, 16, 25, 30, 30, 36, 49, 56, 56, 64),
dim = c(2, 2, 4))
This gives you the desired result:
mat_in <- cbind(c(1, 2), c(3, 4), c(5, 6), c(7, 8))
array(apply(mat_in, 2, tcrossprod), dim=c(2,2,4))
### test:
arr_out <- array(c(1, 2, 2, 4, 9, 12, 12, 16, 25, 30, 30, 36, 49, 56, 56, 64),
dim = c(2, 2, 4))
arr_out - array(apply(mat_in, 2, tcrossprod), dim=c(2,2,4))

Load text from csv file with int and float columns into ndarray

I have csv file as input :
6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
1,89,66,23,94,28.1,0.167,21,0
It has mix of int and float.
when i tried to import file using "numpy.loadtext" what i got is 2d array with every column as float.
r = np.loadtxt(open("text.csv", "rb"), delimiter=",", skiprows=0)
and i received output like :
array([[ 6. , 148. , 72. , ..., 0.627, 50. , 1. ],
[ 1. , 85. , 66. , ..., 0.351, 31. , 0. ],
[ 8. , 183. , 64. , ..., 0.672, 32. , 1. ],
...,
[ 5. , 121. , 72. , ..., 0.245, 30. , 0. ],
[ 1. , 126. , 60. , ..., 0.349, 47. , 1. ],
[ 1. , 93. , 70. , ..., 0.315, 23. , 0. ]])
which is perfect have 2d array with each row in list instead of tuple.
but when looking into datatypes every column treated as float which is not correct.
What i am asking is there any way i can do output like :
Desired output
array([[ 6 , 148 , 72 , ..., 0.627, 50 , 1 ],
[ 1 , 85 , 66 , ..., 0.351, 31 , 0 ],
[ 8 , 183 , 64 , ..., 0.672, 32 , 1 ],
...,
[ 5 , 121 , 72 , ..., 0.245, 30 , 0 ],
[ 1 , 126 , 60 , ..., 0.349, 47 , 1 ],
[ 1 , 93 , 70 , ..., 0.315, 23 , 0 ]])
I did tried this approach:
r = np.loadtxt(open("F:/idm/compressed/ANN-CI1/Diabetes.csv", "rb"), delimiter=",", skiprows=0, dtype=[('f0',int),('f1',int),('f2',int),('f3',int),('f4',int),('f5',float),('f6',float),('f7',int),('f8',int)])
Output
array([( 6, 148, 72, 35, 0, 33.6, 0.627, 50, 1),
( 1, 85, 66, 29, 0, 26.6, 0.351, 31, 0),
( 8, 183, 64, 0, 0, 23.3, 0.672, 32, 1),
( 1, 89, 66, 23, 94, 28.1, 0.167, 21, 0),
...,
( 1, 126, 60, 0, 0, 30.1, 0.349, 47, 1),
( 1, 93, 70, 31, 0, 30.4, 0.315, 23, 0)],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<i4'), ('f3', '<i4'), ('f4','<i4'), ('f5', '<f8'), ('f6', '<f8'), ('f7', '<i4'), ('f8', '<i4')])
Here you can see dtype solve the problem but now its not in correct form which i require,
[[col1,col2,...,coln],] instead of [(col1,col2,...,coln),] ndarray
Thanks
------------------EDIT------------------------
problem why i am asking is that I am giving this 2d array as input to my binary classification network, when all values are int and in [[ ]] shape it's converging to values , but in current case it's mixed output is either 0. or 1. with very high error learning.
Visit https://github.com/naitikshukla/MachineLearning/blob/master/neural/demo_ann.py!
for complete code
In input space if i mark my current input and unmark from line 69-88 then output will be both 0 and 1.
So i wanted to change it to correct datatype and see if that will solve my issue.
There are very good explanation below for this not possible , i will see any workaround and see if i can use current input for train and predict.
It's impossible to create a numpy array like [[col1,col2,...,coln],] which containing different types of values.
numpy array is homogeneous. In other words, numpy array contains only values of one single type.
In [32]: sio = StringIO('''6,148,72,35,0,33.6,0.627,50,1
...: 1,85,66,29,0,26.6,0.351,31,0
...: 8,183,64,0,0,23.3,0.672,32,1
...: 1,89,66,23,94,28.1,0.167,21,0''')
In [33]: r = np.loadtxt(sio, delimiter=",", skiprows=0)
In [34]: r.shape
Out[34]: (4, 9)
In [41]: r.dtype
Out[41]: dtype('float64')
This above line create a 2D array of floats, and it's shape is 4x9.
In [36]: r = np.loadtxt(sio, delimiter=",", skiprows=0, dtype=[('f0',int),('f1'
...: ,int),('f2',int),('f3',int),('f4',int),('f5',float),('f6',float),('f7'
...: ,int),('f8',int)])
In [38]: r.shape
Out[38]: (4,)
In [45]: r.dtype
Out[45]: dtype([('f0', '<i4'), ('f1', '<i4'), ('f2', '<i4'), ('f3', '<i4'), ('f4', '<i4'), ('f5', '<f8'), ('f6', '<f8'), ('f7', '<i4'), ('f8', '<i4')])
This line code create a 1-D structured array. Each element of this array is a structure that contains 9 items. It's still homogeneous.
In the first case you get a 2d array of floats. In the second, a 1d array with a structured dtype, a mix of ints and floats. What where columns in the first are now named fields. The structured records are marked with () instead of the [].
Both forms are valid and useful. It just depends on what you need to do.
The structured form is more useful when some of the fields are strings or other things that don't fit the integer/float pattern. Usually you can work with the integers as floats without any loss of functionality.
What exactly is wrong with the first case, the all floats? Which is most important - named columns, or ranges of columns (e.g. 0:5, 5:8)?

Resources