How to create a tensor with shape [null,256] in TensorFlow.js? - tensorflow.js

I'm having a problem creating a latent point for my GAN model in TensorFlow.js; my model keeps throwing this error:
"Error when checking: expected input_4 to have shape [null,256] but got array with shape [8,1]"
How can I create a tensor with shape [null,256] in TensorFlow.js?

If you search for "null" in https://js.tensorflow.org/api/latest/ you'll see that null means any dimension, so a tensor with shape [1, 256] or [42, 256] or the like should work. In your case the error says your input has shape [8,1], so what the model wants is a tensor of shape [8, 256]: eight latent vectors of 256 values each.
When I was first using TFJS I was also confused by "null". Perhaps the TFJS team could put some effort into making error messages easier to understand.
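For example, here is one way to build such a batch as a plain nested array and then wrap it in a tensor (the batchSize and latentDim names are my own, chosen to match the shapes in the error message):

```javascript
// Build a batch of 8 latent vectors of length 256 → shape [8, 256].
const batchSize = 8;
const latentDim = 256;
const latentPoints = Array.from({ length: batchSize }, () =>
  Array.from({ length: latentDim }, () => Math.random() * 2 - 1)
);

// With TensorFlow.js loaded, wrap the array as a tensor and feed the model
// ('generator' is a placeholder for your GAN's generator model):
// const z = tf.tensor2d(latentPoints);       // shape [8, 256]
// const fakeImages = generator.predict(z);
console.log(latentPoints.length, latentPoints[0].length); // 8 256
```

In practice you would more likely skip the plain array and call `tf.randomNormal([8, 256])` directly, which also yields a tensor of shape [8, 256].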

Related

How to implement get_tensor_by_name and predict in tensorflow.js

I want to use Faster R-CNN Inception v2 to do object detection in tensorflow.js, but I can't find methods in tfjs like get_tensor_by_name or session.run for prediction.
In TensorFlow (Python), the code is as follows:
Define input and output node:
# Define input Tensors for detection_graph
self.image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
# Define output Tensors for detection_graph
self.detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
self.detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
self.detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
self.num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')
Predict:
(boxes, scores, classes, num) = self.sess.run(
    [self.detection_boxes, self.detection_scores, self.detection_classes, self.num_detections],
    feed_dict={self.image_tensor: image_np_expanded})
Does anyone know how to implement those two parts of the code in tfjs?
Please help. Thank you!
You don't have a session.run function in TensorFlow.js as there is in Python. In Python, you start by defining a graph, and in the run function you execute the graph. Tensors and variables are assigned values in the graph, but the graph defines only the flow of computation; it does not hold any values. The real computation occurs when you run the session. One can create many sessions, where each session can assign different values to the variables; that is why the graph has a get_tensor_by_name method, which outputs the tensor whose name is given as a parameter.
You don't have the same mechanism in JS: execution is eager, so you can use tensors and variables as soon as you define them. Whenever you define a new tensor or variable, you can print it on the following line, whereas in Python you can only print a tensor or variable during a session. get_tensor_by_name does not really make sense in JS.
As for the predict function, you do have one in JS as well: if you create a model using tf.model or tf.sequential, you can invoke predict to make a prediction.
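A rough sketch of what the Python snippet above could look like in tfjs with a converted graph model (the model path and image element are placeholders, and this assumes the model was first converted with tensorflowjs_converter):

```javascript
// Output tensor names, matching the Python get_tensor_by_name calls above.
const outputNames = [
  'detection_boxes',
  'detection_scores',
  'detection_classes',
  'num_detections',
];

// Hypothetical usage (requires @tensorflow/tfjs and a converted model):
// const model = await tf.loadGraphModel('path/to/model.json');
// const input = tf.browser.fromPixels(imageElement).expandDims(0);
// const [boxes, scores, classes, num] =
//   await model.executeAsync(input, outputNames);
console.log(outputNames.length); // 4
```

Here tf.loadGraphModel plays the role of the Python graph loading, and model.executeAsync with a list of output names is the closest analogue to sess.run with a fetch list.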

logistic regression always predicts the same value when the network is deeper

I'm using darknet to train a logistic regression model, but it always outputs the same prediction for different input images.
However, when I remove some convolutional layers, it seems to become normal (different outputs for different input images).
The model cfg file is as follows:
[net]
some parameter...
[convolutional]
[convolutional]
[shortcut]
...
[avgpool]
[connected]
batch_normalize=1
output=1
activation=linear
[logistic]
I tried different learning rates and momentum values; it didn't work.
And the training data is balanced: two classes, 15000 images for each class.
Any advice?
Thanks.

scn file all items rotated

I have a curtain I want to render in my ARKit app. I exported all parts of my curtain as COLLADA, added them to my .scn file, and placed them so that they line up properly and form a correct curtain.
Now I have added the file to the ARKit testing app, and it seems like all the individual objects are rotated around the X-axis of their own coordinate system.
Why is this? Does it have something to do with Y vs. Z being up?
Blender's coordinate system uses Z-up, but SceneKit uses Y-up. I believe that's where your issue is stemming from.
SCNScene has an initializer with loading options; try using the convertToYUp option. This could be done at runtime, or ahead of time with a custom command-line tool.
Here's an example that loads a COLLADA file with the convertToYUp option, then exports it to the URL destination of your choice:
let scene = try? SCNScene(url: daeURL, options: [.convertToYUp: true])
scene?.write(to: scnURL, options: nil, delegate: nil, progressHandler: nil)
I've been very happy with the results. SceneKit is able to convert not just single objects but multiple objects in complex scenes.
Just remember to apply rotation in Blender before exporting to get good results. I believe the hotkey is Ctrl+A, then choosing Rotation from the Apply menu.

Best way to get Shape from a Transform

Using ls(sl=True) returns a transform. The only way I can find to get the shape from a transform is to use listRelatives, but this seems wonky compared to other workflows. Is there a better, more standard way to get a shape from a transform?
Be aware that, as of 2018, the PyMEL getShape() is flawed (IMO) in that it assumes there is only one shape per node, and that is not always the case (99% of the time it is, so I'm nitpicking).
Also, the getShape() method only exists on a transform nodeType. If you have an unknown node type that you are trying to parse (say, to check whether it's a mesh or a curve), you'll want to check whether you can call getShape() at all:
if pm.nodeType(yourPyNode) == 'transform':
    shape = yourPyNode.getShape()
If parsing unknowns, use the listRelatives() command with the shapes (or s) flag set to True:
selected_object = pm.ls(sl=True)[0]
shapes = pm.listRelatives(selected_object, s=True)
if len(shapes) > 0:
    for shape in shapes:
        # Do something with your shapes here
        print('Shapes are: {}'.format(shape))

# or, more PyMEL friendly:
shapes = selected_object.listRelatives(s=True)
for shape in shapes:
    # Do something in here
    pass
Even though PyMEL is more Pythonic and can be more pleasant for a developer to use than maya.cmds, it is not officially supported and brings its share of bugs, breakage, and slowness into the pipeline. I would strongly suggest never importing it; it is banned in a lot of big studios.
Here is the solution with the original Maya commands, and it's as simple as this:
shapes = cmds.listRelatives(node, shapes=True)
The very standard way of getting a shape from a transform in PyMEL:
transform.getShape()
To get the shapes from a selection list, you can do the following, which results in a list of shapes:
sel_shapes = [s.getShape() for s in pm.ls(sl=1)]
Note that certain transforms do not have shapes, like a group node, which is basically an empty transform.

Saving a decision tree model for later application in Julia

I have trained a pruned decision tree model in Julia using the DecisionTree package. I now want to save this model for use on other data sets later.
I have tried converting the model to a data array for export using writetable(), and I have tried exporting using writedlm(), and neither of these works. When I look at the type of the model, I see that it is a DecisionTree.Node. I don't know how to work with this and can't get it to export/save.
In:DataFrame(PrunedModel)
Out:LoadError: MethodError: `convert` has no method matching convert(::Type{DataFrames.DataFrame}, ::DecisionTree.Node)
This may have arisen from a call to the constructor DataFrames.DataFrame(...),
since type constructors fall back to convert methods.
Closest candidates are:
call{T}(::Type{T}, ::Any)
convert(::Type{DataFrames.DataFrame}, !Matched::Array{T,2})
convert(::Type{DataFrames.DataFrame}, !Matched::Dict{K,V})
...
while loading In[22], in expression starting on line 1
in call at essentials.jl:56
In:typeof(PrunedModel)
Out:DecisionTree.Node
Any ideas how I can get this model saved for use later?
If I understand correctly that this is a Julia object, you should try using the JLD.jl package to save the object to disk and load it back in, preserving the type information.
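A minimal sketch of what that could look like (the file name here is a placeholder, and PrunedModel is the variable from the question; JLD stores arbitrary Julia objects along with their types):

```julia
using JLD  # Pkg.add("JLD") if it is not installed

# Save the trained model to disk under the key "model".
save("pruned_model.jld", "model", PrunedModel)

# Later, possibly in another session, load it back;
# the DecisionTree.Node type information is preserved.
reloaded_model = load("pruned_model.jld", "model")
```

With the model reloaded you should be able to call DecisionTree's apply_tree on new feature data as usual.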
