Best way to get Shape from a Transform - mel

Using ls -sl returns a transform. The only way I can find to get the shape of a transform is to use listRelatives, but this seems wonky compared to other workflows. Is there a better, more standard way to get a Shape from a Transform?

Be aware, as of 2018, the pymel getShape() is flawed (IMO) in that it assumes there is only one shape per node, and that is not always the case (like 99% of the time it's the case, though, so I'm nitpicking).
However, the getShape() method only works on a transform nodeType. If you have an unknown node type and you are trying to determine whether it's a mesh or a curve, for instance, you'll want to check whether you can call getShape() at all:
if pm.nodeType(yourPyNode) == 'transform':
    shape = yourPyNode.getShape()
If parsing unknowns, use the listRelatives() command with the shapes (or s) flag set to True:
selected_object = pm.ls(sl=True)[0]
shapes = pm.listRelatives(selected_object, s=True)
if len(shapes) > 0:
    for shape in shapes:
        # Do something with your shapes here
        print('Shape is: {}'.format(shape))
# or more pymel friendly
shapes = selected_object.listRelatives(s=True)
for shape in shapes:
    # Do something in here
    pass

Even though PyMEL is more pythonic and can be more pleasant to use as a dev than maya.cmds, it is not officially supported and brings its share of bugs, breakages and slowness into the pipeline. I would strongly suggest never importing it. It is banned in a lot of big studios.
Here is the solution with the original Maya commands, and it's as simple as this:
shapes = cmds.listRelatives(node, shapes=True)
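For example, a minimal maya.cmds sketch (the selection handling and the noIntermediate and fullPath flags are additions for illustration, not part of the answer above) that collects the shapes under the first selected transform:
import maya.cmds as cmds
# Take the first selected node; assumes something is selected
node = cmds.ls(selection=True)[0]
# shapes=True returns the shape children; noIntermediate skips intermediate
# (history) shapes and fullPath returns unambiguous DAG paths
shapes = cmds.listRelatives(node, shapes=True, noIntermediate=True, fullPath=True) or []
for shape in shapes:
    print(shape)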

A very standard way of getting the shape from a transform in PyMEL:
transform.getShape()
To get shapes from a selection list, you can do the following, which results in a list of shapes:
sel_shapes = [s.getShape() for s in pm.ls(sl=1)]
Note that certain transforms do not have shapes, like a group node, which is basically an empty transform.
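If group nodes may be in the selection, a small guard helps; a sketch assuming PyMEL's getShape() returns None when a transform has no shape:
import pymel.core as pm
# Keep only the shapes of transforms that actually carry one; empty transforms
# such as group nodes are skipped
sel_shapes = [s.getShape() for s in pm.ls(sl=1) if s.getShape() is not None]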

GradCam Implementation in TFJS

I'm trying to implement GradCam (https://arxiv.org/pdf/1610.02391.pdf) in tfjs, based on the following Keras Tutorial (http://www.hackevolve.com/where-cnn-is-looking-grad-cam/) and a simple image classification demo from tfjs, similar to (https://github.com/tensorflow/tfjs-examples/blob/master/webcam-transfer-learning/index.js) with a simple dense, fully-connected layer at the end.
However, I'm not able to retrieve the gradients needed for the GradCam computation. I tried different ways to retrieve the gradients for the last sequential layer, but did not succeed, as the tf.LayerVariable type of the respective layer is not convertible to the type expected by tf.grads or tf.layerGrads.
Has anybody already succeeded in getting the gradients from a sequential layer into a tf.function-like object?
I'm not aware of the ins and outs of the implementation, but I think something along the lines of this: http://jlin.xyz/advis/ is what you're looking for?
Source code is available here: https://github.com/jaxball/advis.js (not mine!)
This official example in the tfjs-examples repo should be close to, if not exactly, what you want:
https://github.com/tensorflow/tfjs-examples/blob/master/visualize-convnet/cam.js#L49

How can I smooth the seam from merged geometry in Maya?

I'm not sure if this is a geometry problem or a normals problem, but I am having a hard time combining meshes without leaving a "seam" or visible discontinuity. In the example below I have two polygon spheres with matching divisions. I have done a union and merged nearby vertices. I then tried to do a little manual adjustment to smooth it, but as you can see the result is not good.
I know that I can use the smooth tool to smooth it by adding new geometry, but I feel like, given that the vertices match perfectly here, I should be able to fix this through some other means. I've played with "soften edge normals" but I don't see any effect from that. I've tried averaging vertices but that doesn't seem to do much. I've gone to the sculpting tools and used the relax and smooth tools there... No matter how correct the geometry looks to me, it still appears discontinuous unless I use the add-geometry smoothing tool.
What is the correct way to merge geometry like this?
thanks.
UPDATE:
I'm going to mark the answer below correct even though it is basically one of the procedures I tried before. I think the real answer is that I just wasn't performing the merge very well (cutting the geometry and merging the edges in the cleanest way possible) prior to softening the edge.
Here is how to merge geometry and get rid of the unpleasant seam from scratch:
a) Delete the history for each object (Edit → Delete by Type → History)
b) Combine the meshes (Mesh → Combine)
c) Merge the edges, controlling the tolerance (Edit Mesh → Merge)
d) Soften the edges, controlling the angle (Mesh Display → Soften Edge)
Remember, the Angle parameter controls whether the edge is hard or soft.
Here are MEL equivalents:
// Deleting a history
DeleteHistory;
// Combining a mesh
polyUnite;
// Merge border edges within a given threshold
polySewEdge;
// Softening the edge (angle = 0...180)
polySoftEdge;
MEL example for softening the edge:
select -r pSphere2.e[35:54];
polySoftEdge -a 180;
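The same steps can be scripted from Python as well; a minimal maya.cmds sketch, where the object names pSphere1/pSphere2 and the tolerance value are just assumptions for illustration:
import maya.cmds as cmds
# Delete construction history on both spheres
cmds.delete('pSphere1', 'pSphere2', constructionHistory=True)
# Combine into a single mesh
merged = cmds.polyUnite('pSphere1', 'pSphere2')[0]
# Merge the border edges within a given tolerance
cmds.polySewEdge(merged, tolerance=0.01)
# Soften the normals across the mesh (angle = 0...180)
cmds.polySoftEdge(merged, angle=180)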

I have MDLAsset created from an SCNScene. How do I extract MDLMeshs, MDLCamera(s), and MDLLights?

I am struggling to traverse an MDLAsset instance created by loading an SCNScene file (.scn).
I want to identify and extract the MDLMeshs as well as camera(s) and lights. I see no direct way to do that.
For example I see this instance method on MDLAsset:
func childObjects(of objectClass: Swift.AnyClass) -> [MDLObject]
Is this what I use?
I have carefully labeled things in the SceneKit modeler. Can I not refer to those labels, which would be ideal? Surely there is a dictionary of ids/labels that I can get access to. What am I missing here?
UPDATE 0
I had to resort to poring over the scene graph in the Xcode debugger due to the complete lack of Apple documentation. Sigh ...
A few things. I see the MDLMesh and MDLSubmesh that I am after. What is the traversal approach to get them? Similarly for lights and cameras.
I also need to know the layout of the vertex descriptors so I can sync with my shaders. Can I force a specific vertex layout on the parsed SCNScene?
MDLObject has a name (because of its conformance to the MDLNamed protocol), and also a path, which is the slash-separated concatenation of the names of its ancestors, but unfortunately, these don't contain the names of their SceneKit counterparts.
If you know you need to iterate through the entire hierarchy of an asset, you may be better off explicitly recursing through it yourself (by first iterating over the top-level objects of the asset, then recursively enumerating their children), since using childObjects(of:) repeatedly will wind up internally iterating over the entire hierarchy to collect all the objects of the specified type.
Beware that even though MDLAsset and MDLObjectContainerComponent conform to NSFastEnumeration, enumerating over them in Swift can be a little painful, and you might want to manually extend them to conform to Sequence to make your work a little easier.
To get all cameras,
[asset childObjectsOfClass:[MDLCamera class]]
Similarly, to get all MDLObjects,
[asset childObjectsOfClass:[MDLObject class]]
Etc.
MDLSubmeshes aren't MDLObjects, so you traverse those on the MDLMesh.
There presently isn't a way to impose a vertex descriptor on MDL objects created from SCN objects, but that would be useful.
One thing you can do is to impose a new vertex descriptor on an existing MDL object by setting a mesh's vertexDescriptor property. See the MDLMesh.h header for some discussion.

(De)serializing an object as an array in XStream

I'm trying to clean up some old code by replacing some arrays that were being passed around with proper objects to improve readability and to encapsulate some behaviour. I ran into a problem when it turned out the arrays were being run through XStream for persistence.
I need to retain the format of the serialization, and the arrays in question are inside various other objects being (de)serialized through XStream. Is there an easy way to handle this?
I'm hoping there's an annotation I can apply or a simple XStream Converter I can write for my new classes and be done with it, but from what I can see it would require writing Converters for each of the containing classes instead. I'm not sure, as I'm not familiar with XStream. If there isn't an easy solution I'm just going to have to give up and leave the arrays in place, as I don't have the time budgeted for anything fancy or to learn the finer points of XStream.
Specifically, I have a TileLayer with a member int[] metaTileFactors, and I want to replace that with a class MetaTiling that has members final int x and final int y, and still have it serialize and deserialize to/from the same XML as before.

best method of turning millions of x,y,z positions of particles into visualisation

I'm interested in different algorithms people use to visualise millions of particles in a box. I know you can use Cloud-in-Cell, adaptive mesh, kernel smoothing, nearest-grid-point methods, etc. to reduce the memory load, but there is very little documentation online on how to do these things.
i.e. I have an array with:
x,y,z
1,2,3
4,5,6
6,7,8
xi,yi,zi
for i = 100 million, for example. I don't want a package like Mayavi/ParaView to do it; I want to code this myself and then load the decomposed matrix into Mayavi (rather than rendering on the fly). My poor 8 GB MacBook explodes if I try to use the particle positions. Any tutorials would be appreciated.
Analysing and creating visualisations for complex multi-dimensional data is hard. The best visualisation almost always depends on what the data is and what relationships exist within it. Of course, you probably want to create visualisations of the data to show and explore relationships. Ultimately, this comes down to trying different possibilities.
My advice is to think about the data and try to find sensible ways to slice up the dimensions. 3D plots, like surface plots or voxel renderings, may be what you want. Personally, I prefer trying to find 2D representations, because they are easier to understand and to communicate to other people. Contour plots are great because they show 3D information in a 2D form. You can show a sequence of contour plots side by side, or as a timelapse to add a fourth dimension. There are also creative ways to use colour to add dimensions while keeping the visualisation comprehensible, which is the most important thing.
I see you want to write the code yourself. I understand that. Doing so will take a non-trivial effort, and afterwards, you might not have an effective visualisation. My advice is this: use a tool to help you prototype visualisations first! I've used gnuplot with some success, although I'm sure there are other options.
Once you have a good handle on the data, and how to communicate what it means, then you will be well positioned to code a good visualisation.
UPDATE
I'll offer a suggestion for the data you have described. It sounds as though you want/need a point density map. These are popular in geographical information systems, but have other uses. I haven't used one before, but the basic idea is to use a function to estimate the density in a 3D space; the density becomes the fourth dimension. Something relatively simple, like counting points per grid cell or a smoothing-kernel sum, may be good enough.
The point density map might be easier to slice, summarise and render than the raw particle data.
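For illustration, here is a rough numpy sketch of that idea (the grid resolution, the stand-in data and the matplotlib contour are assumptions, not part of the suggestion above): bin the particles onto a coarse 3D grid so the per-cell counts become the density, then contour a single slice.
import numpy as np
import matplotlib.pyplot as plt
# positions: (N, 3) array of x, y, z particle coordinates (stand-in data here)
positions = np.random.rand(1_000_000, 3)
# Bin the particles onto a 128^3 grid; counts per cell become the density field
density, edges = np.histogramdd(positions, bins=128)
# Contour one z-slice of the density field: a 2D view of the 3D data
z_index = 64
plt.contourf(density[:, :, z_index].T, levels=20)
plt.xlabel('x bin')
plt.ylabel('y bin')
plt.show()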
The data I have analysed has been of a different nature, so I have not used this particular method before. Hopefully it proves helpful.
PS. I've just seen your comment below, and I'm not sure that this information will help you with that. However, I am posting my update anyway, just in case it is useful information.