MEL: Traverse through a joint hierarchy

I'm writing a MEL script that will rename all of the joints in a joint hierarchy to a known format. The idea is that you would select the Hip joint and the script would rename the Hips, and go through every other joint and rename it based on its position in the hierarchy.
How can you traverse through a joint hierarchy in MEL?

If you assign the name of your top joint in the hierarchy to $stat_element and run the following code, it will add the prefix "myPrefix_" to that joint and every element below it.
string $stat_element = "joint1";
select -r $stat_element;
// -dag extends the listing to the whole DAG hierarchy below the selection
string $nodes[] = `ls -sl -dag`;
for ($node in $nodes) {
    rename -ignoreShape $node ("myPrefix_" + $node);
}
Hope this helps

If you need to make detailed decisions as you go along instead of bulk-renaming, traversing the hierarchy is pretty simple. The command is 'listRelatives'; with the 'c' flag it returns the children of a node, and with the 'p' flag it returns the parent. (Note that -p returns a single object, while -c returns an array.)
Joint1
  Joint2
    Joint3
    Joint4
listRelatives -p Joint2
// Result: Joint1 //
listRelatives -c Joint2
// Result: Joint3 Joint4 //
The tricky bit is the renaming, since Maya will not always give you the name you expect (it won't allow duplicate names at the same level of the hierarchy). You'll need to keep track of the renamed objects, or you won't be able to find them after they are renamed in case the new names don't match your expectations.
If you need to keep track of them, you can create a set with the sets command before renaming; no matter what becomes of the names, all of the objects will still be in the set. Alternatively, you can traverse the hierarchy by selecting objects and renaming the current selection -- this won't record the changes, but you won't have problems with objects changing names in the middle of your operation and messing up your commands.
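A minimal sketch of the set-based tracking idea ("joint1" and the set name are assumptions for illustration):
select -hierarchy "joint1";                     // the joint plus all its descendants
string $tracker = `sets -name "renameTracker"`; // set built from the current selection
// ... perform the renames here ...
// Whatever the names have become, the set still resolves to the same objects:
string $renamed[] = `sets -q $tracker`;
print $renamed;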

It can be messy to do this in MEL if you have non-unique names, because the handle you have for an object is the name itself. Once you rename the parent of a node with a non-unique name, the child's name is different. If you stored the list of all names before starting to rename, you will get errors as the rename command attempts to rename nodes that no longer exist. There are two solutions I know of using MEL. But first, here's the PyMel solution, which is much easier; I recommend you use it.
PyMel Solution:
import pymel.core as pm
objects = pm.ls(selection=True, dag=True, type="joint")
pfx = 'my_prefix_'
for o in objects:
    o.rename(pfx + o.name().split('|')[-1])
As pm.ls returns a list of real objects, and not just the names, you can safely rename a parent node and still have a valid handle to its children.
If you really want to do it in MEL, you need to either rename from the bottom up, or recurse so that you don't ask for the names of children before dealing with the parent.
The first MEL solution is to get a list of long object names and sort them based on their depth, deepest first. In this way you are guaranteed to never rename a parent before its children. The sorting bit is too convoluted to be bothered with here, and the recursive solution is better anyway.
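For what it's worth, the heart of that sort is just a token count on the long name, e.g.:
// depth of a long name = number of "|"-separated tokens (3 here)
string $buff[];
int $depth = tokenize("|joint1|joint2|joint3", "|", $buff);
// sort the long names so the largest $depth comes first, then rename in order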
Recursive MEL solution:
global proc renameHi(string $o, string $prefix) {
    // recurse into the children first, so no parent is renamed
    // before its descendants have been dealt with
    string $children[] = `listRelatives -f -c -type "joint" $o`;
    for ($c in $children) {
        renameHi($c, $prefix);
    }
    // the last "|"-separated token of the long name is the short name
    string $buff[];
    int $numToks = tokenize($o, "|", $buff);
    string $newName = $buff[$numToks - 1];
    $newName = ($prefix + $newName);
    rename($o, $newName);
}
string $prefix = "my_prefix_";
string $sel[] = `ls -sl -type "joint"`;
for ($o in $sel) {
    renameHi($o, $prefix);
}
This recursive solution drills down to the leaf nodes and renames them before renaming their parents.

Related

Delete duplicate nodes between two nodes in Neo4j

Due to unwanted script execution, my database has some duplicate nodes, and it looks like this.
From the image, there are multiple nodes with 'see' and 'execute' from p1 to m1.
I tried to eliminate them using this:
MATCH (ur:Role)-[c:CAN]->(e:Entitlement{action:'see'})-[o:ON]->(s:Role {id:'msci'})
WITH collect(e) AS rels WHERE size(rels) > 1
FOREACH (n IN TAIL(rels) | DETACH DELETE n)
Resulting in this:
As you can see here, it deletes all the nodes with the 'see' action.
I think I am missing something in the query which I am not sure of.
The good graph should be like this:
EDIT: Added one more scenario with extra relations
This works if there is more than one extra :) and cleanses your entire graph.
// get all the patterns where you have > 1 entitlement of the same "action" (see or execute)
MATCH (n:Role)-->(e:Entitlement)-->(m:Role)
WITH n,m,e.action AS EntitlementAction,
COLLECT(e) AS Entitlements
WHERE size(Entitlements) > 1
// delete all entitlements, except the first one
FOREACH (e IN tail(Entitlements) |
DETACH DELETE e
)
Your query is pretty explicitly matching the 'see' action.
MATCH (ur:Role)-[c:CAN]->(e:Entitlement{action:'see'})
You might try the query without specifying the action type.
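For example, the match without the action filter would be (an untested sketch; only the MATCH line changes):
MATCH (ur:Role)-[c:CAN]->(e:Entitlement)-[o:ON]->(s:Role {id:'msci'})
Note that you would then want to group by e.action, as in the first answer above, so the 'see' and 'execute' entitlements are not mixed together.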
Edit:
I went back and played with this and this worked for me:
MATCH (ur:Role)-[c:CAN]->(e:Entitlement {action: "see"})-[o:ON]->(s:Role {id:'msci'})
with collect(e) as rels where size(rels) > 1
with tail(rels) as tail
match(n:Entitlement {id: tail[0].id})
detach delete n
You'd need to run two queries, one for each action, but as long as there's only the one extra relationship it should work.
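For the other action, the analogous query would be (a direct variation of the above, untested, assuming as above that each Entitlement has an id):
MATCH (ur:Role)-[c:CAN]->(e:Entitlement {action: "execute"})-[o:ON]->(s:Role {id:'msci'})
with collect(e) as rels where size(rels) > 1
with tail(rels) as tail
match(n:Entitlement {id: tail[0].id})
detach delete n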

Merge/concatenate CSV-imported dataframes and delete duplicates

I am following up on my previous question.
I have sorted out a loop to import CSVs, concatenate the data and remove duplicates.
files = glob.glob('./A08_csv/A08_B1_T*.csv')
dfs = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True) for fp in files]
df = pd.concat(dfs)
df_purged = df.drop_duplicates(inplace=True)
print df_purged
However df.drop_duplicates(inplace=True) does not work (surely I am missing something) and the print returns None. How can I specify that duplicates should be checked by the index? Adding the column name does not seem to work.
Also, how can I turn this loop into a function, so I can apply the same routine to CSVs with different filenames (i.e. something that could work for A08_B1_T*.csv (bedroom) and for A08_KI_T*.csv (kitchen), etc.)?
Do you understand the inplace=True option?
If you do it in place, it means you will modify df, so don't assign the result to df_purged.
You have two solutions here: either you want to keep the 'unpurged' dataframe, and you do:
df_purged = df.drop_duplicates()
Or you don't care about keeping it, and you do:
df.drop_duplicates(inplace = True)
With the first option your result dataframe is df_purged; with the second it is df itself, which has been purged since you performed the operation in place.
That being said, if you want to drop duplicates based on your index and you don't need to keep it, you can reset_index and then drop_duplicates like this:
df_purged = df.reset_index().drop_duplicates(['index']).drop('index',1)
And if you need to keep the index (modulo the dropped lines):
df_purged = df.reset_index().drop_duplicates(['index']).set_index('index')
del df_purged.index.name
(Note that, once again, deleting the index name is only there for aesthetics.)
Would this help?
df.drop_duplicates(['col_name'])
Here is a solution that adds the index as dataframe columns, drops duplicates on those, then restores the index:
df = df.reset_index().drop_duplicates(subset=['Date', 'Time'], keep='last').set_index(['Date', 'Time'])
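As for turning the loop into a reusable routine, here is a minimal sketch wrapping the steps from the question in a function (the function name is an assumption):
import glob
import pandas as pd

def load_and_purge(pattern):
    # concatenate every CSV matching the glob pattern, then drop duplicate rows
    files = glob.glob(pattern)
    dfs = [pd.read_csv(fp, index_col=[0], parse_dates=[0], dayfirst=True)
           for fp in files]
    return pd.concat(dfs).drop_duplicates()

bedroom = load_and_purge('./A08_csv/A08_B1_T*.csv')
kitchen = load_and_purge('./A08_csv/A08_KI_T*.csv')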

Does the Google File System allow listing directory contents?

In the GFS paper, Section 4.1 describes how GFS is able to make concurrent mutations within a directory while only requiring a read lock on the directory for each client - there's no actual inode in GFS, so clients are free to create, remove, or mutate /x/y/somefile while only requiring a read lock on /x/ and /x/y/.
If there are no inodes, then is it still possible to maintain an explicit tree structure? The only way I can see this working is if the master maintains a flattened, 1-dimensional mapping from directory or file names to their metadata, allowing for fast file creation and manipulation.
Suppose that some client of GFS wanted to scan the names of all files in a directory - for instance, ls. Without an iteration over all metadata nodes, how is this possible?
It might be possible for a client to maintain their own version of what they think the directory tree looks like in the GFS, but this will only work if each client keeps to their own directory.
A master lookup table offers access to a single conceptual tree of nodes. It does this by listing all paths of names to nodes. Some nodes are directories. The only data is owned by non-directory leaf nodes. Eg these paths:
/a/b/foo
/a/b/c/bar
/a/baz/
describe this tree:
/
└─ a/
   ├─ b/
   │  ├─ foo
   │  └─ c/
   │     └─ bar
   └─ baz/
Every path identifies a node. The nodes that are the children of a node are the ones whose paths are one name longer in the lookup table. To list a node's children is to list all the paths in the lookup table that are one name longer than its path. What the paper means by metadata is info like whether and how a node is locked and, for a non-directory leaf node, where its (unshared) data is.
One doesn't navigate by visiting directory nodes that own data that gives child and parent node names and whether they are directories, as in Unix/Linux. Copying a leaf means copying its data to another leaf's, like Unix/Linux cat, not cp. I presume one can copy a subtree, which would add new paths in the lookup table and copy data for non-directory leaves.
One cannot use technical terms like "file" or "directory" as if they mean the same thing in both systems. What one can do is consider GFS and Unix/Linux to both manage the same kind of tree of paths of names through directory nodes to directory leaves and non-directory data-owning leaves. But after that, the other parts of the file system state (metadata and data) and their operators differ. In your mind, put "GFS-" and "Unix/Linux-" in front of every technical term other than those referring to trees of named nodes.
EDIT: Examples.
1.
Suppose that some client of GFS wanted to scan the names of all files in a directory - for instance, ls. Without an iteration over all metadata nodes, how is this possible?
A directory listing would return the paths in the lookup table that extend the given directory's path. GFS will offer file server commands to do such things, or that support doing such things, hiding its implementation. It would be enough (but slow) to be able to iterate through the lookup table. Eg ls /a/b:
/a/b/foo
/a/b/c/bar
2.
To copy source node children to be target node children: For each path that extends the source's path, add to the lookup table the path got by replacing that prefix by the target path. Presumably the copy command creating the new nodes copies associated data for non-directories. Eg copy children of /a/ to /a/b/c/ adds:
/a/b/c/b/foo
/a/b/c/b/c/bar
/a/b/c/baz/
giving:
/
└─ a/
   ├─ b/
   │  ├─ foo
   │  └─ c/
   │     ├─ bar
   │     ├─ b/
   │     │  ├─ foo
   │     │  └─ c/
   │     │     └─ bar
   │     └─ baz/
   └─ baz/
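A toy Python sketch of both examples against a flattened lookup table (purely illustrative; these are not GFS's actual structures or commands):
# Flattened path -> data table; directory paths end in "/" and own no data.
table = {
    "/a/b/foo": "data-foo",
    "/a/b/c/bar": "data-bar",
    "/a/baz/": None,
}

def ls(dirpath):
    # Example 1: every path in the table that extends the directory's path.
    prefix = dirpath.rstrip("/") + "/"
    return sorted(p for p in table if p.startswith(prefix))

def copy_children(src, dst):
    # Example 2: re-prefix every path under src so it lives under dst,
    # copying the associated data for non-directory leaves.
    src = src.rstrip("/") + "/"
    dst = dst.rstrip("/") + "/"
    for path in list(table):
        if path.startswith(src):
            table[dst + path[len(src):]] = table[path]

print(ls("/a/b"))              # ['/a/b/c/bar', '/a/b/foo']
copy_children("/a", "/a/b/c")  # adds /a/b/c/b/foo, /a/b/c/b/c/bar, /a/b/c/baz/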

How to determine newly added elements in my private branch

In a major development, I have added multiple files to source control in my private branch. There were also existing files that were modified and checked into my private branch. Now, as we are approaching the merge of the changes to our project branch, I would like to validate all the elements I have newly added to my private branch, to ascertain whether their locations are correct (e.g., whether they should have been placed in another location with a symlink added).
I listed all the elements in my private branch, but could not figure out which of these elements were newly added.
Is there a reliable way to do so?
You can do a query finding all elements in a given branch since a certain date for a certain user:
cleartool find . -type f -branch "brtype(abranch)" -element "{created_since(10-Jan)}" -user aloginname -print
(this would search only files, as mentioned in "how to find files in a given branch", and also in "how can I list a certain user's activity in a branch")
The other approach is to create a dedicated (simple base ClearCase) view to display those elements, as in "Get all versions from a specific time" or in "how to find out all the activities that happened in a branch in the last month?".
But generally, the first query is enough.
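For reference, such a dedicated view's config spec could look like this (a hedged sketch; the branch name is an assumption):
element * CHECKEDOUT
element * .../abranch/LATEST
element * /main/LATEST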

Merge arrow in ClearCase

I have to merge all objects from a sub branch to the main branch recursively. I would like to merge manually by checking in the code from the sub branch to the main branch instead of using the merge command in ClearCase.
So after the check-in into the main branch, I would like to draw the merge arrow recursively for all my objects, i.e. from the sub branch to the main branch.
I have used this command:
cleartool mkhlink -unidir Merge <<sub branch path>>@@/main/<<sub branch>> <<main branch path>>@@/main/LATEST
But when I did it, it drew the arrow for the directory only, not for all the contents of the directory.
Please suggest how to draw the arrow recursively from the sub branch to the main branch objects.
Thanks in advance
According to the merge man page,
cleartool merge -ndata -to aFile -version /main/a/SourceVersion /main/a/DestVersion
will draw a red arrow without performing any actual merges.
Since you have made your checkouts/checkins in a branch or a UCM activity, what you need to do is:
be in your destination view (the one where the merge occurred)
query all the versions you made for that merge
extract the file for each version
extract the destination version
compute the source version (for instance /main/aBranch/LATEST)
do a "merge -ndata"
So, it is not so much a "recursive" algorithm, but rather an enumeration of all versions involved in this merge in order to draw the appropriate red arrows.
Just use the ClearCase Merge Manager - it should take all the pain out of doing a task like this.
I know this is 8 months later but have you tried this?
cleartool find . -type f -nxname -exec 'cleartool merge -to $CLEARCASE_PN -ndata -version /main/aBranch/LATEST '
You probably do not need the -nxname in the first part. I changed to the directory in which I wanted to create merge arrows only, and did a find for everything in the directory.
