Verify edge and create edge in the same instruction (gremlin python) - graph-databases

My initial logic of checking whether an edge is present and then creating it requires two separate queries. I'm trying to verify and create an edge in a single instruction.
This query does not seem to work:
ipdb> prop = self._graph.V('pppp').outE('friend').hasId('testEdge').as_('e').inV().
                  hasId('dddd').select('e').
                  coalesce(__.property('testedder', 1111).fold().unfold(),
                           __.V('dddd').as_('to_a').V('pppp').addE('friend').to('to_a')).
                  toList()
1) The first part of the coalesce() - updating the property of the edge - works fine.
2) The second part of the coalesce() is either not being called or not working, although it works as an independent query. Does as_() not work in anonymous traversals?
PS: I'm using AWS Neptune.

You had the right idea but you needed some simplification. I'll try to do it in steps. First, whenever I see labelled steps, I try to see if there is a way to avoid using them. In this case, they can be factored out:
g.V('pppp').outE('friend').
  filter(hasId('testEdge').inV().hasId('dddd')).
  coalesce(__.property('testedder', 1111).fold().unfold(),
           __.V('dddd').as_('to_a').V('pppp').addE('friend').to('to_a'))
Readability of the traversal improves on those first two lines because the reader can immediately see that you want to find an edge given some criteria which was only implied by the step labeling approach. Next, I looked at coalesce(). As it stands, if filter() returns no edges then coalesce() will never get a chance to execute and that's why the second part of coalesce() never has an opportunity to work for you. So, let's clean that part up:
g.V('pppp').outE('friend').
  filter(hasId('testEdge').inV().hasId('dddd')).
  fold().
  coalesce(unfold().property('testedder', 1111),
           V('dddd').as_('to_a').V('pppp').addE('friend').to('to_a'))
If it's not clear why the fold() and unfold() are where they are, you should check out my detailed explanation of the approach here. So, with fold() and unfold() where they should be, the coalesce() should now trigger both conditions depending on whether or not an edge passes the filter(). The first part of the coalesce() is fine, but the second could still use a bit of work, as I'd again like to factor out the step labels if they aren't necessary:
g.V('pppp').outE('friend').
  filter(hasId('testEdge').inV().hasId('dddd')).
  fold().
  coalesce(unfold().property('testedder', 1111),
           addE('friend').from_(V('pppp')).to(V('dddd')))
The above Gremlin assumes that you know "pppp" vertex exists. If you do not then you might try (as suggested by Daniel Kuppitz):
g.V('pppp').
  not_(outE('friend').hasId('testEdge').
       filter(inV().hasId('dddd')).
       property('testedder', 1111)).as_('p').
  V('dddd').
  addE('friend').from_('p')
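
For completeness, here is a minimal gremlin-python sketch of the fold()/coalesce() upsert run against a remote graph. The endpoint is a placeholder (replace it with your own Neptune cluster endpoint), and the connection boilerplate is the standard gremlin-python setup rather than anything taken from the question; note that from is spelled from_() in Python.

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint -- substitute your Neptune (or other Gremlin Server) address.
conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Update the 'friend' edge from 'pppp' to 'dddd' if it exists, otherwise create it.
g.V('pppp').outE('friend'). \
    filter(__.hasId('testEdge').inV().hasId('dddd')). \
    fold(). \
    coalesce(__.unfold().property('testedder', 1111),
             __.addE('friend').from_(__.V('pppp')).to(__.V('dddd'))). \
    iterate()

conn.close()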

Related

Why are the inputs to my guess_nonlinear() all 1s?

The N2 diagram for my full problem is below.
The N2 diagram for the coupled portion of the problem is below.
I have a DirectSolver handling the coupling between LLTForces and ImplicitLiftingLine, and an LNBGS solver handling the coupling between LiftingLineGroup and TestCL.
The gist for the problem is here: https://gist.github.com/eufren/31c0e569ed703b2aea3e2ef5360610f7
I have implemented guess_nonlinear() on ImplicitLiftingLine, which should use various outputs from LLTGeometry to give a good initial guess for the vortex strengths based on a linearised form of the governing equations.
def guess_nonlinear(self, inputs, outputs, resids):
    freestream_unit_vector = inputs['freestream_unit_vector']
    freestream_velocity = inputs['freestream_velocity']
    n = inputs['normal_vectors']
    A = inputs['surface_areas']
    l = inputs['bound_vortices']
    ic_tot = inputs['influence_coefficients_total']

    v_inf = freestream_velocity
    v_inf_vec = v_inf * freestream_unit_vector

    lin_numerator = np.pi * v_inf * A * np.sum(n * v_inf_vec, axis=1)
    lin_denominator = (np.linalg.norm(np.cross(v_inf_vec, l), axis=1)
                       - np.pi * v_inf * A * np.sum(np.sum(n * ic_tot, axis=2), axis=1))
    lin_vtx_str = lin_numerator / lin_denominator

    outputs['vortex_strengths'] = lin_vtx_str
However, when the problem is run for the first time, any inputs not explicitly set with p.set_val() are all 1s. This causes guess_nonlinear() to give a bad output, and so the system fails to converge.
As far as I can tell, the execution order for the LLT group is correct, and the geometry components should be being executed before the implicit component. I'm confused as to why this doesn't seem to actually be happening when the code is run, and instead these inputs are taking their default values.
What do I need to change to get this to work properly? Additionally, I've found difficulty in getting LNBGS to converge (hence adding guess_nonlinear()) during optimisation - only DirectSolver gets all the way through the optimisation without issues, but it's very slow for large numbers of LLT nodes. How can I improve the linear and nonlinear solver selection, and improve the reliability of the iterative solver?
Note: Thanks for providing a testable example. It made figuring out the answer to your question a lot simpler. Your problem was a bit subtle, and I would not have been able to give a good answer without runnable code.
Your first question: "Why are all the inputs 1"
"Short" Answer
You have put the nonlinear solver too high in the model hierarchy, which then included a key precursor component that computed your input values. By moving the solver down to a lower level of the model, I was able to ensure that the precursor component (LLTGeometry) ran and had valid outputs before you got to the guess_nonlinear of the implicit component.
Here is what you had (notice that the implicit solver's scope included LLTGeometry even though the data cycle does not require that component).
I moved both the nonlinear solver and the linear solver down into the LLTCycle group, which then allows the LLTGeometry component to execute before getting to the nonlinear solver and the guess_nonlinear step.
My fix is only partially correct, since there is a secondary cycle from the TestCL component that also needs a solver and does not have one. However, that cycle still does not involve the LLTGeometry group. So the fully correct fix is to restructure your model to run geometry first, and then put the LLTCycle and TestCL groups together so you can run a solver over just them. That was a bit more hacking than I wanted to do on your test problem, but you can see the general idea from the adjusted N2 above.
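
To make the solver-placement point concrete, here is a small self-contained sketch. The components, variable names and equations are toy stand-ins (not the ones from the gist); the only thing that matters is where the solvers are attached, so that the geometry-like component has already run by the time guess_nonlinear fires.

import openmdao.api as om

# Toy stand-ins for LLTGeometry / LLTForces / ImplicitLiftingLine.
class Geometry(om.ExplicitComponent):
    def setup(self):
        self.add_output('geom_data', val=2.0)

    def compute(self, inputs, outputs):
        outputs['geom_data'] = 2.0

class Forces(om.ExplicitComponent):
    def setup(self):
        self.add_input('geom_data', val=1.0)
        self.add_input('strength', val=1.0)
        self.add_output('force', val=1.0)
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        outputs['force'] = inputs['geom_data'] * inputs['strength']

class ImplicitLine(om.ImplicitComponent):
    def setup(self):
        self.add_input('geom_data', val=1.0)
        self.add_input('force', val=1.0)
        self.add_output('strength', val=1.0)
        self.declare_partials('*', '*', method='fd')

    def apply_nonlinear(self, inputs, outputs, residuals):
        residuals['strength'] = outputs['strength'] - (1.0 + 0.3 * inputs['force'])

    def solve_nonlinear(self, inputs, outputs):
        outputs['strength'] = 1.0 + 0.3 * inputs['force']

    def guess_nonlinear(self, inputs, outputs, residuals):
        # Uses only the geometry output, which is valid here *because* the
        # iterative solver (and hence this call) sits below Geometry.
        outputs['strength'] = 1.0 / (1.0 - 0.3 * inputs['geom_data'])

model = om.Group()
model.add_subsystem('geometry', Geometry(), promotes=['*'])
cycle = model.add_subsystem('cycle', om.Group(), promotes=['*'])
cycle.add_subsystem('forces', Forces(), promotes=['*'])
cycle.add_subsystem('implicit_line', ImplicitLine(), promotes=['*'])
# The solvers live on the cycle, not on the top-level group, so Geometry has
# already executed (and its outputs transferred) before guess_nonlinear runs.
cycle.nonlinear_solver = om.NonlinearBlockGS(maxiter=50)
cycle.linear_solver = om.DirectSolver()

prob = om.Problem(model)
prob.setup()
prob.run_model()
print(prob.get_val('strength'))  # ~2.5 for this toy cycle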
Long Answer
The guess_nonlinear sequence in OpenMDAO does NOT run the compute method of explicit components or of groups. It follows the execution hierarchy, and calls any guess_nonlinear that it finds. So that means that any explicit components you have in your model will NOT get executed, their outputs will not get updated with computed values, and those computed values will not get passed to the inputs of downstream components.
Things get a little tricky when you have deep model hierarchies. The guess_nonlinear method is called as the first step in the nonlinear solver process. If you have a NonLinearRunOnce solver at the top level, it will follow the compute chain down the line calling compute or solve_nonlinear on each child and doing a data transfer after each one. If one of those children happens to be a group with a nonlinear solver, then that solver will call guess_nonlinear on its children (grandchildren of the top group with the NonLinearRunOnce solver) as the first step. So any outputs that were computed by the siblings of this group will be valid, but none of the outputs from the grandchild level will have been computed yet.
You may be wondering why guess_nonlinear doesn't just call compute for any explicit components. There is a difficult-to-balance trade-off here. If you assume that all explicit components are very cheap to run, then it might make sense to run the compute methods --- or it might not. A lot depends on the cyclic data structure. If some early component in the group needs guesses from the later one, then running its compute isn't going to help you much at all. Perhaps more importantly though, not all explicit components are cheap to run. You might have a very expensive computation, and calling compute as part of the guess process would be way too costly.
The compromise here, if you need some kind of top level guess process, is that you can implement guess_nonlinear at the group level. It's less common to do, but it gives you total control over what happens. You can call whatever you need to call in whatever sequence.
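
Reusing the toy components from the sketch above, a group-level guess might look like the following. Again, the names and the seed value are purely illustrative; the point is that the group that owns the solver decides what happens in its own guess_nonlinear.

class GuessingCycle(om.Group):
    def setup(self):
        self.add_subsystem('forces', Forces(), promotes=['*'])
        self.add_subsystem('implicit_line', ImplicitLine(), promotes=['*'])
        self.nonlinear_solver = om.NonlinearBlockGS(maxiter=50)
        self.linear_solver = om.DirectSolver()

    def guess_nonlinear(self, inputs, outputs, residuals):
        # At the group level you control the guess logic entirely: cheap
        # closed-form estimates, a surrogate call, whatever you need.
        # Here we simply seed the promoted implicit state.
        outputs['strength'] = 2.0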
So the absolute key thing to remember is that the only data you have available to you when guess_nonlinear is called is data that was computed before your containing solver was executed. That means anything that was computed before you got to the model scope of the containing solver (not the scope of the component with the guess_nonlinear method itself).
Your second question: "How can I speed this up when the number of nodes gets large?"
It is not possible to give a generic answer to this one at all. I noticed that you have already specified sparse partial derivatives. That is a great start, but if it's still not fast enough for you then it means you're reaching the limits of what you can do with a DirectSolver. You note that this solver is the only one that gets you through the optimization without issues, which I will take to mean that ScipyKrylov and PetscKrylov are not converging the linear system well for you --- at least not by themselves. That's not surprising, as Krylov solvers almost always require some kind of preconditioner... and this is why I can't offer a generic answer. Setting up efficient linear solvers for larger-scale compute is a tricky subject. If you look into the literature, you'll find some good suggestions. You can also study open source implementations like VSPAero for some tips.
Effectively, you've reached the limit of what simple linear solvers can offer you. From this point forward, OpenMDAO can help a bit by making it easier to implement some preconditioning, but you'll have to work through the math side yourself.
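
As a sketch of the kind of thing OpenMDAO makes easy (not a recommendation for this specific model): attach a cheap block preconditioner to a Krylov solver on the coupled group and tune from there. The group here is just a stand-in; whether any of this actually helps depends entirely on the structure of your problem.

import openmdao.api as om

cycle = om.Group()  # stand-in for the coupled LLT group
cycle.nonlinear_solver = om.NonlinearBlockGS(maxiter=100)
# Krylov solvers usually need preconditioning before they converge well;
# OpenMDAO lets you hang another linear solver off the precon attribute.
cycle.linear_solver = om.ScipyKrylov(maxiter=200, atol=1e-10)
cycle.linear_solver.precon = om.LinearBlockGS(maxiter=2)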

How do you represent a function call as an if condition statement in Sequence Diagram?

I've been drawing a sequence diagram of a module recently, while reverse engineering.
I encountered a control statement, and it looks like this:
if (func_A() == True)
{
    DoSomeThing();
}
else
{
    DoSomeThingElse();
}
The problem is how to draw the condition.
As I mentioned, it is reverse engineering; the code cannot be modified now.
I drew two diagrams, and I don't know which way is right.
The first one is this; I think it's wrong because it doesn't show the function call as a message from A to B.
This is the second; it shows a message for func_A.
What do you think is the right way to do this?
To complete the other answer: there is still a problem in the second proposal, because we do not know whether the guard [func_A() == True] reuses the value returned by the previous call or performs a second call. To avoid that ambiguity, add the explicit return message to your diagram.
Apart from that, do you know about activity diagrams? A sequence diagram is "just" an interaction, while an activity is a behavior and can be better suited.
It depends. If func_A is an operation defined in Object2, the second representation would be correct. The first does not tell where the operation is defined. Most likely (!) one would interpret func_A as an operation local to ObjectA, which is what your code seems to say. (Btw. you have two completely different object sets, A/B vs. 1/2, in your examples.) But that is uncertain. So the 2nd variant is more explicit (and correct).
In any case I advise not to overdo SDs with fragments, as "graphical programming" doesn't make things easier to read (my practical experience). They're excellent for showing message flows in various collaborations, but when it comes to conditions it gets messy very soon. A better way is to create different sub-diagrams, or even use pseudo code if there are too many nested if conditions. In many cases such if clauses are a good fit for state machines.

PDDL precondition not working correctly in the plan

I am working on a project in PDDL. The idea is to pick four balls and transfer them to the conveyor (defined in the goal). The simple pickup, move and drop actions work fine, but when I try to make it more complicated, e.g. by adding different poses for the robot and detecting the item before picking, the plan does not respect the precondition. For example, focus on the pick action: the correct pose is not followed. Any ideas regarding the mistake in the code? The final plan should have the correct pose for each action and detect the items one by one, not all at once.
link below:
http://editor.planning.domains/#read_session=BzTaNrk4dQ
faulty output:
https://i.stack.imgur.com/ubWS8.png
Likely something missing/wrong in the precondition. You have this for the pickup action:
(exists (?f - pose ?g - gripper)
(at-pose robotarm pregrasppose))
Note that you don't use the variables ?f and ?g at all in the fluent.
Thanks haz for going through the code.
I was able to debug it. The assignment in the precondition was incorrect. The correct way of assigning a value to a parameter is the following:
(= ?p findshirt)
In the above line you assign the value findshirt to the parameter ?p.

dojo.query "," (or) for combining multiple selections does not work on IE 7

I am using
dojo.query('input,select',myDiv)[0].focus();
to focus the first input element found in a div container.
This will work in Firefox, but not in IE 7.
IE 7 only takes the first query into consideration:
dojo.query('input,select')[0] will select the first input element,
even if a select element is first.
dojo.query('select,input')[0] will select the first select element,
even if an input element is first.
Does anybody know a workaround for this?
If I recall correctly, dojo.query does not necessarily guarantee "chronological" order within the NodeList it returns, especially for complex queries. This is generally due to the fact that for some browsers / in some scenarios, it does have to cobble multiple disparate result sets together, and trying to reorder this based on where each element is in the document would probably be far more of a performance hit than it's worth.
That said, off the cuff I'm not sure what to suggest as an alternative. It'd be easy enough to find the first of one OR the other separately, just not while looking for both within the same query.
If your form has some kind of consistent markup around your inputs (e.g. each field is inside, let's say, a div with class="field"), I suppose you could do something like this:
dojo.query('.field:first-child select, .field:first-child input')

MD5 code kata and BDD

I was thinking to implement MD5 as a code kata and wanted to use BDD to drive the design (I am a BDD newb).
However, the only test I can think of starting with is to pass in an empty string, and the simplest thing that will work is embedding the hash in my program and returning that.
The logical extension of this is that I end up embedding the hash in my solution for every test and switching on the input to decide what to return. Which of course will not result in a working MD5 program.
One of my difficulties is that there should only be one public function:
public static string MD5(byte[] input)
And I don't see how to test the internals.
Is my approach completely flawed or is MD5 unsuitable for BDD?
I believe you chose a pretty hard exercise for a BDD code-kata. The thing about code-kata, or what I've understood about it so far, is that you somehow have to see the problem in small incremental steps, so that you can perform these steps in red, green, refactor iterations.
For example, an exercise of finding an element's position inside an array might go like this (a minimal sketch follows these steps):
If array is empty, then position is 0, no matter the needle element
Write test. Implementation. Refactor
If array is not empty, and element does not exist, position is -1
Write test. Implementation. Refactor
If array is not empty, and element is the first in list, position is 1
Write test. Implementation. Refactor
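
As a minimal Python sketch (assuming a 1-based position and exactly the 0/-1 conventions listed above), the three iterations might converge on something like:

def find_position(haystack, needle):
    # Step 1: empty array -> 0, no matter the needle element.
    if not haystack:
        return 0
    # Step 3: return the 1-based position when the element is found.
    for position, element in enumerate(haystack, start=1):
        if element == needle:
            return position
    # Step 2: non-empty array, element not present -> -1.
    return -1

assert find_position([], 'x') == 0
assert find_position(['a', 'b'], 'z') == -1
assert find_position(['a', 'b'], 'a') == 1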
I don't really see how to break the MD5 algorithm into steps like that. But that may be because I'm not really an algorithm guy. If you better understand the steps involved in the MD5 algorithm, then you may have better chances.
It depends on what you mean by unsuitable... :-) It is suitable if you want to document a few examples that describe your implementation. It should also be possible to have the algorithm emerge from your specification if you add one more character for each test.
By adding a switch statement you're just trying to "cheat the system". Using BDD/TDD does not mean you have to implement stupid things. Also, the hardcoded hash values and the switch statement in your code are clear code smells and should be refactored away. That is how your algorithm should emerge: when you see the hard-coded values you first remove them (by actually calculating the value), and then you see that the branches are all the same, so you remove the switch statement.
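
To make that concrete, here is a small Python sketch (plain unittest rather than a BDD framework) of the "cheating" first pass described above. The expected strings are the well-known RFC 1321 test vectors; the hard-coded lookup is the smell that the refactor step then removes by computing the digest for real.

import unittest

def md5_hex(data: bytes) -> str:
    # Deliberately naive first pass: a lookup table instead of a real
    # calculation (Python's stand-in for "switching on the input").
    # The refactor step replaces this with an actual MD5 implementation.
    known_digests = {
        b"": "d41d8cd98f00b204e9800998ecf8427e",
        b"a": "0cc175b9c0f1b6a831c399e269772661",
    }
    return known_digests[data]

class Md5Spec(unittest.TestCase):
    def test_empty_input(self):
        self.assertEqual(md5_hex(b""), "d41d8cd98f00b204e9800998ecf8427e")

    def test_single_character(self):
        self.assertEqual(md5_hex(b"a"), "0cc175b9c0f1b6a831c399e269772661")

if __name__ == "__main__":
    unittest.main()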
Also, if your question is about finding good katas, I would recommend looking at the Kata catalogue.
