How to use Jump To when I need to evaluate a context variable condition for two different nodes at the same time - ibm-watson

I have one parent node. Based on the user input, I set a context variable at my application level (eligibility: yes or no) and pass it back. For the parent node I have two child nodes with the conditions $eligibility=="yes" and $eligibility=="no". So once the user's input to the parent node has been validated and the context variable has been passed back, I need to jump and evaluate the eligibility condition: if yes, go to one node; if no, go to the other. How can I do this?
I tried adding a node with the condition true, making these two nodes its children, and jumping to the true node, but it didn't work. How can we achieve this?

What #data_henrik has mentioned is a good way to set a context value and then switch to different flows depending on that value. But when you need to perform some logic in your application before setting that value in the context, it isn't suitable.
I had a requirement like this, so we sent a dummy text from our application once we had finished setting the value in the context after the parent node executed. Check out the images and the explanation after that.
We didn't use Jump To because we had to do some validation after the parent node, before moving forward in the Conversation. Using a jump would have let the Conversation move to the next node before we could set the value in the context.
Use case flow: once the user enters text matching the parent node's intent (in my case, #send-mail), I show the parent response, do some functional validation in my app, and add a value to the context. Then we send a dummy text "valid", which satisfies the #Valid intent and therefore moves to the next node in the flow. In that node we check the value in the context (which is already set by now) and show the appropriate response to the user.
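Here is a minimal sketch of what that application-side round trip could look like with the Watson Assistant (Conversation) V1 Python SDK. The workspace ID, service URL, the check_eligibility() helper and the "valid" dummy text are illustrative placeholders, not part of the original setup:

from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(version='2021-06-14',
                        authenticator=IAMAuthenticator('YOUR_APIKEY'))
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# 1) The user's utterance hits the parent node.
resp = assistant.message(workspace_id='YOUR_WORKSPACE_ID',
                         input={'text': 'check my eligibility'}).get_result()
context = resp['context']

# 2) Application-side validation, then store the result in the context.
context['eligibility'] = 'yes' if check_eligibility() else 'no'  # hypothetical helper

# 3) Send the dummy text so the dialog advances to the node that reads
#    $eligibility and picks the matching child branch.
resp = assistant.message(workspace_id='YOUR_WORKSPACE_ID',
                         input={'text': 'valid'},
                         context=context).get_result()
print(resp['output']['text'])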

You can set, within your first two test nodes ($testMe == true and $testMe == false), a temporary variable in the output JSON packet, i.e. "output": {"temp": "true"} or "false". Then you can jump to a new set of nodes and test the output.temp value, i.e. if output.temp == 'true' then do one thing, or if output.temp == 'false' then do the other.
The nice side effect of this approach is that the output.temp variable only lives for the current conversation turn, unlike context variables, which need to be removed/deleted explicitly.
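As an illustration (the response text and variable name are placeholders), the advanced JSON for each of the two test nodes might look roughly like this, with "temp" set to "true" in one node and "false" in the other:

{
  "output": {
    "text": {
      "values": ["Let me check that for you."]
    },
    "temp": "true"
  }
}

The follow-up nodes evaluated later in the same turn can then use the conditions output.temp == 'true' and output.temp == 'false'.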

Related

Adding a new entry to a struct holding a TArray as a member value doesn't update its entries

I am currently working on a character customization system where a HUD layout dynamically creates widgets based on the set of skins available for the selected character. A skin is represented as a struct called MaterialInstanceContainer and holds a TArray. Players can mix and match their selection according to the body parts they select. To achieve the final result, I want to create a TMap<string, MaterialInstanceContainer> so that I can map each body part available for selection to the individual material instance targeting that same body part.
ISSUE: When I foreach over the collection of material instances inside my container, I do a string comparison and, if it matches, break the struct to access the material instance array and Add to it. However, at the very end of the process, the length of the array inside the material container is still zero.
How can I add a new entry to the array that my material container struct holds?
Thanks!
The issue here is actually pretty straightforward: in Blueprints, when you Find a member of a Map you do not get it by reference; you get a copy.
This is exactly what happens at the end of your nested loop: you get a copy, you add an item to it, and when the next iteration kicks in, the copy is discarded.
Reproducing this on my side returns exactly the same result.
The fix for that would normally be easy: after editing the copy, overwrite the Map member with that copy (via the 'Add' node).
But in your case it is a bit trickier. You cannot just plug in the same Break Struct/Array node you used before, because that would run the whole Find sequence again and create yet another copy.
If you are confused: from Unreal's point of view, that Blueprint effectively works like this.
So you have to store the struct in a local variable first, perform any operations on it, and once everything is done, overwrite the Map member with the locally stored copy. As a result, the Map member gets overwritten every time and everything works as it should.
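For reference, here is a rough native C++ equivalent of that pattern using Unreal's container types. The struct and function names are hypothetical, and in native C++ TMap::Find already returns a pointer, so the copy-and-re-Add step is really only needed on the Blueprint side:

#include "CoreMinimal.h"
#include "Materials/MaterialInstance.h"

struct FMaterialInstanceContainer
{
    TArray<UMaterialInstance*> Instances;
};

void AddInstanceForBodyPart(TMap<FString, FMaterialInstanceContainer>& Skins,
                            const FString& BodyPart,
                            UMaterialInstance* NewInstance)
{
    // Blueprint-style approach: take a copy of the map value...
    FMaterialInstanceContainer Container = Skins.FindRef(BodyPart);
    Container.Instances.Add(NewInstance);

    // ...and then write the modified copy back (the 'Add' node in Blueprints).
    Skins.Add(BodyPart, Container);
}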
Well, almost... For some reason your code lives in a Macro. I think you will have to change it to a Function to be able to create a local struct variable, but that shouldn't be a problem in this specific case, because there is no logic in your code that actually needs to stay in a macro.
Hope it helps.

What is the purpose of some fields in DSL in Dasha?

I want to know what the following fields are for:
node
do
digression
disable
goto
next
transitions
set
exit
node call_reason
{
    do
    {
        digression disable sayHi;
        goto next;
    }
    transitions
    {
        next: goto how_are_you;
    }
}
I suppose you are asking this question because you are a little bit confused by the syntax, so I'll try to make it clear.
Nodes and Transitions
DashaScript is the language for describing automated conversations. Basically, any conversation script consists of
nodes - the states of your conversation (please see the node doc)
transitions - relations between nodes, described by the conditions for switching from the current node to another. There are three different kinds of transitions, e.g. the instant transition used in your example (please see the transitions doc).
In some sense, a scripted conversation can be thought of as a graph: nodes and transitions are its vertices and edges, respectively.
Hence, nodes and transitions define the structure of your conversation script.
Every node has a do section where you specify the actions and instructions you want performed in that particular node.
A node may also have a transitions section, which specifies the conditions for switching from the current state to another.
Every event transition (such as a transition on an event or a timer transition) specified in this section has the following syntax: <transition_name>: goto <node_name> on <switching_condition>.
Instant transitions (like the one used in your code) have no condition: <transition_name>: goto <node_name>. To execute such a transition, it must be invoked in the do section of the current node with the goto instruction.
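For illustration, a node combining a spoken prompt with an event transition might look roughly like this; the node names, the phrase, and the #sayText / #messageHasIntent built-ins are assumed here from the Dasha docs, not taken from your script:

node how_are_you
{
    do
    {
        #sayText("How are you doing today?");
        wait *; // wait for a transition condition to trigger
    }
    transitions
    {
        // event transition: fires when the user's reply matches the intent
        positive: goto goodbye on #messageHasIntent("positive");
    }
}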
There are also special nodes that can be visited from any state; these are called digressions (see the digressions doc). They are used to handle quick reactions in your conversation and then return to the main branch. To control digressions, there is a mechanism for enabling/disabling them (see the digression control doc).
So, in your example, the node named call_reason has a do section where you disable the sayHi digression node and then execute the instant transition named next.
All the DashaScript entities mentioned above are described in the program structure docs. I recommend checking them out, since there are more important entities you might need to know about.
Set
set is the instruction used to assign a value to a variable. Example:
node some_node
{
    do
    {
        var some_variable: number = 1;
        set some_variable = 2; // now some_variable has the value 2
    }
}
Exit
exit is the instruction that interrupts (ends) the dialog.
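For example, a final node might look like this (the node name, the phrase and the #sayText built-in are illustrative):

node goodbye
{
    do
    {
        #sayText("Thank you for your time, goodbye!");
        exit;
    }
}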

How do I select whether the routine continues based on the participant's response?

I want to create an experiment in PsychoPy Builder that conditionally shows a second routine to participants based on their keyboard response.
In the task, I have a loop that first goes through a routine where participants have three response options ('left', 'right', 'down'). Only if they select 'left', regardless of the correct answer, should they see a second routine that asks a follow-up question. The loop should then restart with routine 1 each time.
I've tried using this bit of code in the "Begin Experiment" section:
if response.key == 'left':
    continueRoutine = True
elif response.key != 'left':
    continueRoutine = False
But here I get an error saying response.key is not defined.
Assuming your keyboard component is actually called response, the attribute you are looking for is called response.keys. It is pluralised as it returns a list rather than a single value. This is because it is capable of storing multiple keypresses. Even if you only specify a single response, it will still be returned as a list containing just that single response (e.g. ['left'] rather than 'left'). So you either need to extract just one element from that list (e.g. response.keys[0]) and test against that, or use a construction like if 'left' in response.keys to check inside the list.
Secondly, you don't need a check that assigns True to continueRoutine, as it defaults to True at the beginning of each routine; only setting it to False has any effect. So you could simply do something like this:
if 'left' not in response.keys:
    continueRoutine = False
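In Builder, a likely place for this check (assuming your keyboard component really is called response) is the "Begin Routine" tab of a code component inside the follow-up routine, since code in "Begin Experiment" runs before any response exists:

# Begin Routine tab of a code component in the follow-up routine
if not response.keys or 'left' not in response.keys:
    continueRoutine = False  # skip the follow-up question for non-'left' responses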
Lastly, for PsychoPy-specific questions, you might get better support via the dedicated forum at https://discourse.psychopy.org as it allows for more to-and-fro discussion than the single question/answer structure here at SO.

Test for corrupted list

Suppose I receive a list in a volatile environment, where the tail element is only partially filled with accessible items; further, skipping over, deleting, or dropping the element is a perfectly adequate solution.
So,
next->A // is inaccessible
next->B // is accessible
if (next->A) // evaluates to true
is there a method to test for this and skip/delete the list element?
C does not provide a built-in way to test whether a memory location is accessible. You cannot check whether next->A is available, for the same reason that you cannot check whether a pointer is dangling.
A fix is to add a level of indirection: make a list of "envelope" objects that are always accessible. Each envelope holds a pointer to the actual object, along with a flag indicating the object's accessibility. This way the provider of the list can manipulate the flag independently of the data object itself, without disturbing the structure of the list.
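A minimal sketch of that indirection, with illustrative names:

#include <stdbool.h>
#include <stddef.h>

struct payload;                 /* the actual data object, may become inaccessible */

struct envelope {
    struct payload *data;       /* pointer to the real object                      */
    bool            accessible; /* maintained by the provider of the list          */
    struct envelope *next;
};

/* Consumer side: skip envelopes whose payload is flagged inaccessible
   instead of dereferencing a possibly dead pointer. */
struct envelope *next_accessible(struct envelope *e)
{
    while (e != NULL && !e->accessible)
        e = e->next;
    return e;
}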

Flink trigger on a custom window

I'm trying to evaluate Apache Flink for a use case we currently run in production using custom code.
So let's say there's a stream of events, each containing a specific attribute X, which is a continuously increasing integer: a bunch of contiguous events have this attribute set to N, then the next batch has it set to N+1, and so on.
I want to break the stream into windows of events with the same value of X and then do some computations on each separately.
So I define a GlobalWindow and a custom Trigger, where in the onElement method I check the attribute of each incoming element against the saved value of the current X (from a state variable). If they differ, I conclude that we've accumulated all the events with X = CURRENT, so it's time to do the computation and increase the X value in the state.
The problem with this approach is that the element from the next logical batch (with X = CURRENT+1) has already been consumed, but it's not part of the previous batch.
Is there a way to put it back into the stream somehow so that it is properly accounted for in the next batch?
Or maybe my approach is entirely wrong and there's an easier way to achieve what I need?
Thank you.
I think you are on the right track.
A Trigger specifies when a window can be processed and when the results for a window can be emitted.
The WindowAssigner is the part that decides which window an element is assigned to. So I would say you also need to provide a custom WindowAssigner implementation that assigns the same window to all elements with an equal value of X.
A more idiomatic way to do this with Flink would be to use stream.keyBy(X).window(...). The keyBy(X) takes care of grouping elements by their particular value for X. You then apply any sort of window you like. In your case a SessionWindow may be a good choice. It will fire for each key after that key hasn't been seen for some configurable period of time.
This approach will also be much more robust with regard to unordered data, which you must always assume in a stream processing system.
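A minimal sketch of that keyBy + session window approach; the Event type, its getX() accessor, the 10 second gap and the merge step are placeholders, and processing-time semantics are assumed here:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class BatchByX {
    public static DataStream<Event> batchByX(DataStream<Event> events) {
        return events
            // group elements by their X attribute
            .keyBy(Event::getX)
            // a session window for a key closes once no element with that X
            // has been seen for 10 seconds
            .window(ProcessingTimeSessionWindows.withGap(Time.seconds(10)))
            // combine each batch; .process(...) allows arbitrary per-batch logic
            .reduce((a, b) -> a.merge(b));
    }
}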
