Nintex Workflow was unable to interpret your response - Lazy approval

In our situation with lazy approval, the user responded with "Yes I approve" instead of one of the single-word responses ("yes", "ok", "approve", "approved"), and got the following return mail from the system:
<<< Start>>>
-----Original Message-----
From: exampleWebsite@exampleDomain.com
Sent: exampleUser1 12:55 PM
To: exampleUser2
Subject: RE: ACTION: something Approval Required for something else
Nintex Workflow was unable to interpret your response. Please try again with a clear indication of your approval outcome.
Valid 'approved' responses are:
approve
approved
ok
yes
Valid 'declined' responses are:
decline
declined
no
reject
rejected
Yes I approve.
My question is: is this the default behavior of lazy approval, or did something else go wrong?

Lazy Approval requires the response to be on a single line by itself.
In this case, if the phrase "Yes I approve" had been one of the valid approval responses, the result would have been an approval.

Related

Chaining intents doesn't work as expected

Say my custom skill is named Portal Entry. Here's what I need to do:
User: "Alexa, open Portal Entry"
Alexa: "Ok, would you like to perform task ABC or do XYZ?"
User: "Perform task ABC"
Alexa: "Ok, performing task ABC"
[...] executes function (couple of HTTP requests) [...]
when done:
Alexa: "Task ABC was performed successfully"
If at step 3 the user said "do XYZ", it should say "Ok, doing XYZ", execute a different function, and when done say "XYZ is complete". Pretty simple, right?
So step 4 is just a confirmation, like "Ok, got you, I'm going to do what you asked." To make Alexa jump from IntentFour to IntentFive without the user saying anything, I found out I should use chaining intents (as a way to trigger IntentFive automatically after IntentFour is resolved). I've followed Amazon's tutorial on chaining intents and added the following to IntentFour:
handle(handlerInput) {
    const speakOutput = 'Ok, performing Task ABC!';
    return handlerInput.responseBuilder
        .addDelegateDirective({
            name: 'IntentFive',
            confirmationStatus: 'NONE',
            slots: {}
        })
        .speak(speakOutput)
        .getResponse();
}
In my IntentFive I have this:
handle(handlerInput) {
    // executeFunction();
    const speakOutput = 'Task ABC was performed successfully';
    return handlerInput.responseBuilder
        .speak(speakOutput)
        .getResponse();
}
So as it is now, it should say 'Ok, performing Task ABC' and then say 'Task ABC was performed successfully'.
Problem: it just jumps to IntentFive without saying what it's supposed to in IntentFour. So when I say 'Perform task ABC' it says 'Task ABC was performed successfully'.
What am I missing? What's wrong with the implementation described? Is there a better way to do this?
Note: I don't know if it's important, but Amazon's tutorial says a dialog model is required ("If you have at least one intent with a required slot, or you've enabled auto delegation, your skill has a dialog model"), so I added a FooIntent with a required slot just to make it work; I'm not actually using it.

Microsoft Graph API: how to tell if a message is a reply

I am using the Graph API to retrieve mail from mail folders. For example, when I get a mail, I edit the subject line and store the conversation id for future use. When I get a reply in the same email chain, I get a different conversation id. How do I handle this? I need to identify the reply mail.
"subject": "Test",
"conversationId": "AAQkADU1YWM2MjMyLTVkOGQtNDdiMy05YWM4LTE4NTNlYzg1ZWRiNwAQADofdbq8_JtJkY8M5wnunlU=",
reply msg:
"subject": "Re: Test1",
"conversationId": "AAQkADU1YWM2MjMyLTVkOGQtNDdiMy05YWM4LTE4NTNlYzg1ZWRiNwAQAHu3pWtxNmBFjdfyjYaVGKc=",
I need to find this ts reply message.
I'd suggest you use the In-Reply-To header (https://wesmorgan.blogspot.com/2012/07/understanding-email-headers-part-ii.html); that way you can relate multiple replies (to the same replied-to message) in a message chain. You can either get the In-Reply-To header by requesting the internet message headers (https://learn.microsoft.com/en-us/graph/api/resources/internetmessageheader?view=graph-rest-1.0), which returns all the headers, or you can request the extended property to get just that one property, e.g.
https://graph.microsoft.com/v1.0/users('user#domain.com')/MailFolders('Inbox')/messages/?$select=ReceivedDateTime,Sender,Subject,IsRead,inferenceClassification,InternetMessageId,parentFolderId,hasAttachments,webLink&$Top=10&$expand=SingleValueExtendedProperties($filter=(Id%20eq%20'String%200x1042'))
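As a rough sketch of the matching step (assuming you stored the internetMessageId of the original message when you processed it, that you're on Node 18+ for the global fetch, and that a getAccessToken() OAuth helper already exists in your code), you could pull a message's internet headers and compare its In-Reply-To value against the stored id:
// Minimal sketch: check whether an incoming message is a reply to one we stored earlier.
// getAccessToken() is a hypothetical helper; OAuth is assumed to be handled elsewhere.
async function isReplyToStoredMessage(messageId, storedInternetMessageId) {
    const token = await getAccessToken();
    const url = `https://graph.microsoft.com/v1.0/me/messages/${messageId}`
        + '?$select=subject,internetMessageHeaders';
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    const msg = await res.json();
    // The reply's In-Reply-To header carries the internetMessageId of the message it answers.
    return (msg.internetMessageHeaders || []).some(h =>
        h.name.toLowerCase() === 'in-reply-to' && h.value === storedInternetMessageId);
}
Comparing In-Reply-To (or the References header, if you need the whole chain) against the stored internetMessageId is more reliable than comparing subjects or conversation ids.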

Errors with multi turn dialog in alexa skills

I have created a skill named "BuyDog", and its invocation name is "dog app".
That should mean I can use the intents defined inside it only after the invocation name is heard (is that correct?)
Then I defined the intents with slots as:
"what is {dog} price."
"Tell me the price of {dog}."
where the slot {dog} is of slot type "DogType". I have marked this slot as required to fulfill the intent.
Then I added the endpoint to an AWS Lambda function, where I used the blueprint code of the fact skills project in Node.js and made a few minor changes just to see it working.
const GET_DOG_PRICE_MESSAGE = "Here's your pricing: ";
const data = [
    'You need to pay $2000.',
    'You need to pay Rs2000.',
    'You need to pay $5000.',
    'You need to pay INR 3000.',
];
const handlers = {
    // some handlers.......................
    'DogIntent': function () {
        const factArr = data;
        const factIndex = Math.floor(Math.random() * factArr.length);
        const randomFact = factArr[factIndex];
        const speechOutput = GET_DOG_PRICE_MESSAGE + randomFact;
    },
    // some handlers.......................
};
As per the above code, I was expecting that when
I say: "Alexa, open dog app"
it should just be ready to listen for the intent "what is {dog} price." and the other one. Instead, it says a random string from the data[] array in the Node.js code. I was expecting that response only after the intent was spoken, since the slot is required for the intent to complete.
And when
I say: "open the dog app and tell me the price of XXXX."
it asks "which breed?" (that is my defined prompt), but then it just works and shows the pricing:
Alexa says: "Here's your pricing: You need to pay $5000."
(or another value from the data array) for any XXXX (i.e. whether or not it is a dog type).
Why is Alexa not checking whether the word is in the slot's value set?
And when
I say: "open the dog bark",
I expected Alexa to not understand the question, but it gave me a fact about barking. Why? How did that happen?
Does Alexa have a default set of skills, like searching Google/Amazon, etc.?
I am so confused. Please help me understand what is going on.
Without having your full code to see exactly what is happening and provide code answers, I hope an explanation of your problems/questions will point you in the right direction.
1. Launching Skill
I say: "Alexa open dog app"
It should just be ready to listen to the intent...
You are expecting Alexa to just listen, but actually, Alexa opens your skill and is expecting you to have a generic welcome response at this point. Alexa will send a Launch Request to your Lambda. This is different from an IntentRequest and so you can determine this by checking request.type. Usually found with:
this.event.request.type === 'LaunchRequest'
I suggest you add some logging to your Lambda, and use CloudWatch to see the incoming request from Alexa:
console.log("ALEXA REQUEST= " + event)
2. Slot Value Recognition
I say: "open the dog app and Tell me the price of XXXX."
Why is alexa not confirming the word is in slot set or not?
Alexa does not limit a slot to the slot values set in the slotType. The values you give the slotType are used as a guide, but other values are also accepted.
It is up to you, in your Lambda Function, to validate those slot values to make sure they are set to a value you accept. There are many ways to do this, so just start by detecting what the slot has been filled with. Usually found with:
this.event.request.intent.slots.{slotName}.value;
If you choose to set up synonyms in the slotType, then Alexa will also provide her recommended slot value resolutions. For example, you could include "Rotty" as a synonym for "Rottweiler"; Alexa will fill the slot with "Rotty" but also suggest you resolve it to "Rottweiler".
var resolutionsArray = this.event.request.intent.slots.{slotName}.resolutions.resolutionsPerAuthority;
Again, use console.log and CloudWatch to view the slot values that Alexa accepts and fills.
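A rough sketch of that validation inside the DogIntent handler, assuming a hypothetical list of breeds you actually price (the index mapping into the data array from the question is only for illustration):
'DogIntent': function () {
    // Sketch: only answer for breeds you support (hypothetical list); otherwise re-prompt.
    const acceptedBreeds = ['labrador', 'rottweiler', 'beagle', 'pug'];
    const dogSlot = this.event.request.intent.slots.dog.value;
    if (dogSlot && acceptedBreeds.includes(dogSlot.toLowerCase())) {
        this.emit(':tell', GET_DOG_PRICE_MESSAGE + data[acceptedBreeds.indexOf(dogSlot.toLowerCase())]);
    } else {
        this.emit(':ask', 'Sorry, I only know a few breeds. Which breed do you want a price for?');
    }
},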
3. Purposefully Fail to Launch Skill
I say: "open the dog bark".
I expected alexa to not understand the question but it gave me a fact about barking.
You must be doing this outside of your skill, where Alexa will take any input and try to match it to an enabled skill, or handle it with her best guess of default abilities.
Alexa does have default built-in abilities (not skills really) to answer general questions, and just be fun and friendly. You can see what she can do on her own here: Alexa - Things To Try
So my guess is, Alexa figured you were asking something about dog barks, and so provided an answer. You can try to ask her "What is a dog bark" and see if she responds with the exact same as "open the dog bark", just to confirm these suspicions.
To really understand developing an Alexa skill you should spend the time to get very familiar with this documentation:
Alexa Request and Response JSON Formats
You didn't post a lot of your code so it's hard to tell exactly what you meant, but usually to handle incomplete events you can have an incomplete event handler like this:
const IncompleteDogsIntentHandler = {
    // Occurs when the required slots are not filled
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'IntentRequest'
            && handlerInput.requestEnvelope.request.intent.name === 'DogIntent'
            && handlerInput.requestEnvelope.request.dialogState !== 'COMPLETED';
    },
    async handle(handlerInput) {
        return handlerInput.responseBuilder
            .addDelegateDirective(handlerInput.requestEnvelope.request.intent)
            .getResponse();
    }
};
You add this handler right above your actual handler, usually in the index.js file of your Lambda.
This might not fix all your issues, but it will help you handle the event when a user doesn't mention a dog.
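Concretely, "right above" refers to the registration order in SDK v2; a sketch, where DogsIntentHandler is a hypothetical name for your actual completed-dialog handler:
// Sketch: the incomplete-dialog handler must be registered before the completed one,
// because the SDK picks the first handler whose canHandle() returns true.
const Alexa = require('ask-sdk-core');
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        IncompleteDogsIntentHandler, // delegates back to Alexa until required slots are filled
        DogsIntentHandler            // hypothetical handler for the completed dialog
    )
    .lambda();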

How to develop Alexa to speak the latest response again

In detail, an example:
User: asks about cricket news.
Alexa: reads out the news.
If the user says "come again" or "replay":
User: "Come again."
Alexa: must read out again what it spoke earlier.
How do I handle this situation using webhooks?
Thanks in advance.
You can make use of sessionAttributes to keep track of the last response that Alexa spoke. Whenever you return a response, just store the speech and reprompt in sessionAttributes; whenever a ComeAgainIntent is triggered, take the value from the sessionAttributes and respond accordingly.
Ex:
...
"sessionAttributes": {
    "lastResponse": {
        "speech": "This was my last speech",
        "reprompt": "This was my last reprompt"
    }
}
...
Every time, before building the response, store it as lastResponse in the session attributes, and write a ComeAgainIntent (or use the built-in AMAZON.RepeatIntent) that repeats the response by reading lastResponse from the session attributes.
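If you are on the Node.js ASK SDK v2 rather than a raw webhook, the same idea looks roughly like this (ComeAgainIntent and the wording are placeholders):
// Sketch: save the outgoing speech so a repeat intent can replay it later.
function saveLastResponse(handlerInput, speech, reprompt) {
    const attributes = handlerInput.attributesManager.getSessionAttributes();
    attributes.lastResponse = { speech, reprompt };
    handlerInput.attributesManager.setSessionAttributes(attributes);
}

// Sketch: handler for a ComeAgainIntent (or AMAZON.RepeatIntent) that replays the last response.
const ComeAgainIntentHandler = {
    canHandle(handlerInput) {
        return handlerInput.requestEnvelope.request.type === 'IntentRequest'
            && handlerInput.requestEnvelope.request.intent.name === 'ComeAgainIntent';
    },
    handle(handlerInput) {
        const { lastResponse } = handlerInput.attributesManager.getSessionAttributes();
        const speech = lastResponse ? lastResponse.speech : 'I have nothing to repeat yet.';
        const reprompt = lastResponse ? lastResponse.reprompt : speech;
        return handlerInput.responseBuilder
            .speak(speech)
            .reprompt(reprompt)
            .getResponse();
    }
};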

PayPal Pay Now and returned variables

I am making a PayPal Pay Now button for my website to add credits that can be used.
I understand the whole Buy Now and IPN flow, but what I would like is that when the user finishes paying, the IPN should be able to get the username so that I can add the credits to the database.
Is there a way to send the username together with the transaction so that it is returned, too? Thank you.
Please do not rely on the notify URL.
You can send a 'custom' parameter in your dictionary object to PayPal, as listed in the PayPal IPN docs.
paypal_dict = {
    "business": "yourpaypalemail@example.com",
    "amount": "10000000.00",
    "item_name": "name of the item",
    "invoice": "unique-invoice-id",
    "notify_url": "http://www.example.com/your-ipn-location/",
    "return_url": "http://www.example.com/your-return-location/",
    "cancel_return": "http://www.example.com/your-cancel-location/",
    "custom": request.user.username,  # or you can use request.user.pk
}
The 'custom' value will be saved on PayPal's server with the transaction and will be in your IPN object. This is a much better option than having payments with no customer identified.
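On the receiving side, here is a rough sketch of an IPN listener that reads the custom field back out. The snippet above looks like Django, but the same flow in Node/Express (the endpoint path, port, and addCreditsFor() helper are assumptions here) would be roughly:
// Sketch: minimal Express IPN listener (Node 18+ for fetch); verify the message, then credit the user.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/your-ipn-location/', async (req, res) => {
    res.sendStatus(200); // acknowledge quickly; PayPal retries unacknowledged IPNs

    // Echo the message back to PayPal with cmd=_notify-validate to verify it is genuine.
    // Use https://ipnpb.sandbox.paypal.com/cgi-bin/webscr while testing in the sandbox.
    const body = new URLSearchParams({ cmd: '_notify-validate', ...req.body }).toString();
    const check = await fetch('https://ipnpb.paypal.com/cgi-bin/webscr', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body,
    });

    if ((await check.text()) === 'VERIFIED' && req.body.payment_status === 'Completed') {
        const username = req.body.custom; // the value you sent in the "custom" field
        // addCreditsFor(username, req.body.mc_gross); // hypothetical database helper
    }
});

app.listen(3000);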
In the button params, set the notify_url variable to a URL of at most 255 characters.
PayPal will HTTP POST the IPN details (payment_status, payment_date, etc.) to this URL.
When creating the button, add params to this URL, such as http://yourwebsite.com?username_for_credits=john%20doe.
When PayPal makes the IPN HTTP POST, you will get back the username you want.
