Test CanFulfillIntentRequest in the Alexa mobile app or on an Echo device during development - alexa

I'm creating an Alexa skill in which I want to trigger all my commands without an invocation name. I have implemented CanFulfillIntentRequest by following this guide (https://developer.amazon.com/en-US/docs/alexa/custom-skills/implement-canfulfillintentrequest-for-name-free-interaction.html#invoke-and-test-the-skill) and tested it from the simulator using a JSON file.
Now I want to test this in the mobile app environment.
How do I test this?
Is the only way to submit my skill and test this feature in live mode, or is there another way to test it?
#name-free-interaction

The only way to test a skill that implements CanFulfillIntentRequest and is not yet live is to simulate the request by crafting one and sending it to your skill.
Create a new .json file containing the input JSON with the request type set to CanFulfillIntentRequest.
The following is a sample .json file for the request; substitute the appropriate values for your skill. Because you cannot test CanFulfillIntentRequest with an Alexa-enabled device, the purpose of this file is to duplicate the content of an actual CanFulfillIntentRequest from Alexa for testing with the ASK CLI or in the Alexa Simulator.
{
  "session": {
    "new": true,
    "sessionId": "SessionId.[unique-value-here]",
    "application": {
      "applicationId": "amzn1.ask.skill.[unique-value-here]"
    },
    "attributes": {
      "key": "string value"
    },
    "user": {
      "userId": "amzn1.ask.account.[unique-value-here]"
    }
  },
  "request": {
    "type": "CanFulfillIntentRequest",
    "requestId": "EdwRequestId.[unique-value-here]",
    "intent": {
      "name": "MyNameIsIntent",
      "slots": {
        "name": {
          "name": "name",
          "value": "Jeff"
        }
      }
    },
    "locale": "en-US",
    "timestamp": "2017-10-03T22:02:29Z"
  },
  "context": {
    "AudioPlayer": {
      "playerActivity": "IDLE"
    },
    "System": {
      "application": {
        "applicationId": "amzn1.ask.skill.[unique-value-here]"
      },
      "user": {
        "userId": "amzn1.ask.account.[unique-value-here]"
      },
      "device": {
        "supportedInterfaces": {}
      }
    }
  },
  "version": "1.0"
}
More info: https://developer.amazon.com/en-US/docs/alexa/custom-skills/implement-canfulfillintentrequest-for-name-free-interaction.html#create-the-json-for-testing-your-skill
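For reference, the skill-side half of this flow is a handler that answers the request type. Below is a minimal sketch, assuming the ASK SDK v2 for Node.js; the hard-coded YES values are placeholders for whatever fulfillment logic your skill actually runs.

// Minimal sketch, assuming the ASK SDK v2 for Node.js (ask-sdk-core).
const CanFulfillIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'CanFulfillIntentRequest';
  },
  handle(handlerInput) {
    // In a real skill, inspect request.intent.name and its slots and
    // decide whether the skill can understand and fulfill each of them.
    return handlerInput.responseBuilder
      .withCanFulfillIntent({
        canFulfill: 'YES',
        slots: {
          name: { canUnderstand: 'YES', canFulfill: 'YES' }
        }
      })
      .getResponse();
  }
};

Send the JSON above to the development stage of the skill (via the ASK CLI or the simulator, as described in the linked docs) and inspect the canFulfillIntent section of the response it produces.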

Related

How to Convert PDF to Image using Azure Logic App

I'm trying to use a Logic App to convert a PDF file into an image (JPG). I did every configuration as this article showed, but it's not working. When I send it to the API, it returns this error:
Not sure whether this is a proper fix; I have raised a thread in the Adobe forum as well.
I switched the Logic App to code view and moved this piece of code
{
  "body": "JPEG",
  "headers": {
    "Content-Disposition": "form-data; name=\"targetFormat\""
  }
},
above this one:
{
  "body": "this ith",
  "headers": {
    "Content-Disposition": "form-data; name=\"InputFile0\""
  }
},
Final version (the original answer showed a screenshot of the reordered code here; a reconstruction follows):
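Reconstructed from the two snippets above (the screenshot itself is not available), the relevant section of the multipart body should end up in this order:
{
  "body": "JPEG",
  "headers": {
    "Content-Disposition": "form-data; name=\"targetFormat\""
  }
},
{
  "body": "this ith",
  "headers": {
    "Content-Disposition": "form-data; name=\"InputFile0\""
  }
},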
Save it, don't switch back to designer view, and run the flow. You will be able to run the flow without the above error.

getting a Firefox plugin to detect and mimic attempts to check for Apple Pay support

Now that Apple's credit card offering is out, I can get 2% cash back on purchases on the web made with Apple Pay. Unfortunately, my browser of choice is Firefox, which doesn't yet support Apple Pay.
I'd like to detect attempts to check for Apple Pay support, so I can alert myself in some way and switch over to Safari to complete my purchase. Per Apple's docs, this check is performed via window.ApplePaySession.
So, I've attempted the following in an extension:
manifest.json
{
  "manifest_version": 2,
  "name": "applepay",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["*://*/*"],
      "js": ["applepay.js"]
    }
  ]
}
applepay.js
window.ApplePaySession = {
  canMakePayments: function () {
    console.log('canMakePayments');
    return Promise.resolve(true);
  },
  canMakePaymentsWithActiveCard: function () {
    console.log('canMakePaymentsWithActiveCard');
    return Promise.resolve(true);
  },
};
I'm able to console.log(window) in applepay.js and get the whole object, but my changes to the object don't appear to take effect - it's acting like window is read-only. What am I missing?
In Firefox, content scripts (the scripts of a WebExtensions add-on) don't share the same JavaScript context as page scripts (the website's own scripts), so assignments the content script makes to window are not visible to the page.
In your content script, do something similar to this:
function notify(message) {
  console.log("do something");
}
exportFunction(notify, window, { defineAs: 'notify' });
Afterwards, the page script will see that window.notify exists.
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Sharing_objects_with_page_scripts
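Applied to your case, a minimal sketch using the same mechanism (cloneInto and wrappedJSObject are the documented Firefox-only helpers for content scripts; the stub below mainly needs to exist for a site's feature detection to fire):

// Content script sketch: expose a stub ApplePaySession to page scripts.
// With cloneFunctions: true, cloneInto also clones the methods into the
// page's context, the same way exportFunction would.
const stub = cloneInto(
  {
    canMakePayments: function () {
      console.log('canMakePayments');
      return true; // the real API returns a boolean here
    },
    canMakePaymentsWithActiveCard: function () {
      console.log('canMakePaymentsWithActiveCard');
      // The real API returns a Promise; a plain value is enough if the
      // site only checks that the method exists.
      return true;
    },
  },
  window,
  { cloneFunctions: true }
);

// Attach it to the page's own window, bypassing the Xray wrapper.
window.wrappedJSObject.ApplePaySession = stub;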

Getting shareable link to document in react app

I am currently making an app that generates itineraries, and I am able to convert the HTML to PDF using jsPDF with something like this:
var doc = new jsPDF();
doc.fromHTML(html);
doc.save("YourItinerary.pdf");
How should I proceed about making a shareable link to this pdf on client-side preferably using an API such as Google Drive?
Getting the shareable link means getting the webViewLink, which you can retrieve by passing webViewLink in the 'fields' parameter of Files.get. This returns a link you can open in any browser. However, you also have to deal with permissions.
To make the webViewLink (your shareable link) work for anyone, you can use this request body in Permissions.create:
{
  "role": "writer",
  "type": "anyone"
}
To make the webViewLink available only to certain users, the request body would look like:
{
  "role": "writer",
  "type": "user",
  "emailAddress": "someuser@gmail.com"
}
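Putting it together with the jsPDF code from the question, here is a hedged sketch. It assumes the Google API JS client (gapi) is loaded and authorized with a Drive scope, and that accessToken holds your OAuth token; both are assumptions, not shown in the question.

// Sketch: upload the jsPDF output to Drive, open it up, return the link.
async function shareItinerary(doc) {
  // jsPDF can emit the generated document as a Blob for upload.
  const pdfBlob = doc.output('blob');

  // Upload the PDF via the Drive v3 multipart upload endpoint.
  const metadata = { name: 'YourItinerary.pdf', mimeType: 'application/pdf' };
  const form = new FormData();
  form.append('metadata',
    new Blob([JSON.stringify(metadata)], { type: 'application/json' }));
  form.append('file', pdfBlob);
  const upload = await fetch(
    'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&fields=id',
    { method: 'POST', headers: { Authorization: 'Bearer ' + accessToken }, body: form }
  );
  const { id } = await upload.json();

  // Permissions.create: make the file available to anyone with the link.
  await gapi.client.drive.permissions.create({
    fileId: id,
    resource: { role: 'writer', type: 'anyone' }
  });

  // Files.get with fields=webViewLink returns the shareable link.
  const file = await gapi.client.drive.files.get({ fileId: id, fields: 'webViewLink' });
  return file.result.webViewLink;
}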

Open URL in IBM Watson conversation

I am using a Bluemix free account to develop a chat-bot using Watson Conversation.
How do I add a clickable URL in the response, or automatically open a URL in the browser?
I have edited the "advanced response" using the suggestions described on this page, but could not get it to work.
How can I achieve that?
I'm not sure I understood your question correctly, but if you want to add a URL inside your Conversation service (IBM Watson) dialog flows, try this:
1. Add the URL inside the dialog node using an <a> tag with a target attribute and href set to your URL. See the example JSON:
"output": {
  "text": "This is a link <a target=\"_blank\" href=\"https://www.choosemyplate.gov\">Food and Nutrition Guide</a>.\n<br/><br/>Talk to you later, bye for now!"
},
2. Note that this does not render inside the Conversation tooling itself, because it is your browser (your client application) that renders the HTML.
3. If you open it in your browser, it works: the link shows up, and the same approach works for other HTML elements, such as buttons.
But if you want the bot to access a URL automatically based on user input, that is done using two features: a request context variable and skip_user_input.
A request is a special context variable that has args, name and result. It is used to tell the calling app that it should perform some action based on this variable.
Setting skip_user_input is optional. In many cases, you might want to execute some business logic in your application and then provide its results via result. Setting skip_user_input to true tells Watson Conversation not to wait for input from the user; thus, your condition on the next node should be based on the content inside result.
{
  "output": {},
  "context": {
    "request": {
      "args": {
        "url_to_invoke": "your_url"
      },
      "name": "Call_A_URL",
      "result": "context.response"
    },
    "skip_user_input": true
  }
}
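On the client side, the handling could look roughly like this. A hedged sketch: sendToConversation is a hypothetical wrapper around your app's call to the Conversation /message endpoint, and the node name Call_A_URL matches the JSON above.

// Check each response for a pending "request" action from the dialog.
async function handleResponse(response) {
  const request = response.context && response.context.request;
  if (request && request.name === 'Call_A_URL') {
    // Perform the action the dialog asked for.
    const res = await fetch(request.args.url_to_invoke);
    // Store the result where the next dialog node expects to find it.
    request.result = await res.text();
    // skip_user_input means: call the service again immediately,
    // without waiting for new user input.
    return handleResponse(await sendToConversation({ context: response.context }));
  }
  return response;
}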
Reference: IBM Professional @Dudi: here.

Is including additional information in the output object a good idea?

I'm experimenting with a Conversation where I would like to modify the output in a couple of different ways:
different output for speech or text
different output depending on the tone of the conversation
It looks like I can add extra output details which make it through to the client ok. For example, adding speech alongside text...
{
  "output": {
    "speech": "Hi. Please see my website for details.",
    "link": "http://www.example.com",
    "text": "Hi. Please see http://www.example.com for details."
  }
}
For the tone, I wondered about making up a custom selection policy, unfortunately it seems to treat it the same as a random selection policy. For example...
{
  "output": {
    "text": {
      "values": [
        "Hello. Please see http://www.example.com for more details.",
        "Hi. Please see http://www.example.com for details."
      ]
    },
    "append": false,
    "selection_policy": "tone"
  }
}
I could just add a separate tone-sensitive object to output though so that's not a big problem.
Would there be any issues adding things to output in this way?
You can definitely use the output field to specify custom variables you want your client app to see, with the benefit that these variables will not persist across multiple dialog rounds (which they would if you added them to the context field).
Currently there is no "easy" way to define your own selection policy (beyond the random and sequential policies supported by the runtime right now), but you could still return an array of possible answers to the client app together with an attribute telling the client app which selection policy to use, and implement that policy in the client app.
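For example, a client-side implementation of the hypothetical "tone" policy from the question might look like this. A sketch only: the toneScore value would come from wherever your app tracks conversation tone (e.g. a Tone Analyzer call), which is an assumption here.

// Client-side custom selection policy over output.text.values.
function selectResponse(output, toneScore) {
  const values = output.text.values || [];
  if (output.selection_policy === 'tone' && values.length > 0) {
    // Hypothetical policy: first (more formal) variant for a negative
    // tone, last (more casual) variant otherwise.
    return toneScore < 0 ? values[0] : values[values.length - 1];
  }
  // Fall back to random selection, as the runtime itself would do.
  return values[Math.floor(Math.random() * values.length)];
}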
