How do I handle 'Yes'/'No' responses from the user in Custom Skill? - alexa

I am building an Alexa custom skill and am facing an issue trying to get Yes/No responses from the user to a question the skill asks:
Alexa: Would you like to know the rules of the game?
User: <Can respond either Yes or No>
Based on the user response I would like to perform a specific action.
Here is my intent schema:
{
"intents": [
{
"intent": "AMAZON.StopIntent"
},
{
"intent": "AMAZON.CancelIntent"
},
{
"intent": "AMAZON.HelpIntent"
},
{
"intent": "StartGame"
},
{
"intent": "GetRules"
}
]
}
Here are my sample utterances:
StartGame Begin the game
StartGame Start the game
GetRules What are the rules
GetRules Get the rules
GetRules Tell me the rules
GetRules Tell me the rules again
The question the skill asks the user is below:
Welcome to the game. Would you like me to tell you the rules?
Whenever I say "Yes", the StartGame intent is triggered (the same happens for "No"); Alexa always picks StartGame. What is the best way to invoke the "GetRules" intent? I want the user to just say Yes/No, not "Get the rules".
Please let me know if this has already been answered or if more information is needed.

You need to use AMAZON.YesIntent and AMAZON.NoIntent.
You can read about them in the Standard Built-in Intents reference:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/built-in-intent-ref/standard-intents

Add the following to your interaction model:
{
"name": "AMAZON.NoIntent",
"samples": []
},
{
"name": "AMAZON.YesIntent",
"samples": []
}
Then provide your business logic for yes/no in your Lambda handlers:
'AMAZON.YesIntent': function () {
//business code
},
'AMAZON.NoIntent': function () {
//business code
}
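If the same skill asks several yes/no questions, a common pattern is to record which question was just asked and branch inside the Yes/No handlers. A minimal sketch (the `pendingQuestion` session attribute and the `routeYesNo` helper are hypothetical names, not part of the SDK):

```javascript
// Sketch: route AMAZON.YesIntent / AMAZON.NoIntent based on which question
// the skill asked last. 'pendingQuestion' is a hypothetical session
// attribute your skill would set right before asking
// "Would you like me to tell you the rules?"
function routeYesNo(sessionAttributes, saidYes) {
  if (sessionAttributes.pendingQuestion === 'ASK_RULES') {
    return saidYes ? 'GetRules' : 'StartGame';
  }
  return 'Unhandled'; // fall back when no question is pending
}
```

Inside 'AMAZON.YesIntent' you would then dispatch to whichever handler this returns.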

Related

Alexa Skills Kit: How to call custom intent from another intent in ASK sdk V2

Hi, I have been using and developing skills on Alexa for quite a few months. Recently I updated to ASK SDK v2. I find everything cool and am stuck nowhere, except that I couldn't find a way to emit an intent now. Earlier we were able to call one intent from another like this:
this.emitWithState(<intent name here>);
Anybody know how to achieve this in ask sdk V2?
Any help would be highly appreciated.
You can do it like this:
const FirstIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'FirstIntent'; // the intent's name, not the handler's
},
handle(handlerInput) {
// some code
return SecondIntentHandler.handle(handlerInput);
},
};
If your skill's interaction model has a dialog model, you can do the above with intent chaining. Intent chaining allows your skill code to start dialog management from any intent, including the LaunchRequest. You can chain intents with Dialog.Delegate as follows:
.addDelegateDirective({
name: 'OrderIntent',
confirmationStatus: 'NONE',
slots: {}
})
Here is the official blog post announcing intent chaining:
https://developer.amazon.com/blogs/alexa/post/9ffdbddb-948a-4eff-8408-7e210282ed38/intent-chaining-for-alexa-skill
I have also written a sample implementing the same : https://github.com/akhileshAwasthi/Alexa-Intent-chaining
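For reference, .addDelegateDirective() just appends a Dialog.Delegate directive to the response. A sketch of the directive object it produces, per the directive structure in the Alexa docs ('OrderIntent' is an example intent name):

```javascript
// Sketch: the Dialog.Delegate directive that .addDelegateDirective()
// adds to the response ('OrderIntent' is an example intent name).
function buildDelegateDirective(intentName) {
  return {
    type: 'Dialog.Delegate',
    updatedIntent: {
      name: intentName,
      confirmationStatus: 'NONE',
      slots: {}
    }
  };
}
```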
With the older alexa-sdk (v1), simply doing
this.emit(<intent_name>)
will work:
const handlers = {
'LaunchRequest': function () {
this.emit('HelloWorldIntent');
},
'HelloWorldIntent': function () {
this.emit(':tell', 'Hello World!');
}
};

How to show the current playing item (song) in player or queue screen in alexa app

I am able to send songs to Amazon Echo devices and the songs play. What I don't understand is what I have to send to show the song in the player or queue screen in the Alexa app, the way other music apps like Saavn, Spotify, etc. do. Please let me know if there is any link or info regarding this.
Check out Amazon's AudioPlayer Interface Reference. It gives a pretty comprehensive guide on how to make the audio interface work. Essentially, it boils down to adding another directive to the list of directives you're returning in your response JSON. For me, this automatically brings up the audio player screen.
A basic version of the audio directive looks like the following:
{
"type": "AudioPlayer.Play",
"playBehavior": "ENQUEUE",
"audioItem": {
"stream": {
"token": "Audio Playback",
"url": "http://www.audio.com/this/is/the/url/to/the/audio",
"offsetInMilliseconds": 0
}
}
}
ENQUEUE adds the specified stream to the end of the current stream queue. The offsetInMilliseconds key sets how far into the stream (in milliseconds) playback should begin.
When you nest this into the larger response JSON, it takes on the form of following:
{
"version": "1.0",
"sessionAttributes": {},
"response": {
"outputSpeech": {},
"card": {},
"reprompt": {},
"directives": [
{
"type": "AudioPlayer.Play",
"playBehavior": "ENQUEUE",
"audioItem": {
"stream": {
"token": "Audio Playback",
"url": "http://www.audio.com/this/is/the/url/to/the/audio",
"offsetInMilliseconds": 0
}
}
}
],
"shouldEndSession": true
}
}
There are a handful of other options to include in your audio directive. These can be found in the link I mentioned above.
I find it most beneficial to make a function where you can pass in given values to create the AudioPlayer directive JSON. For example, in Python, this may look like the following:
def build_audio_directive(play_behavior, token, url, offset):
return {
"type": "AudioPlayer.Play",
"playBehavior": play_behavior,
"audioItem": {
"stream": {
"token": token,
"url": url,
"offsetInMilliseconds": offset
}
}
}
There are multiple ways to build up the response, but I find this way is the easiest for me to visualize.
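Since the Alexa answers above are Node-based, the same helper might look like this in JavaScript (a sketch; the function name and parameter values are placeholders, and the directive structure follows the AudioPlayer Interface Reference):

```javascript
// Sketch: build an AudioPlayer.Play directive in JavaScript
// (same structure as the Python helper above; values are placeholders).
function buildAudioDirective(playBehavior, token, url, offsetMs) {
  return {
    type: 'AudioPlayer.Play',
    playBehavior: playBehavior,
    audioItem: {
      stream: {
        token: token,
        url: url,
        offsetInMilliseconds: offsetMs
      }
    }
  };
}
```

The returned object can then be pushed onto the `directives` array of the response JSON shown earlier.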

getconfig() for Community Connectors, how to employ user input

The Community Connector feature is very new, and I have searched; there isn't much information available. We are building a Community Connector to enable Data Studio to pull API data from Google My Business Insights.
The getConfig() function is described here: https://developers.google.com/datastudio/connector/reference#getconfig
We can display our configuration options to the user, that was easy, but the API reference is unclear about the next step: how to pass the user input along. Pardon me if I am not using the proper terms here.
function getConfig(request) {
var config = {
configParams: [
{
"type": "SELECT_SINGLE",
"name": "SELECT_SINGLE",
"displayName": "Select a Location",
"helpText": "Pick One!",
"options": [
{
"label": "locationName",
"value": "name"
},
{
"label": "altLocationName",
"value": "altName"
}
]
},
]
};
return config;
}
The preceding code displays properly to the user and the user can make a selection from the pull-down in Data Studio when making an initial data connection. But to repeat the question another way: how do we access the selection that the user chose?
The getData(), getSchema(), and getConfig() functions are all called with a parameter (which is called "request" in the documentation). The parameter is an object containing various info at each stage.
At the getConfig() stage, it includes a property called languageCode, in my case set to 'en-GB'.
The getSchema() stage is provided a property called configParams, which is essentially the result of all the settings in getConfig() after the user has set them.
Finally, getData() gets the most info, including whether the request is for extracting sample data for Google to run heuristics on, and, most importantly, the configParams again.
Here's what a sample request object might look like:
{
  languageCode: 'en-GB',        // only in getConfig()
  configParams: {               // in getSchema() + getData()
    SELECT_SINGLE: 'altName'
  },
  scriptParams: {               // only in getData()
    sampleExtraction: true,
    lastRefresh: 'new Date()'
  },
  fields: [                     // only in getData()
    { name: 'FooAwesomeness' },
    { name: 'BarMagicality' },
    { name: 'BazPizzazz' }
  ],
  dimensionsFilters: [          // only in getData()
    [{
      fieldName: "string",
      values: ["string", ...],
      type: DimensionsFilterType,
      operator: Operator
    }]
  ]
}
Do note that the name field in your code, currently set to SELECT_SINGLE, would be better called location, because that is how you'll access it later on. That way you would access:
request.configParams.location
rather than
request.configParams.SELECT_SINGLE
:)
Also note that the format for specifying a configuration screen has been updated. Your configuration could now be written as follows:
function getConfig(request) {
var cc = DataStudioApp.createCommunityConnector();
var config = cc.getConfig();
config
.newSelectSingle()
.setId('location') // You can call this "location"
.setName('Select a Location')
.setHelpText('Pick One!')
.addOption(config.newOptionBuilder()
.setLabel('Location Name')
.setValue('value'))
.addOption(config.newOptionBuilder()
.setLabel('Alternate Location Name')
.setValue('altValue'))
config.setDateRangeRequired(true);
config.setIsSteppedConfig(false);
return config.build();
}
See: Connector API Reference
See: Build a Connector Guide
The user selections will be passed to getSchema() and getData() requests under configParams object.
Using your example, let's assume the user selects altLocationName in the configuration screen. In your getSchema() and getData() functions, request.configParams.SELECT_SINGLE should return altName.
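In code, reading the selection back in getSchema()/getData() is just a property access on the request object. A sketch (the helper name is illustrative; the key must match the name/setId used when building the config):

```javascript
// Sketch: pull the user's configuration choice out of the request object
// passed to getSchema()/getData(). The key ('SELECT_SINGLE' here) must
// match the name/setId used in getConfig().
function getSelectedLocation(request) {
  return (request.configParams || {}).SELECT_SINGLE;
}
```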

Is there currently a consensus on which api design fits a react/redux app?

I'm building a single-page application with react and redux, which of course needs a backend for its data. We're looking at what API architecture would be best, but I couldn't find any consensus on which architecture best fits a react/redux application.
Now I know that you can basically consume data any way you like. But still there are clear market leaders, like redux over mobx for example. That doesn't make them better, but it's nice to at least know where the preference lies.
So my question is: which api architecture is used most for a react/redux app at this point in time?
From our experience, it's best not to make the API "fit" react/redux and vice versa.
Just use thunk-action-creators and handle the API calls in whatever shape or form they are.
Here is a contrived example:
export function getUserById (userId) {
return async dispatch => {
dispatch({ type: 'REQUEST_USER_BY_ID', payload: userId })
try {
const user = await api.get(`/users/${userId}`)
ga('send', 'event', 'user', 'profile', 'view')
dispatch({
type: 'LOAD_USER',
payload: user
})
dispatch({
type: 'NOTIFY_SUCCESS',
payload: `Loaded ${user.firstname} ${user.lastname}!`
})
}
catch (err) {
dispatch({
type: 'NOTIFY_ERROR',
payload: `Failed to load user: ${err.message}`,
})
}
}
}
The biggest benefit of this approach is flexibility.
The API(s) stay completely unopinionated about the consumer.
You can handle errors, add retry logic, fallback logic differently on any page.
Easy to glue together actions that require calls to several different apis in parallel or sequential.
We tried many approaches like "redux over the wire" and Relay's/Apollo's "bind component to query".
This one stuck as the most flexible and easiest to understand and refactor.
It's difficult to find authoritative information or guidelines on this subject, but it's hard to argue with this: if you create an API specifically for one flux/redux app, and you store the data in normalized form in the database, it's rather silly to de-normalize it in your API endpoint only to normalize it again straight after in your client (using normalizr). In that case, just leave the data normalized and pass it over the wire to your client like that.
Concretely you'd have something like this:
GET /meetings
{
"result": ["1", "2"],
"entities": {
"meetings": {
"1": { "id": 1, "date": "2016-01-01", "attendees": [1, 2, 3] },
"2": { "id": 2, "date": "2016-01-02", "attendees": [2, 3, 4] }
},
"users": {
"1": { "id": 1, "name": "User 1" },
"2": { "id": 2, "name": "User 2" },
"3": { "id": 3, "name": "User 3" },
"4": { "id": 4, "name": "User 4" }
}
}
}
Given that each of these entities correspond to a property on your state, such response is trivial to merge into your store in a reducer action, using something like Lodash merge:
return _.merge({}, state, action.entities);
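A reducer along those lines might look like the following sketch (a plain per-entity-type merge instead of Lodash's deep merge, and 'MERGE_ENTITIES' is a hypothetical action type):

```javascript
// Sketch: merge a normalized API payload into the store. Uses a shallow
// merge per entity type instead of Lodash's recursive merge;
// 'MERGE_ENTITIES' is a hypothetical action type.
function entitiesReducer(state = {}, action) {
  if (action.type !== 'MERGE_ENTITIES') return state;
  const next = { ...state };
  for (const key of Object.keys(action.entities)) {
    next[key] = { ...next[key], ...action.entities[key] };
  }
  return next;
}
```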
If you have multiple consumers, you might opt for a normalize=true query parameter. You might also want to combine this with some kind of expand|include=entities,to,include query parameter.
Finally, note that the JSON API spec doesn't play nicely with the normalized structure of flux/redux stores.
Further reading:
https://softwareengineering.stackexchange.com/questions/323325/how-to-structure-a-rest-api-response-for-a-flux-redux-frontend
https://www.reddit.com/r/reactjs/comments/5g3rht/use_normalizr_or_just_flatten_in_the_rest_api/
Nowadays, there's so much new technology that there isn't really a consensus on what to use, like there was back in the day. That doesn't mean you should look for the architecture that best fits a react/redux application; find the architecture that best FITS YOUR PROJECT.

Fetching Facebook Timeline posts using Facebook API

I'm facing the following issues while fetching Facebook user/pages/group timeline posts:
I'm not getting all the information (photo urls, links, created_time, etc.) in the post objects retrieved using this. Here is a sample response:
[
{
"message": "this is going to be real fun https://localhost.com/N1AyEvZp",
"story": "Rajveer Singh added photos to XYZ Photos in My-group.",
"updated_time": "2015-09-03T16:27:34+0000",
"id": "405944472923733_413853035466210"
},
{
"message": "this is going to be fun https://localhost.com/EJo1WvZp",
"story": "Rajveer Singh added photos to XYZ Photos in My-group.",
"updated_time": "2015-09-03T16:14:41+0000",
"id": "405944472923733_413848848799962"
},
{
"message": "this is going to be some real funhttps://localhost.com/VyVKdWga",
"story": "Rajveer Singh added photos to XYZ Photos in My-group.",
"updated_time": "2015-09-02T15:45:08+0000",
"id": "405944472923733_413582785493235"
}
]
This response is missing the photo urls, links, captions, etc. from the posts. Is there a different API for fetching that information? Also, if I directly hit one of the post objects, /405944472923733_413582785493235, then I get the following response:
{
"created_time": "2015-09-02T15:45:07+0000",
"message": "this is going to be some real funhttps://localhost.com/VyVKdWga",
"story": "Rajveer Singh added photos to XYZ Photos in My-group.",
"id": "405944472923733_413582785493235"
}
Though I get created_time in this response, pictures and urls are still missing. I found that this API is deprecated. Is there a different API which can give me all the info?
The above response is also missing comments and replies. On a Google search I found that we can get comments using the /405944472923733_413582785493235/comments API, but that API doesn't mention the exact comment count. It also doesn't give all the comments in a single call; there is a pagination mechanism. Can anyone tell me how to get the exact count of comments and replies to comments, and how to retrieve all the comments in a single API call? If we can't retrieve them all in one go, then how do we use the pagination? I need to send all the comments related to a post to my front-end; with pagination, how can I achieve that? Do I need to store the previous/next URLs somewhere in the front-end? Here is a sample response:
{
"data": [
{
"id": "[post_id]",
"from": {
"name": "[name]",
"id": "[id]"
},
"message": "[message]",
"created_time": "2011-01-23T02:36:23+0000"
},
{
"id": "[id]",
"from": {
"name": "[name]",
"id": "[id]"
},
"message": "[message]",
"created_time": "2011-01-23T05:16:56+0000"
}
],
"paging": {
"cursors": {
"after": "WTI5dGJXVnVkRjlqZFhKemIzSTZOREUzTVRJeE5qWTRORGN5Tmpnd09qRTBOREl3T0RRd09URT0=",
"before": "WTI5dGJXVnVkRjlqZFhKemIzSTZOREUzTVRFNU16RTRORGN5T1RFMU9qRTBOREl3T0RRd05qZz0="
},
"previous": "previousUrl",
"next": "nextUrl"
}
}
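For reference, cursor-based paging like the above is generally consumed by following paging.next until it is absent. A sketch (fetchJson is a hypothetical helper that GETs a URL and returns the parsed JSON):

```javascript
// Sketch: collect every comment by following paging.next until it is
// absent. 'fetchJson' is a hypothetical helper that GETs a URL and
// returns the parsed JSON response.
async function fetchAllComments(firstUrl, fetchJson) {
  const all = [];
  let url = firstUrl;
  while (url) {
    const page = await fetchJson(url);
    all.push(...page.data);
    url = page.paging && page.paging.next; // undefined on the last page
  }
  return all;
}
```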
How do I get the count of all the likes and shares of a post? I simply want the count, not who actually liked/shared the post. I found that /likes gives a list of all those who liked the post, but it doesn't give the count. Here is a sample response:
{
"data": [
{
"id": "824565440992306"
}
],
"paging": {
"cursors": {
"after": "ODI0NTY1NDQwOTkyMzA2",
"before": "ODI0NTY1NDQwOTkyMzA2"
}
}
}
General Information:
I'm using the Node.js JavaScript SDK for hitting the FB APIs.
I'm using correct access token, so that's not an issue for sure.
I have gone through this and this but didn't get any help from them.
I need all the information related to wall posts on my back-end so that I can send it to my front-end for proper rendering. This is a screenshot of my front-end and all the information which I need in the front-end.
Can anyone please try to clear my doubts ?
Also, if there is any optimized way of fetching all this information, then please do suggest. Your suggestions are welcome.