Getting Relative Time in Alexa

I'm trying to develop an Alexa skill, and I need to get a relative time, e.g. "5 minutes ago".
I have defined a time slot for my skill which accepts times like "5 minutes", "6.30 am", or "4 in the morning", but it doesn't accept a time like "5 minutes ago". I'm new to Alexa; can someone help me out with this?
{
"slots": [
{
"name": "time",
"type": "AMAZON.TIME"
}
],
"intent": "ReportTime"
}

You could add a {modifier} slot that can take several keywords like "ago" and "from now". The intent could then have something like the following utterances:
{
"name": "TimeTest",
"samples": [
"what happened {time} {modifier}",
"what will happen {time} {modifier}"
],
"slots": [
{
"name": "time",
"type": "AMAZON.TIME",
"samples": []
},
{
"name": "modifier",
"type": "custom_time_modifier",
"samples": []
}
]
}
with the following custom modifier type:
"types": [
{
"name": "custom_time_modifier",
"values": [
{
"id": null,
"name": {
"value": "ago",
"synonyms": [
"in the past"
]
}
},
{
"id": null,
"name": {
"value": "from now",
"synonyms": [
"in the future"
]
}
}
]
}
]
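In the handler you can then read both slots and work out the direction. Below is a minimal sketch using the Python ASK SDK; the handler class, the response wording, and the fallback to "from now" are illustrative assumptions, not part of the original answer.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response

class TimeTestHandler(AbstractRequestHandler):
    """Handles the TimeTest intent from the model above."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("TimeTest")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # Slots arrive as raw spoken values, e.g. time = "17:05",
        # modifier = "ago" or "in the past". Entity resolution could
        # instead map synonyms onto the canonical values.
        slots = handler_input.request_envelope.request.intent.slots or {}
        time_slot = slots.get("time")
        modifier_slot = slots.get("modifier")
        time_value = time_slot.value if time_slot else None
        modifier = (modifier_slot.value if modifier_slot else None) or "from now"

        direction = "before now" if modifier in ("ago", "in the past") else "after now"
        speech = "You asked about {}, {}.".format(time_value, direction)
        return handler_input.response_builder.speak(speech).response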

Related

Alexa home skill v3 ToggleController skill fails to find devices

I'm trying to use the Alexa Smart Home Skill ToggleController v3 interface to build a skill that will open and close my gate. I read the docs, and successfully implemented the sample light bulb tutorial from end to end. https://developer.amazon.com/en-US/docs/alexa/smarthome/smart-home-skill-tutorial.html
Everything worked fine. Then I created a new skill and tried to implement the ToggleController interface, mapping ON and OFF to Open and Close using semantics.
- Account linking works fine.
- Lambda gets called with the discover directive when I enable the skill in my Alexa app.
- There are no errors in CloudWatch.
- The Alexa Simulator calls the right directives and receives responses with no errors.
- The schema validates successfully.
But when I click "discover devices", I get "No new devices found". I checked all devices in the Alexa app and my device is not there.
Below is the discovery response message that my Lambda returns (from CloudWatch).
Does anyone know what I'm doing wrong?
{
"event": {
"header": {
"namespace": "Alexa.Discovery",
"name": "Discover.Response",
"messageId": "fedfbae4-0ec8-4b4e-81d1-c998bc0ee860",
"payloadVersion": "3"
},
"payload": {
"endpoints": [
{
"endpointId": "pleasant-view-gate",
"manufacturerName": "Ancient Geeks",
"description": "Smart Gate at Pleasant View Cottage",
"friendlyName": "Pleasant Gate",
"displayCategories": [
"OTHER"
],
"capabilities": [
{
"type": "AlexaInterface",
"interface": "Alexa.ToggleController",
"instance": "PleasantView.Gate",
"version": "3",
"properties": {
"supported": [
{
"name": "toggleState"
}
],
"proactivelyReported": false,
"retrievable": true
},
"capabilityResources": {
"friendlyNames": [
{
"#type": "text",
"value": {
"text": "Gate",
"locale": "en-US"
}
}
]
},
"semantics": {
"actionMappings": [
{
"#type": "ActionsToDirective",
"actions": [
"Alexa.Actions.Close"
],
"directive": {
"name": "TurnOff",
"payload": {}
}
},
{
"#type": "ActionsToDirective",
"actions": [
"Alexa.Actions.Open"
],
"directive": {
"name": "TurnOn",
"payload": {}
}
}
],
"stateMappings": [
{
"#type": "StatesToValue",
"states": [
"Alexa.States.Closed"
],
"value": "OFF"
},
{
"#type": "StatesToValue",
"states": [
"Alexa.States.Open"
],
"value": "ON"
}
]
}
},
{
"type": "AlexaInterface",
"interface": "Alexa",
"version": "3"
},
{
"type": "AlexaInterface",
"interface": "Alexa.EndpointHealth",
"version": "3",
"properties": {
"supported": [
{
"name": "connectivity"
}
],
"proactivelyReported": false,
"retrievable": true
}
}
]
}
]
}
}
}

Alexa custom skill: getting FallbackIntent instead of validation prompt

I have an interaction model with a GetMenuIntent which I can invoke with "what's for {meal}". meal is a custom slot of type Meal with allowed values of "breakfast" and "lunch". I added validation on the meal slot in my GetMenuIntent to only allow the values defined in the slot type, and it works great for those configured values.
However, after saving and building my model, when I put "what's for dinner" into the Utterance Profiler or the interactive tester, it ends up calling my FallbackIntent instead of reprompting for a correct value.
I feel like what I'm trying to do isn't really much different than Amazon's own example here.
Here's "whats for lunch" working correctly:
And here's "whats for dinner" ignoring my GetMenuIntent and calling FallbackIntent instead:
Here's my interaction model:
{
"interactionModel": {
"languageModel": {
"invocationName": "school menus",
"intents": [
{
"name": "AMAZON.CancelIntent",
"samples": []
},
{
"name": "AMAZON.HelpIntent",
"samples": []
},
{
"name": "AMAZON.StopIntent",
"samples": []
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
},
{
"name": "GetMenuIntent",
"slots": [
{
"name": "meal",
"type": "Meal"
},
{
"name": "date",
"type": "AMAZON.DATE"
}
],
"samples": [
"whats for {meal} {date}",
"what will you have for {meal} {date}",
"what is on the menu for {meal} {date}",
"what are we having for {meal} {date}",
"what we're having for {meal} {date}"
]
},
{
"name": "AMAZON.FallbackIntent",
"samples": []
}
],
"types": [
{
"values": [
{
"name": {
"value": "lunch"
}
},
{
"name": {
"value": "breakfast"
}
}
],
"name": "Meal"
}
]
},
"dialog": {
"intents": [
{
"name": "GetMenuIntent",
"confirmationRequired": false,
"prompts": {},
"slots": [
{
"name": "meal",
"type": "Meal",
"elicitationRequired": false,
"confirmationRequired": false,
"prompts": {},
"validations": [
{
"type": "hasEntityResolutionMatch",
"prompt": "Slot.Validation.806855880612.19281662909.602239253259"
}
]
},
{
"name": "date",
"type": "AMAZON.DATE",
"elicitationRequired": false,
"confirmationRequired": false,
"prompts": {}
}
]
}
],
"delegationStrategy": "ALWAYS"
},
"prompts": [
{
"id": "Slot.Validation.806855880612.19281662909.602239253259",
"variations": [
{
"type": "PlainText",
"value": "Hmm, I don't know about that menu type. Please try again."
}
]
}
]
},
"version": "48"
}
Since this is 6 months old I assume you figured out by now that your interaction model only includes lunch and breakfast. Because "dinner" matches no value in the Meal slot type, the whole utterance gets routed to AMAZON.FallbackIntent instead of GetMenuIntent, so the hasEntityResolutionMatch validation (which only runs after the intent has been selected) never gets a chance to reprompt.
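One possible workaround (a sketch, not from the original thread): add the out-of-domain value, e.g. "dinner", to the Meal slot type so the utterance still routes to GetMenuIntent, and switch the validation to an isInSet rule so the unwanted value is rejected with your reprompt:
"validations": [
  {
    "type": "isInSet",
    "prompt": "Slot.Validation.806855880612.19281662909.602239253259",
    "values": [
      "breakfast",
      "lunch"
    ]
  }
]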

Azure Search - Cannot merge (with skill) data obtained from the KeyPhraseExtractionSkill

I am creating an indexer that takes a document, runs the KeyPhraseExtractionSkill and outputs it back to the index.
For many documents, this works out of the box. But for records whose content is over 50,000 characters, it does not. OK, no problem; this is clearly stated in the docs.
What the docs suggest is to use the Text Split skill. So I used the Text Split skill to split the original document into pages and passed every page to the KeyPhraseExtractionSkill. Then the results need to be merged back, as we end up with an array of arrays of strings. Unfortunately, it seems that the Merge skill does not accept an array of arrays, just an array.
https://i.imgur.com/dBD4qgb.png <- Link to the skillset hierarchy.
This is the error reported by Azure:
Required skill input was not of the expected type 'StringCollection'. Name: 'itemsToInsert', Source: '/document/content/pages/*/keyPhrases'. Expression language parsing issues:
What I want to achieve, at the end of the day, is to run the KeyPhraseExtractionSkill on text larger than 50,000 characters and eventually add the key phrases back to the index.
JSON for the skillset:
{
"@odata.context": "https://-----------.search.windows.net/$metadata#skillsets/$entity",
"@odata.etag": "\"0x8D957466A2C1E47\"",
"name": "devalbertcollectionfilesskillset2",
"description": null,
"skills": [
{
"#odata.type": "#Microsoft.Skills.Text.SplitSkill",
"name": "SplitSkill",
"description": null,
"context": "/document/content",
"defaultLanguageCode": "en",
"textSplitMode": "pages",
"maximumPageLength": 1000,
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "pages"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
"name": "EntityRecognitionSkill",
"description": null,
"context": "/document/content/pages/*",
"categories": [
"person",
"quantity",
"organization",
"url",
"email",
"location",
"datetime"
],
"defaultLanguageCode": "en",
"minimumPrecision": null,
"includeTypelessEntities": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "persons",
"targetName": "people"
},
{
"name": "organizations",
"targetName": "organizations"
},
{
"name": "entities",
"targetName": "entities"
},
{
"name": "locations",
"targetName": "locations"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
"name": "KeyPhraseExtractionSkill",
"description": null,
"context": "/document/content/pages/*",
"defaultLanguageCode": "en",
"maxKeyPhraseCount": null,
"modelVersion": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "keyPhrases",
"targetName": "keyPhrases"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.MergeSkill",
"name": "Merge Skill - keyPhrases",
"description": null,
"context": "/document",
"insertPreTag": " ",
"insertPostTag": " ",
"inputs": [
{
"name": "itemsToInsert",
"source": "/document/content/pages/*/keyPhrases"
}
],
"outputs": [
{
"name": "mergedText",
"targetName": "keyPhrases"
}
]
}
],
"cognitiveServices": {
"#odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
"key": "------",
"description": "/subscriptions/13abe1c6-d700-4f8f-916a-8d3bc17bb41e/resourceGroups/mde-dev-rg/providers/Microsoft.CognitiveServices/accounts/mde-dev-cognitive"
},
"knowledgeStore": null,
"encryptionKey": null
}
Please let me know if there is anything else that I can add to improve the question. Thanks!
You don't have to merge the key phrase outputs to insert them to the index.
Assuming your index already has a field called mykeyphrases of type Collection(Edm.String), to populate it with the key phrase outputs, add this indexer output field mapping:
"outputFieldMappings": [
...
{
"sourceFieldName": "/document/content/pages/*/keyPhrases/*",
"targetFieldName": "mykeyphrases"
},
...
]
The /* at the end of sourceFieldName is important for flattening the array of arrays of strings. The same path will also work as a skill input if you want to pass a flat array of strings to another skill for further enrichment.
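For context, here is a minimal sketch of where that mapping sits in the indexer definition; the indexer, data source, and index names are placeholders, not from the original question:
{
  "name": "<your-indexer>",
  "dataSourceName": "<your-datasource>",
  "targetIndexName": "<your-index>",
  "skillsetName": "devalbertcollectionfilesskillset2",
  "outputFieldMappings": [
    {
      "sourceFieldName": "/document/content/pages/*/keyPhrases/*",
      "targetFieldName": "mykeyphrases"
    }
  ]
}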

Alexa saying "minute" (the time) wrong, as "minute" (very small)

Alexa is saying "minute" wrong. How can I make her say "minute" as in 60 seconds when replying to my skill?
At the moment she says "as of 5 minutes ago" as in 5 very small objects haha.
This is my skill:
{
"interactionModel": {
"languageModel": {
"invocationName": "jarvis",
"intents": [
{
"name": "NSStatus",
"slots": [],
"samples": [
"How am I doing"
]
},
{
"name": "UploaderBattery",
"slots": [],
"samples": [
"How is my uploader battery"
]
},
{
"name": "PumpBattery",
"slots": [],
"samples": [
"How is my pump battery"
]
},
{
"name": "LastLoop",
"slots": [],
"samples": [
"When was my last loop"
]
},
{
"name": "MetricNow",
"slots": [
{
"name": "metric",
"type": "LIST_OF_METRICS"
},
{
"name": "pwd",
"type": "AMAZON.US_FIRST_NAME"
}
],
"samples": [
"What is my {metric}",
"What my {metric} is",
"What is {pwd} {metric}"
]
},
{
"name": "InsulinRemaining",
"slots": [
{
"name": "pwd",
"type": "AMAZON.US_FIRST_NAME"
}
],
"samples": [
"How much insulin do I have left",
"How much insulin do I have remaining",
"How much insulin does {pwd} have left",
"How much insulin does {pwd} have remaining"
]
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
}
],
"types": [
{
"name": "LIST_OF_METRICS",
"values": [
{
"name": {
"value": "bg"
}
},
{
"name": {
"value": "blood glucose"
}
},
{
"name": {
"value": "number"
}
},
{
"name": {
"value": "iob"
}
},
{
"name": {
"value": "insulin on board"
}
},
{
"name": {
"value": "current basal"
}
},
{
"name": {
"value": "basal"
}
},
{
"name": {
"value": "cob"
}
},
{
"name": {
"value": "carbs on board"
}
},
{
"name": {
"value": "carbohydrates on board"
}
},
{
"name": {
"value": "loop forecast"
}
},
{
"name": {
"value": "ar2 forecast"
}
},
{
"name": {
"value": "forecast"
}
},
{
"name": {
"value": "raw bg"
}
},
{
"name": {
"value": "raw blood glucose"
}
}
]
}
]
}
}
}
Obviously this can't be launched until this is resolved, as it just sounds ridiculous hahah.
I tried to do some googling and searching on here, but it's really hard to research when the two words are spelt the same - distinguishing between minute and minute, see!
Thanks :D
Use SSML tags in your response text.
<speak>
<say-as interpret-as="time" > 5' </say-as>
</speak>
will be pronounced as 5 minutes.
<speak>
<say-as interpret-as="time" > 5'10" </say-as>
</speak>
will be pronounced as 5 minutes and ten seconds.
The say-as tag of SSML will help you to interpret your response in the desired way. You can use interpret-as="time" to make Alexa interpret it as time.
<speak>
<say-as interpret-as="time" > 5'10" </say-as> ago.
</speak>
Beware that if you want just "minute" and not seconds, use it like 5'0". If you only include 5' it will read as "five apostrophe".
<say-as interpret-as="time" > 5'0" </say-as> ago.
In the same way for seconds alone use it like 0'10". This will read as "ten seconds".
<say-as interpret-as="time" > 0'10" </say-as>
More on say-as tag here.
phoneme
If you have some complex pronunciations or the same text has different pronunciations, then use phoneme tag to provide its exact phonetic pronunciation.
For example, "minute" (time) and "minute" (tiny) can be pronounced differently by giving their exact phonetic pronunciation symbols.
<speak>
<phoneme alphabet="ipa" ph="mʌɪˈnjuːt">minute</phoneme> particles.
One <phoneme alphabet="ipa" ph="ˈmɪnɪt">minute</phoneme>.
</speak>
This will be spoken as
"minute particles" and "one minute".
More on phoneme tag here.
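As a rough sketch (an assumption, not from the original answers), returning such SSML from a Python ASK SDK handler could look like this; the "LastLoop" intent comes from the model above, and the response builder wraps the string in <speak> tags for you:
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response

class LastLoopHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("LastLoop")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # 5'0" is read as "five minutes"; the SDK adds the
        # surrounding <speak>...</speak> automatically.
        speech = 'Your last loop was <say-as interpret-as="time">5\'0"</say-as> ago.'
        return handler_input.response_builder.speak(speech).response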

Alexa skills: No value returned for custom slot

Hopefully someone can help me with this because I've been stumped for a week.
I am creating a simple Alexa skill from one of the samples. It's the color picker skill: you tell Alexa your favorite color, and then you ask her your favorite color. I'm using custom slots, and the skill service doesn't want to return a value for the color. It launches successfully and loads the correct intent; however, it doesn't send the correct value. There isn't even a value parameter in the output, just name and confirmationStatus.
Here's my skill's JSON, followed by the request JSON output after I tell the skill "My color is red." I want the skill to pass "red" into the value parameter.
{
"interactionModel": {
"languageModel": {
"invocationName": "color picker",
"intents": [
{
"name": "MyColorIsIntent",
"slots": [
{
"name": "color",
"type": "LIST_OF_COLORS"
}
],
"samples": [
"my color is {color}",
"{color} is my color"
]
},
{
"name": "WhatsMyColorIntent",
"slots": [],
"samples": [
"what's my color",
"what's my favorite color"
]
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
}
],
"types": [
{
"name": "LIST_OF_COLORS",
"values": [
{
"name": {
"value": "green"
}
},
{
"name": {
"value": "red"
}
},
{
"name": {
"value": "yellow"
}
},
{
"name": {
"value": "orange"
}
},
{
"name": {
"value": "black"
}
},
{
"name": {
"value": "blue"
}
}
]
}
]
}
}
}
Below is the request:
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.918d6da6-cd7e-4bb8-a2a9-41fb1af8a354",
"timestamp": "2018-10-01T01:53:56Z",
"locale": "en-US",
"intent": {
"name": "MyColorIsIntent",
"confirmationStatus": "NONE",
"slots": {
"Color": {
"name": "Color",
"confirmationStatus": "NONE"
}
}
}
}
Your issue is that the slot "color" should be named "Color", with the sample references changed to match: "my color is {Color}" and "{Color} is my color". It is not picking up the slot because the names are not identical.
Be sure to also complete the skill with the required intents for stop and help; currently, it will just keep asking for color choices until you kill the program.
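For illustration, the corrected intent fragment would look like this:
{
  "name": "MyColorIsIntent",
  "slots": [
    {
      "name": "Color",
      "type": "LIST_OF_COLORS"
    }
  ],
  "samples": [
    "my color is {Color}",
    "{Color} is my color"
  ]
}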
This is what happened:
I was working on different versions of the same skill, each with the same invocation name. When I typed in the invocation name, it actually opened an outdated version of the skill (I hadn't deleted the old skills - I had like 3 different ones - I like to start over). I didn't realize that when you click "test" you can test any of your saved skills, not just the one you have open.
