InvalidSlotTypeLiteral: Slot type "AMAZON.LITERAL" for slot "Text" in intent "RawText" is not valid

AMAZON.LITERAL is deprecated as of October 22, 2018. Older skills built with AMAZON.LITERAL continue to work for now, but new skills cannot use it.
What is the alternative to AMAZON.LITERAL? I want every word the user speaks to the Alexa device to reach my endpoint API.
I have created custom slots, but my endpoint is not called every time.
Does anyone have a solution to this?

You will not get the entire user input through any inbuilt slots or intents. The closest one to your requirement that I can think of is AMAZON.SearchQuery.
AMAZON.SearchQuery
AMAZON.SearchQuery is a phrase-type slot that lets you capture less-predictable input that makes up the search query. You can use phrase slots when you cannot predict all possible values the user might say, or when there may not be an identifiable pattern that can be captured by a custom slot. The intended use of this slot is to capture short messages, comments, search queries, and other short free-form text, not the entire user spoken utterance.
Ex:
{
  "intents": [
    {
      "name": "SearchIntent",
      "slots": [
        {
          "name": "Query",
          "type": "AMAZON.SearchQuery"
        },
        {
          "name": "CityList",
          "type": "AMAZON.US_CITY"
        }
      ],
      "samples": [
        "search for {Query} near me",
        "find out {Query}",
        "search for {Query}",
        "give me details about {CityList}"
      ]
    }
  ]
}
You cannot add sample intent utterances consisting of only phrase-type slots. That means you cannot give something like this:
{
  "name": "QueryIntent",
  "slots": [
    {
      "name": "query",
      "type": "AMAZON.SearchQuery"
    }
  ],
  "samples": [
    "{query}" // utterance with only a phrase-type slot
  ]
}
More on AMAZON.SearchQuery here
Alexa will always fire a POST request to your skill's endpoint with a payload whenever there is a user interaction.
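For the SearchIntent above, the relevant part of that POST payload would look roughly like this (a trimmed sketch; the spoken value is just an illustration):
{
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "SearchIntent",
      "slots": {
        "Query": {
          "name": "Query",
          "value": "best pizza places near downtown"
        }
      }
    }
  }
}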

Related

How do I restrict medication annotations to a specific document section via IBM Watson Annotator for Clinical Data (ACD)?

I’m using the IBM Watson Annotator for Clinical Data (ACD) API hosted in IBM Cloud to detect medication mentions within discharge summary clinic notes. I’m using the out-of-the-box medication annotator provided with ACD.
I’m able to detect and extract medication mentions, but I ONLY want medications mentioned within “DISCHARGE MEDICATIONS” or “DISCHARGE INSTRUCTIONS” sections.
Is there a way I can restrict ACD to only return medication mentions that appear within those two sections? I’m only interested in discharge medications.
For example, given the following contrived (non-PHI) text:
“Patient was previously prescribed cisplatin. DISCHARGE MEDICATIONS: 1. Aspirin 81 mg orally once daily.”
I get two medication mentions: one over “cisplatin” and another over “aspirin” - I only want the latter, since it appears within the “DISCHARGE MEDICATIONS” section.
Since the ACD medication annotator captures section headings as part of the mention annotations that appear within a section, you can define an inclusive filter with two conditions: (1) a match on the desired normalized section headings, and (2) a check that the section-heading fields exist at all, in case a mention appears outside of any section and does not carry the section header fields. This filters out any medication mentions in the ACD response that don't appear within a "DISCHARGE MEDICATIONS" section. I added a couple of other related normalized section headings so you can see how that's done. Feel free to modify the sample below to meet your needs.
Here's a sample flow you can persist via POST /flows and then reference on the analyze call as POST /analyze/{flow_id} - e.g. POST /analyze/discharge_med_flow
{
  "id": "discharge_med_flow",
  "name": "Discharge Medications Flow",
  "description": "Detect medication mentions within DISCHARGE MEDICATIONS sections",
  "annotatorFlows": [
    {
      "flow": {
        "elements": [
          {
            "annotator": {
              "name": "medication",
              "configurations": [
                {
                  "filter": {
                    "target": "unstructured.data.MedicationInd",
                    "condition": {
                      "type": "all",
                      "conditions": [
                        {
                          "type": "all",
                          "conditions": [
                            {
                              "type": "match",
                              "field": "sectionNormalizedName",
                              "values": [
                                "Discharge medication",
                                "Discharge instructions",
                                "Medications on discharge"
                              ],
                              "not": false,
                              "caseInsensitive": true,
                              "operator": "equals"
                            },
                            {
                              "type": "match",
                              "field": "sectionNormalizedName",
                              "operator": "fieldExists"
                            }
                          ]
                        }
                      ]
                    }
                  }
                }
              ]
            }
          }
        ],
        "async": false
      }
    }
  ]
}
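Once the flow is persisted, the analyze call could look roughly like this (a sketch using Python's requests library; the instance URL, version date, and API key are placeholders for your own service values):

import requests

note = ("Patient was previously prescribed cisplatin. "
        "DISCHARGE MEDICATIONS: 1. Aspirin 81 mg orally once daily.")

# POST /analyze/{flow_id} with the note text as a plain-text body.
resp = requests.post(
    "https://<your-acd-instance-url>/v1/analyze/discharge_med_flow",
    params={"version": "2021-01-01"},
    auth=("apikey", "<your-api-key>"),
    headers={"Content-Type": "text/plain"},
    data=note,
)
print(resp.json())  # only mentions within the discharge sections should remain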
See the IBM Watson Annotator for Clinical Data filtering docs for additional details.

How to use Split Skill in Azure Cognitive Search?

I am new to Azure Cognitive Search. I have a docx file stored in Azure Blob Storage, and I am using #Microsoft.Skills.Text.SplitSkill to split the document into multiple pages (chunks). But when I index the output of this skill, I get the entire docx file content. How do I return the "pages" from the SplitSkill so that the user sees the portion of the original document that was found by their search, instead of the entire document?
Please assist me. Thank you in advance.
The split skill allows you to split text into smaller chunks/pages that can be then processed by additional cognitive skills.
Here is what a minimalistic skillset that does splitting and translation may look like:
"skillset": [
{
"#odata.type": "#Microsoft.Skills.Text.SplitSkill",
"textSplitMode": "pages",
"maximumPageLength": 1000,
"defaultLanguageCode": "en",
"inputs": [
{
"name": "text",
"source": "/document/content"
},
{
"name": "languageCode",
"source": "/document/language"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "mypages"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.TranslationSkill",
"name": "#2",
"description": null,
"context": "/document/mypages/*",
"defaultFromLanguageCode": null,
"defaultToLanguageCode": "es",
"suggestedFrom": "en",
"inputs": [
{
"name": "text",
"source": "/document/mypages/*"
}
],
"outputs": [
{
"name": "translatedText",
"targetName": "translated_text"
}
]
}
]
Note that the split skill generates a collection of text elements under the "/document/mypages" node in the enriched tree. Also note that by providing the context "/document/mypages/*" to the translation skill, we are telling the translation skill to perform translation on each page.
I should point out that documents will still be indexed at the document level though. Skillsets are not really built to "change the cardinality of the index". That said, a workaround for that may be to project each of the pages as separate elements into a knowledge store, and then create a separate index that is actually focused on indexing each page.
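For instance, a sketch of what an object projection into the knowledge store could look like inside the skillset definition (the connection string and container name are placeholders; verify the exact shape against the docs below):
"knowledgeStore": {
  "storageConnectionString": "<your-storage-connection-string>",
  "projections": [
    {
      "objects": [
        {
          "storageContainer": "pages",
          "source": "/document/mypages/*"
        }
      ],
      "tables": [],
      "files": []
    }
  ]
}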
Learn more about the knowledge store projections here:
https://learn.microsoft.com/en-us/azure/search/knowledge-store-concept-intro

How can Alexa take a slot-only utterance?

I'm trying to write my first Alexa skill, but the application flow is a bit confusing, even after reading all the documentation about dialog delegation and so on. I'd really love a bit of advice.
The Flow I'm Pursuing
"Alexa, start Movietime Quiz."
Welcome to Movietime Quiz. Before we begin, what team are you on: red or blue?
"Blue."
Blue was always the best team. Question 1: which of these films was not directed by Alfred Hitchcock? A: Vertigo, B: Rope, C: Happy Gilmore.
"C."
Correct! 10 points to the blue team. Question 2...
This is a boiled-down example to illustrate my problem in the shortest, clearest way, before you wonder why teams need to be involved in this.
My Instinct/Naive Approach
Have the initial launch-request handler say welcome-and-what-team, and then have two intents. The first would obviously be AnswerQuestionIntent, which listens for "A", "B", "C" or "D." The second would be SetTeamIntent, which listens for "red" or "blue."
I'd have an array with ~100 trivia questions. When the game starts, set a session attribute 'currentQuestion' to 0. In AnswerQuestionIntent, after handling the user's correct/incorrect response, increment that number, and if it's at 9, end the game; if not, ask a random question.
My Problem
I can't actually figure out how to have Alexa use a single slot as an utterance. I mean, I'd want to have a 'team' slot type (values 'red' and 'blue') and an 'answer' slot type (values 'A', 'B', 'C', and 'D'). SetTeamIntent should be activated by the utterance {team} and AnswerQuestionIntent by {answer}, but the developer.amazon.com skill builder gives me 'Bad Request' errors when I try to set that.
I tried looking at the SDK examples on GitHub, but I'm a bit lost because I've been using the GUI skill builder while learning and am not sure exactly how it maps -- not well enough to read the solution, anyway.
There are two different ways to handle this.
1. ElicitSlot Directive WITH Dialog Model
After you launch your skill and trigger an intent, you can respond with an ElicitSlot directive.
Interaction Model: You define the slots and an intent, for example {team} and {answer} in PlayGameIntent. Provide utterances for the intent to get triggered, for example "start a game".
Skill: After PlayGameIntent is triggered, return a response with an ElicitSlot directive, something like the following:
{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "What team are you on? Blue or Red?"
    },
    "shouldEndSession": false,
    "directives": [
      {
        "type": "Dialog.ElicitSlot",
        "slotToElicit": "team",
        "updatedIntent": {
          "name": "PlayGameIntent",
          "confirmationStatus": "NONE",
          "slots": {
            "team": {
              "name": "team",
              "confirmationStatus": "NONE"
            },
            "answer": {
              "name": "answer",
              "confirmationStatus": "NONE"
            }
          }
        }
      }
    ]
  }
}
The user can now provide an answer for the slot {team}, and Alexa sends another IntentRequest for PlayGameIntent. You re-elicit as many times as you need until your game is finished.
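In your skill code, that back-and-forth could look roughly like this (a sketch using the Python ASK SDK with the same intent and slot names; the handler name and prompts are my own, mirroring the raw JSON above):

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model.dialog import ElicitSlotDirective

class PlayGameIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("PlayGameIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        if not slots["team"].value:
            # No team yet: elicit the {team} slot and keep the session open.
            return (handler_input.response_builder
                    .speak("What team are you on? Blue or red?")
                    .add_directive(ElicitSlotDirective(slot_to_elicit="team"))
                    .response)
        # Team is set: ask a quiz question and elicit the {answer} slot.
        return (handler_input.response_builder
                .speak("Question 1: which of these films was not directed "
                       "by Alfred Hitchcock? A: Vertigo, B: Rope, C: Happy Gilmore.")
                .add_directive(ElicitSlotDirective(slot_to_elicit="answer"))
                .response)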
2. Custom Intents WITHOUT Dialog Model
Without using the Dialog Model, you have no restriction against slot-only utterances, so you can build your intent schema as you described. If you leave the Skill Builder Beta, you automatically disable the Dialog Model for your interaction model.
You can then build an intent schema with sample utterances like this:
AnswerQuestionIntent {answer}
SetTeamIntent {team}
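For reference, the matching legacy intent schema might look like this (TEAM_LIST and ANSWER_LIST are hypothetical custom slot types you'd define with the values red/blue and A/B/C/D):
{
  "intents": [
    {
      "intent": "SetTeamIntent",
      "slots": [
        {
          "name": "team",
          "type": "TEAM_LIST"
        }
      ]
    },
    {
      "intent": "AnswerQuestionIntent",
      "slots": [
        {
          "name": "answer",
          "type": "ANSWER_LIST"
        }
      ]
    }
  ]
}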

What is the meaning of "list" in this array iteration?

Below I have an environment file and a recipe. Can you explain what "list" is here? I am not getting it.
{
  "json_class": "Chef::Environment",
  "description": "prod environment",
  "default_attributes": {
  },
  "chef_type": "environment",
  "override_attributes": {
    "user": {
      "mapr": {
        "id": "application",
        "group": "application",
      },
      "local": {
        "id": "chef",
        "group": "chef"
      },
      "ldap": {
        "id": "ldap",
        "sudo": true,
      },
    }
    "name": "prod"
}
Below is the recipe. What is "list" here? I did not get it.
node['user_create'].each do |list, user|
  group user['group'] do
    group_name user['group']
    gid user['gid']
    action [:create]
    ignore_failure true
  end

  user user do
    username user['id']
    uid user['uid']
    group user['gid']
    home user['home']
    manage_home true
  end

  if list != 'ldap'
How is "list" being passed into the if condition here?
You are not actually passing in any attributes via the environment, which you can see because the values of default_attributes and override_attributes are both just empty hashes { }. The data you've included there is just ignored by Chef as noise. In the future I recommend you use the Ruby DSL for environment files as it has more error checking for things like this (though not perfect error checking).
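As for "list" itself: node['user_create'] is a hash, and Hash#each yields each key/value pair to the block, so list is the key ('mapr', 'local', 'ldap') and user is the nested hash of that user's attributes. A minimal illustration outside Chef (the hash shape mirrors the "user" data above; the real node['user_create'] attribute is assumed to look similar):

users = {
  'mapr' => { 'id' => 'application', 'group' => 'application' },
  'ldap' => { 'id' => 'ldap', 'sudo' => true }
}

users.each do |list, user|
  # `list` is the key ('mapr', 'ldap'); `user` is the nested hash.
  next if list == 'ldap' # same effect as the recipe's `if list != 'ldap'`
  puts "would create user #{user['id']}"
end
# prints: would create user application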
As an aside, you've been asking a lot of questions on here and seem to be struggling with Chef. Please consider joining the Chef community Slack team and asking there instead, as it's a full chat system and the community can offer real-time help rather than random blurbs here.

Only getting single word parameters from Alexa Skills Kit

I'm writing an Alexa Skill, and I can only get single-word parameters into my code.
Here is the intent schema:
{
  "intents": [
    {
      "intent": "HeroQuizIntent",
      "slots": [
        {
          "name": "SearchTerm",
          "type": "SEARCH_TERMS"
        }
      ]
    },
    {
      "intent": "HeroAnswerIntent",
      "slots": [
        {
          "name": "SearchTerm",
          "type": "SEARCH_TERMS"
        }
      ]
    },
    {
      "intent": "AMAZON.HelpIntent"
    }
  ]
}
and my sample utterances are:
HeroQuizIntent quiz me
HeroAnswerIntent is it {SearchTerm}
For the HeroAnswerIntent, I'm checking the SearchTerm slot, and I'm only getting single words in there.
So, "Peter Parker" gives "Parker", "Steve Rogers" gives "Rogers", and "Tony Stark" gives "Stark".
How do I accept multiple words into a slot?
I've had the same problem with my skill, and the only solution that worked for me was to use several slots. You then need to check whether these slots are not empty, and concatenate them.
Intent schema:
{
  "intent": "HeroAnswerIntent",
  "slots": [
    {
      "name": "SearchTermFirst",
      "type": "SEARCH_TERMS"
    },
    {
      "name": "SearchTermSecond",
      "type": "SEARCH_TERMS"
    },
    {
      "name": "SearchTermThird",
      "type": "SEARCH_TERMS"
    }
  ]
},
Sample utterances:
HeroAnswerIntent is it {SearchTermFirst}
HeroAnswerIntent is it {SearchTermFirst} {SearchTermSecond}
HeroAnswerIntent is it {SearchTermFirst} {SearchTermSecond} {SearchTermThird}
Finally, you need to put each of your words on a separate line in the SEARCH_TERMS slot definition.
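In your skill code you'd then check which slots actually received values and concatenate them, something like this (a sketch over the raw intent JSON from the request; adapt it to whatever SDK you use):

def full_search_term(intent):
    # Collect whichever of the three slots actually got a value, in order.
    names = ("SearchTermFirst", "SearchTermSecond", "SearchTermThird")
    words = []
    for name in names:
        value = intent.get("slots", {}).get(name, {}).get("value")
        if value:
            words.append(value)
    return " ".join(words)

# e.g. full_search_term(event["request"]["intent"]) -> "Peter Parker"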
Also, AMAZON.LITERAL sometimes does not pass the variable into the skill at all, even if you test it using the service simulator (skill console, test tab).
The solution @Xanxir mentioned works equivalently with the newer custom slots format. In this case, you'd just put multiple-length examples in your custom list of values for the slot type.
I had to change the Slot type to AMAZON.LITERAL.
The trick was that in the sample utterances, I also had to provide multiple utterances to demonstrate the minimum and maximum sizes of literals that Alexa should interpret. It's wonky, but it works.
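For example (the legacy {phrase|SlotName} syntax; the phrases just demonstrate short, medium, and long literals):
HeroAnswerIntent is it {Stark|SearchTerm}
HeroAnswerIntent is it {Tony Stark|SearchTerm}
HeroAnswerIntent is it {Tony Stark the Iron Man|SearchTerm}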
Here's the reference for it: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference
AMAZON.SearchQuery
You can use this in your utterances, and it will capture all the words the user speaks in between. It's rather accurate.
It will solve your problem.
Ref link: Alexa SearchQuery
