My custom slot type is taking on unexpected values - alexa

I noticed something strange when testing my interaction model with the Alexa skills kit.
I defined a custom slot type, like so:
CAR_MAKERS Mercedes | BMW | Volkswagen
And my intent schema was something like:
{
"intents": [
{
"intent": "CountCarsIntent",
"slots": [
{
"name": "CarMaker",
"type": "CAR_MAKERS"
},
...
with sample utterances such as:
CountCarsIntent Add {Amount} cars to {CarMaker}
Now, when testing in the developer console, I noticed that I can write stuff like:
"Add three cars to Ford"
And it will actually parse this correctly! Even though "Ford" was never mentioned in the interaction model! The lambda request is:
"request": {
"type": "IntentRequest",
...
"intent": {
"name": "CountCarsIntent",
"slots": {
"CarMaker": {
"name": "ExpenseCategory",
"value": "whatever"
},
...
This really surprises me, because the documentation on custom slot types is pretty clear about the fact that the slot can only take the values which are listed in the interaction model.
Now, it seems that values are also parsed dynamically! Is this a new feature, or am I missing something?

Actually that is normal (and good, IMO). Alexa uses the word list that you provide as a guide, not a definitive list.
If it didn't have this flexibility then there would be no way to know if users were using words that you weren't expecting. This way you can learn and improve your list and handling.

Alexa treats the provided slot values as 'samples'. Hence, slot values that are not mentioned in the interaction model will also get mapped.
When you create a custom slot type, a key concept to understand is that this is training data for Alexa’s NLP (natural language processing). The values you provide are NOT a strict enum or array that limit what the user can say. This has two implications: 1) words and phrases not in your slot values will be passed to you, and 2) your code needs to perform any validation you require if what’s said is unknown.
Since you know the acceptable values for that slot, always perform slot-value validation in your code. That way, when you get something other than a valid car manufacturer, or something you don't support, you can politely respond with something like
"Sorry, I didn't understand. Can you repeat that?"
or
"Sorry, we don't have that in our list. Can you please
select something from [give some samples from your list]?"
More info here
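For instance, a minimal sketch of that validation in a TypeScript/Node.js handler (assuming the ask-sdk-core helpers and a hypothetical KNOWN_MAKERS list; not the original poster's code):

import * as Alexa from 'ask-sdk-core';

// Hypothetical list mirroring the CAR_MAKERS slot values from the question.
const KNOWN_MAKERS = ['mercedes', 'bmw', 'volkswagen'];

const CountCarsIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'CountCarsIntent';
  },
  handle(handlerInput) {
    const maker = (Alexa.getSlotValue(handlerInput.requestEnvelope, 'CarMaker') || '').toLowerCase();

    if (!KNOWN_MAKERS.includes(maker)) {
      // The slot came back with something outside our list (e.g. "Ford"): re-prompt politely.
      return handlerInput.responseBuilder
        .speak("Sorry, we don't have that in our list. Try Mercedes, BMW or Volkswagen.")
        .reprompt('Which car maker did you mean?')
        .getResponse();
    }

    // Normal handling for a maker we support.
    return handlerInput.responseBuilder
      .speak(`Okay, adding cars to ${maker}.`)
      .getResponse();
  },
};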

Related

Why does Activity Pub make servers wrap "Note" objects into "Create" activities?

What I've gathered is that new posts are published by POSTing a JSON-LD Activity Streams object of type Note to an actor's outbox.
{"#context": "https://www.w3.org/ns/activitystreams",
"type": "Note",
"to": ["https://chatty.example/ben/"],
"attributedTo": "https://social.example/alyssa/",
"content": "Say, did you finish reading that book I lent you?"}
The server will then have to wrap it into an activity of type Create.
{"@context": "https://www.w3.org/ns/activitystreams",
"type": "Create",
"id": "https://social.example/alyssa/posts/a29a6843-9feb-4c74-a7f7-081b9c9201d3",
"to": ["https://chatty.example/ben/"],
"actor": "https://social.example/alyssa/",
"object": {"type": "Note",
"id": "https://social.example/alyssa/posts/49e2d03d-b53a-4c4c-a95c-94a6abf45a19",
"attributedTo": "https://social.example/alyssa/",
"to": ["https://chatty.example/ben/"],
"content": "Say, did you finish reading that book I lent you?"}}
I fail to see the usefulness of this, as the wrapping activity doesn't seem to add any useful data to the wrapped note. Worse even, it seems like it might introduce a fair bit of redundancy to the responses (in this basic example from the official page, actor and attributedTo, as well as the 2 to fields, have exactly the same purpose). Is this perhaps done just for consistency, as there are a few other activity types that are applied to notes, and for newly created posts having just a plain object (or a collection of plain objects) as a response would not fit this way of doing things?
Also, why are other activity types (e.g., Like) able to simply reference notes by id, while Create activities enclose that data directly? Is that required or is there a specific reason for it?
{"#context": "https://www.w3.org/ns/activitystreams",
"type": "Like",
"id": "https://social.example/alyssa/posts/5312e10e-5110-42e5-a09b-934882b3ecec",
"to": ["https://chatty.example/ben/"],
"actor": "https://social.example/alyssa/",
"object": "https://chatty.example/ben/p/51086"}
ActivityStreams is a protocol for synchronizing data between different databases and pieces of software.
Some activities carry the full detail (as with Notes) because we can create/update those objects, and an activity stream reader must know what changed in order to display the correct data to its users.
Some operations, like Like, have a named cancel operation (Unlike), so they don't need the Create/Update container; and because they operate on existing data, the unique ID of the resource concerned is all that's needed.
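For example, here is a hedged sketch (not from the question) of an Update that edits the Note from above: the full object travels with the activity so the receiving server knows the new content, whereas the Like only needs to carry the id of something that already exists.

{"@context": "https://www.w3.org/ns/activitystreams",
"type": "Update",
"actor": "https://social.example/alyssa/",
"to": ["https://chatty.example/ben/"],
"object": {"type": "Note",
"id": "https://social.example/alyssa/posts/49e2d03d-b53a-4c4c-a95c-94a6abf45a19",
"attributedTo": "https://social.example/alyssa/",
"to": ["https://chatty.example/ben/"],
"content": "Say, did you finish reading that book I lent you? I need it back."}}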

HATEOAS and forms driven by the API

I'm trying to apply HATEOAS to an existing application and I'm having trouble modeling form inputs that would be driven by the API response.
The app allows searching for and booking connections between two places. The first endpoint allows searching for connections, GET /connections?from={lat,lon}&to={lat,lon}&departure={dateTime}, and returns the following payload (response body).
[
{
"id": "aaa",
"carrier": "Fast Bus",
"price": 3.20,
"departure": "2019-04-05T12:30"
},
{
"id": "bbb",
"carrier": "Airport Bus",
"price": 4.60,
"departure": "2019-04-05T13:30"
},
{
"id": "ccc",
"carrier": "Slow bus",
"price": 1.60,
"departure": "2019-04-05T11:30"
}
]
In order to place an order for one of the connections, the client needs to make a POST /orders request with one of the following payloads (request body):
email required
{
"connectionId": "aaa",
"email": "passenger#example.org"
}
email & flight number required (carrier handles only airport connections)
{
"connectionId": "bbb",
"email": "passenger#example.org",
"flightNumber": "EA1234"
}
phone number required
{
"connectionId": "ccc",
"phoneNumber": "+44 111 222 333"
}
The payload is different because different connections may be handled by different carriers, and each of them may require a different set of information. I would like to inform the API client what fields are required when creating an order. The question I have is: how do I do this with HATEOAS?
I checked different specs and this is what I could tell from reading the specs:
HAL & HAL-FORMS There are "_templates", but there is no URI in the template itself. It’s presumed to operate on the self link, which in my case would be /connections... not /orders.
JSON-LD I couldn't find anything about forms or templates support.
JSON-API I couldn't find anything about forms or templates support.
Collection+JSON There is at most one "template" per document, therefore it's presumed that all elements of the collection have the same fields which is not the case in my app.
Siren Looks like the "actions" would fit my use case, but the project seems dead and there are no supporting libraries for many major languages.
CPHL The project seems dead, very little documentation and no libraries.
Ion There is nice support for forms, but I couldn't find any supporting libraries. Looks like it's just a spec for now.
Is such a common problem as having forms driven by the API still unsolved with spec and tooling?
In your example, it appears that Connections are resources. It's not completely clear if Orders are truly resources. I'm guessing probably yes, but to have an Order you need a Client and Connection. So, to create an Order you will need to expose a collection, likely from the Client or Connection, possibly both.
I think the disconnect is from thinking along the lines of "now that we've got a list of available connections, the client can select one and create an Order." That's perfectly valid, but it's remote procedure call (RPC) thinking, not REST. Neither is objectively better than the other, except in the context of a particular set of project requirements, and generally they shouldn't be mixed together.
With an RPC mindset, a create order method is defined (e.g. using OpenAPI) and any clients are expected to use some out-of-band information to determine the correct form required (i.e. by reading the OpenAPI spec).
With a REST/HATEOAS mindset, the correct approach would be to expose an Orders collection from Connection. Each Connection in the collection has a self link and an Orders collection (link or object, as defined by app requirements). Each Order item has a self link, and that is where the affordances are specified. An Order is a known type (even with REST/HATEOAS the client and service have to at least agree on a shared vocabulary) that the client presumably knows how to define. That vocabulary can be defined using any mechanism that works -- json-ld, XSD, etc.
HATEOAS requires that the result contains everything the client needs to update the state. There can be no out-of-band information (other than the shared vocabulary). So, to solve your issue, you either need to expose a collection of Orders from Connection or you need to allow an Order to be created by posting to Connection. If the latter seems like a bit of a hack, it probably is.
For example, in HAL-Forms, I would do something like:
{
"connections": [{
"id": "aaa",
"carrier": "Fast Bus",
"price": 3.20,
"departure": "2019-04-05T12:30"
"_links": {
"self": { ... }, // link to this connection
"orders": {} // link to collection of orders for this connection
}
},
...],
"_links": {
"self": { ... } // link to the collection
},
"_templates": { ... } // post/put/patch/delete connection
}
Clients would follow the links to orders and from there would get the _templates collection that contains the instructions for managing the Order resources. The Order POST would likely require a connection identifier and client information. The HAL-Forms Spec defines a regex property that can be used to specify the type of data to supply for any particular form element. Since you have reached the order by navigating through a specific connection, you would be able to specify in your _templates for that order exactly which fields are required. e.g. /orders?connectionType=aaa would return a different set of required properties than /orders?connectionType=bbb but both use the same self link of /orders?connectionType={type} and you'd validate it on POST/PUT/PATCH.
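As a hedged sketch (assuming the /orders?connectionType={type} URI above and HAL-Forms' default template; the property names mirror the order payloads from the question), the orders resource reached from connection "bbb" might advertise its form like this:

{
  "_links": {
    "self": { "href": "/orders?connectionType=bbb" }
  },
  "_templates": {
    "default": {
      "title": "Create order",
      "method": "POST",
      "contentType": "application/json",
      "properties": [
        { "name": "connectionId", "required": true, "value": "bbb" },
        { "name": "email", "required": true, "regex": "^.+@.+$" },
        { "name": "flightNumber", "required": true, "regex": "^[A-Z]{2}[0-9]{1,4}$" }
      ]
    }
  }
}

The same resource for connection "ccc" would list phoneNumber as its only required property, so the client discovers each carrier's form at run time instead of hard-coding it.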
I should note that Spring HATEOAS goes beyond the HAL-Forms spec and allows for multiple _links and _templates. See this GitHub issue.
It may look like HATEOAS/REST requires quite a bit more work than a simple OpenAPI/RPC API and it does. But what you are giving up in simplicity, you are gaining in flexibility and resilience, assuming well-designed clients. Which approach is correct depends on a lot of factors, most of them not technical (team skills, expected consumers, how much control you have over clients, maintenance, etc.).

Using search.in with all

The following statement finds all profiles that have Facebook or Twitter, and this works:
$filter=SocialAccounts/any(x: search.in(x, 'Facebook,Twitter'))
But I can't find any samples for finding all profiles that have both Facebook and Twitter. I tried:
$filter=SocialAccounts/all(x: search.in(x, 'Facebook,Twitter'))
But this is not a valid query.
Azure Search does not support the type of ‘all’ filter that you’re looking for. Using search.in with ‘all’ would be equivalent to using OR, but Azure Search can only handle AND in the body of an ‘all’ lambda (which is equivalent to OR in the body of an ‘any’ lambda).
You might try a workaround like this:
$filter=tags/any(t: t eq 'Facebook') and tags/any(t: t eq 'Twitter')
However, this isn't actually equivalent to using all with search.in. The query as expressed using all is matching documents where every social account is strictly either Facebook or Twitter. If any other social account is present, the document won’t match. The workaround doesn’t have this property. A document must have at least Facebook and Twitter in order to match, but not exclusively those. This is certainly a valid scenario; it just isn't the same as using all with search.in, which was the original question.
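To make the difference concrete (hypothetical documents, using the tags field from the workaround):
A document with tags = ['Facebook', 'Twitter'] matches both the workaround and the intended all query.
A document with tags = ['Facebook', 'Twitter', 'LinkedIn'] matches the workaround, but would not match tags/all(t: search.in(t, 'Facebook,Twitter')), because 'LinkedIn' fails the search.in test.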
No matter how you try to rewrite the query, you won’t be able to express an equivalent to the all query. This is a limitation due to the way Azure Search stores collections of strings and other primitive types in the inverted index.
Please vote on user voice to help prioritize:
https://feedback.azure.com/forums/263029-azure-search/suggestions/37166749-efficient-way-to-express-a-true-all
A possible workaround is to use the new Complex Types feature, which does allow more expressive filters inside lambda expressions. For example, if you model tags as objects with a single value property instead of as a collection of strings, you should be able to execute a filter like this:
$filter=tags/all(t: search.in(t/value, 'Facebook,Twitter'))
In the REST API, you'd define tags like this:
{
"name": "myindex",
"fields": [
...
{
"name": "tags",
"type": "Collection(Edm.ComplexType)",
"fields": [
{ "name": "value", "type": "Edm.String", "filterable": true }
]
}
]
}
Note that this feature is in preview at the time of this writing, but will be generally available (and publicly documented) soon.

How can Alexa take a slot-only utterance?

I'm trying to write my first Alexa skill, but the application flow is a bit confusing, even reading all the documentation about dialogue delegation etc etc. I'd really love a bit of advice.
The Flow I'm Pursuing
"Alexa, start Movietime Quiz."
Welcome to Movietime Quiz. Before we begin, what team are you on: red or blue?
"Blue."
Blue was always the best team. Question 1: which of these films was not directed by Alfred Hitchcock? A: Vertigo, B: Rope, C: Happy Gilmore.
"C."
Correct! 10 points to the blue team. Question 2...
This is a boiled-down example to illustrate my problem in the shortest, clearest way, before you wonder why teams need to be involved in this.
My Instinct/Naive Approach
Have the initial launch-request handler say welcome-and-what-team, and then have two intents. The first would obviously be AnswerQuestionIntent, which listens for "A", "B", "C" or "D." The second would be SetTeamIntent, which listens for "red" or "blue."
I'd have an array with ~100 trivia questions. When the game starts, set a session attribute 'currentQuestion' to 0. In AnswerQuestionIntent, after handling the user's correct/incorrect response, increment that number, and if it's at 9, end the game; if not, ask a random question.
My Problem
I can't actually figure out how to have Alexa use a single slot as an utterance. I mean, I'd want to have a 'team' slot type (values 'red' and 'blue') and an 'answer' slot type (values 'A', 'B', 'C', and 'D'). SetTeamIntent should be activated by the utterance {team} and AnswerQuestionIntent by {answer}, but the developer.amazon.com skill builder gives me 'Bad Request' errors when I try to set that.
I tried looking at the SDK examples on GitHub, but I'm a bit lost because I've been using the GUI skill builder while learning and am not sure exactly how it maps -- not well enough to read the solution, anyway.
There are two different ways to handle this.
1. ElicitSlot Directive WITH Dialog Model
After you launch your skill and trigger an intent, you can respond with an ElicitSlot directive.
Interaction Model: You define the slots and an intent, for example {team} and {answer} in a PlayGameIntent. Provide utterances for the intent to get triggered, for example "start a game".
Skill: After PlayGameIntent is triggered, return a response with an ElicitSlot directive, something like the following.
{
"version": "1.0",
"sessionAttributes": {},
"response": {
"outputSpeech": {
"type": "PlainText",
"text": "What team are you on? Blue or Red? "
},
"shouldEndSession": false,
"directives": [
{
"type": "Dialog.ElicitSlot",
"slotToElicit": "team",
"updatedIntent": {
"name": "PlayGameIntent",
"confirmationStatus": "NONE",
"slots": {
"team": {
"name": "team",
"confirmationStatus": "NONE"
},
"answer": {
"name": "answer",
"confirmationStatus": "NONE"
}
}
}
}
]
}
}
The user can now provide a value for the {team} slot, and Alexa sends another IntentRequest for PlayGameIntent. You re-elicit as many times as you need until your game is finished.
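If you build your responses with the ASK SDK for Node.js instead of writing the JSON by hand, the same directive can be produced with the response builder. A minimal sketch in TypeScript, assuming ask-sdk-core and the PlayGameIntent above:

import * as Alexa from 'ask-sdk-core';

const PlayGameIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'PlayGameIntent';
  },
  handle(handlerInput) {
    const team = Alexa.getSlotValue(handlerInput.requestEnvelope, 'team');

    if (!team) {
      // No team yet: ask for it and keep the dialog open.
      // (An updatedIntent can be passed as the second argument, as in the JSON above.)
      return handlerInput.responseBuilder
        .speak('What team are you on? Blue or red?')
        .addElicitSlotDirective('team')
        .getResponse();
    }

    // The team slot is filled: in real code you'd check the {answer} slot, track the
    // current question in session attributes, and keep eliciting {answer}.
    return handlerInput.responseBuilder
      .speak(`${team} was always the best team. Question 1: ...`)
      .addElicitSlotDirective('answer')
      .getResponse();
  },
};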
2. Custom Intents WITHOUT Dialog Model
Without using the Dialog Model there is no restriction against slot-only utterances, so you can build your intent schema as you described. If you leave the Skill Builder Beta, you automatically disable the Dialog Model for your interaction model.
You can then build an intent schema with sample utterances like this:
AnswerQuestionIntent {answer}
SetTeamIntent {team}
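The matching custom slot types (values taken from the flow above; the type names are just placeholders) would be something like:
ANSWER_TYPE A | B | C | D
TEAM_TYPE red | blue
with {answer} declared as ANSWER_TYPE in AnswerQuestionIntent and {team} as TEAM_TYPE in SetTeamIntent.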

giving a string slot in Amazon alexa

I'm new to Alexa. I've been learning and started to build a weather app.
Right now I'm able to get weather data, but only under the condition below.
I've created a custom slot (LIST_OF_CITIES) to hold cities, as below.
{
"intents": [
{
"intent": "WeatherIntent",
"slots": [
{
"name": "city",
"type": "LIST_OF_CITIES"
}
]
},
{
"intent": "AMAZON.HelpIntent"
}
]
}
and in my custom slot I defined the values as below.
Type Values
LIST_OF_CITIES Hyderabad | pune | london
and below are my Utterances
WeatherIntent give me {city} climate
WeatherIntent {city}
WeatherIntent what's the climate in {city}
WeatherIntent what's the weather in {city}
WeatherIntent {city}
When I run my program using any of the three cities mentioned in the above table, I'm able to get the correct result. If I use anything apart from the above, it sends back the value -4.
If I want to get the temperature of some other city, I need to add that city to the slot list.
Please let me know how I can get the values dynamically, I mean without depending on LIST_OF_CITIES; if I enter a city name, it should send back the result.
I also tried setting the type as LITERAL and as AMAZON.LITERAL. When I saved it, I got this error:
Error: There was a problem with your request: Unknown slot name '{city}'. Occurred in sample 'WeatherIntent get me weather of {city}' on line 1.
Please let me know where I am going wrong and how I can fix this.
Thanks
Amazon provides some built-in list slot types for cities or even regions, e.g.
AMAZON.US_CITY
AMAZON.AT_CITY
AMAZON.DE_REGION
...
You can use these as the type when defining your slot.
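For example, the intent schema from the question could reference one of these built-in types directly (a sketch; note that each of these types covers a single country):

{
  "intents": [
    {
      "intent": "WeatherIntent",
      "slots": [
        {
          "name": "city",
          "type": "AMAZON.US_CITY"
        }
      ]
    },
    {
      "intent": "AMAZON.HelpIntent"
    }
  ]
}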
First, don't use LITERAL - it is deprecated and isn't supported at all outside the US region.
And no, you can't manage the list of words dynamically.
Alexa will try to match what the user says with your LIST_OF_CITIES, and will try to return one of those words, but might return something else if it can't match one of those (as you have seen).
There are some custom slot types for cities that you can use and build off of, see here:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference#h2_custom_syntax
But that probably won't work for you since each of them is just one country, so you will need to build your own list of cities (in your LIST_OF_CITIES).
