Can you allow a user to REPLY to a card and not have a card appear with their response text in the timeline? - google-mirror-api

When I have a bundle (perhaps this also occurs with a single timeline card) with a REPLY action and the user executes that action, saying, let's say, "peanut butter and jelly sandwich", a new timeline card appears on Glass with the text "peanut butter and jelly sandwich" in white on a black background. Looking at the playground, that same card appears with the user's avatar on the left (like the Abe Lincoln template example) and the text on the right.
Let's call this new card the reminder card, as it reminds the user what text they spoke and allowed to be sent.
I did not insert that reminder card into the timeline.
Is it default Glass behavior for the REPLY action to insert a reminder that the user spoke some text? Does this count against our API tally, or is it a freebie charged against some Google account?
Is there a way to use the REPLY action and apply some kind of undocumented attribute to prevent the display of this reminder card?
There is documentation here that seems to encourage using REPLY only for free-form input, which could be motivated by this reminder card behavior:
REPLY, REPLY_ALL - Voice replies are intended to capture free form input by voice. Do not use voice replies to capture a limited set of options, such as possible moves in a game.
Source
Here is code to reproduce the problem in Java, nothing complicated:
menuItemList.add(new MenuItem().setAction("REPLY"));
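For context, here is a hedged sketch of how the full card might be created and inserted with the Mirror API Java client; the service variable and card text are illustrative assumptions, not part of the original report:

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.MenuItem;
import com.google.api.services.mirror.model.TimelineItem;
import java.util.ArrayList;
import java.util.List;

// "service" is assumed to be an already-authorized Mirror client.
List<MenuItem> menuItemList = new ArrayList<MenuItem>();
menuItemList.add(new MenuItem().setAction("REPLY"));

TimelineItem item = new TimelineItem()
    .setText("What would you like for lunch?")  // hypothetical card text
    .setMenuItems(menuItemList);

service.timeline().insert(item).execute();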

The "REPLY" timeline item is automatically inserted by the Glass client and its ownership is set to your Glassware: this means that you have full read/write access to this timeline item.
It is up to your Glassware to process the timeline item and apply any styling. The item is also useful to the user, since it lets them "DELETE" the reply if necessary.
If letting the user delete the reply does not make sense in your Glassware, feel free to delete the timeline item once you have processed it, removing it from the user's timeline.
Regarding API quota: calls count only when you send an actual request to the API, such as retrieving the timeline item. Glass creating the "REPLY" item does not consume your quota.
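For instance, a minimal sketch of the kind of call that does count, fetching the reply's text once you know its ID (service and itemId are assumptions; itemId would come from the notification Glass sends):

import com.google.api.services.mirror.model.TimelineItem;

// "service" is an authorized Mirror client; "itemId" identifies the reply card.
TimelineItem reply = service.timeline().get(itemId).execute();
String spokenText = reply.getText();  // e.g. "peanut butter and jelly sandwich"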

Standard practice is for your Glassware to either UPDATE the reply after you process it, or to DELETE the reply if it is no longer required. Either may make sense depending on the exact context of how the reply is handled. It may even make sense to add this existing reply to another bundle that you control.
As Alain noted, there is no quota on the number of cards that exist, just on the number of API operations you perform. Glass creating the reply isn't an operation you perform, so it doesn't count against the quota.
Finally, although not completely related, it is worth noting that this also happens when you SHARE a card: a copy of the card is made and you're given full access to that new card. Your application may take any actions on this new card that you wish.
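As an illustration of the UPDATE path, here is a hedged sketch that restyles the reply and folds it into a bundle you control; the HTML and bundle ID are hypothetical:

import com.google.api.services.mirror.model.TimelineItem;

// Fetch the auto-created reply, restyle it, and attach it to a bundle.
// "service" and "itemId" are assumptions for illustration.
TimelineItem reply = service.timeline().get(itemId).execute();
reply.setHtml("<article><p>You said: " + reply.getText() + "</p></article>");
reply.setBundleId("lunch-orders");  // hypothetical bundle ID you control
service.timeline().update(itemId, reply).execute();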

Actually (at least in my opinion), the main reason the card is there is so that your application can access the reply text somewhere. Replying won't add any extra information to the original card; instead it creates a new card with the text of the reply, and the itemId of this new card is sent in a notification to your subscription.
What you could do is delete the card once you have received the notification and handled the reply. The card is completely under your control, so you can manipulate it however you like.
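A minimal sketch of that flow, assuming a javax.servlet callback registered as your subscription endpoint; the servlet name and the getMirrorService() helper are hypothetical:

import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.Notification;
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class NotifyServlet extends HttpServlet {
  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    // Parse the notification Glass sends when the user replies.
    Notification notification =
        new JacksonFactory().fromInputStream(req.getInputStream(), Notification.class);
    String itemId = notification.getItemId();

    // ... fetch the reply text and handle it here ...

    // Then remove the auto-created reply card from the user's timeline.
    getMirrorService().timeline().delete(itemId).execute();
    resp.setStatus(200);
  }

  // Hypothetical helper that returns an authorized Mirror client.
  private Mirror getMirrorService() {
    throw new UnsupportedOperationException("Build an authorized Mirror client here.");
  }
}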

Related

Porting an Alexa Skill - completing or continuing the dialog

I have a skill in Alexa, Cortana, and Google, and in each case there is a concept of terminating the flow after speaking the result or keeping the mic open to continue the flow. The skill mostly consists of an HTTP API call that returns the information to speak and display, plus a flag indicating whether or not to continue the conversation.
In Alexa, the flag returned from the API call and passed to Alexa is called shouldEndSession. In Google Assistant, the flag is expect_user_response.
So in my code folder, the API is called from the Javascript file and returns a JSON object containing three elements: speech (the text to speak - possibly SSML); displayText (the text to display to the user); and shouldEndSession (true or false).
The action calls the JavaScript code with type Search and a collect segment. It then outputs the JSON object mentioned above. This all works fine except I don't know how to handle the shouldEndSession. Is this done in the action perhaps with the validate segment?
For example, "Hi Bixby, ask car repair about changing my tires" would respond with the answer and be done. But something like "Hi Bixby, ask car repair about replacing my alternator". In this case, the response may be "I need to know what model car you have. What model car?". The user would then say "Toyota" and then Bixby would complete the dialog with the answer or maybe ask for more info.
I'd appreciate some pointers.
Thanks
I think this can easily be done in Bixby with an input prompt, which fires when a required input is missing. You can also build an input-view to further enhance the user experience.
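For illustration, a hedged sketch of what that might look like in a Bixby action model; the action, input, and concept names are hypothetical:

action (FindRepairInfo) {
  type (Search)
  collect {
    // When a Required input is missing from the utterance, Bixby
    // prompts the user for it (e.g. "What model car?") before running.
    input (carModel) {
      type (CarModel)
      min (Required) max (One)
    }
  }
  output (RepairInfo)
}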
To start building the capsule, I would suggest the following:
Learn more about Bixby on https://bixbydevelopers.com/dev/docs/dev-guide
Try some sample capsules and watch some tutorial videos on https://bixbydevelopers.com/dev/docs/sample-capsules
If you have a Bixby enabled Samsung device, check our marketplace for ideas and inspirations.

IntentRequest triggered by Response - without user-invocation

Let's say I have a skill with two custom intents, 'FirstIntent' and 'SecondIntent'. SecondIntent also has a required slot, 'reqSlot'.
Now I would like to sequence my intents. After my skill has sent the FirstIntent response, I would like Alexa to send a request with SecondIntent and a directive to elicit reqSlot, without the user having to invoke it.
The documentation says here, for the parameter 'updatedIntent':
"Note that you cannot change intents when returning a Dialog directive, so the intent name and set of slots must match the intent sent to your skill."
Is this generally possible, or did anyone figure out a workaround for this scenario?
Thanks :)
There are ways to handle this. You can try the following:
When you send your first response, set the shouldEndSession flag to false.
The end of your first response's output speech should lead the user into invoking the second intent. For example: 'Say telephone number, followed by your number.'
This way the user doesn't need to explicitly invoke your skill to get to the next intent.
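A minimal sketch of that pattern with the Alexa Skills Kit SDK v2 for Java; the speech strings are illustrative, and input is the HandlerInput your intent handler receives:

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.model.Response;
import java.util.Optional;

// Inside the FirstIntent handler: keep the session open and steer the
// user toward the utterance that triggers SecondIntent.
public Optional<Response> handle(HandlerInput input) {
  return input.getResponseBuilder()
      .withSpeech("Done. Now say: telephone number, followed by your number.")
      .withReprompt("Say telephone number, followed by your number.")
      .withShouldEndSession(false)
      .build();
}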
It is not currently possible to cause Alexa to start speaking without a user first having spoken to it.
So for example, I cannot create a skill that will announce to my wife that "Ron is on his way home" whenever I leave work.
I can however, create a skill that my wife can ask "Is Ron on his way home", and it can respond with the answer.
Also, the new notifications feature allows a skill to post a notification, but this just causes the Alexa device to light up its circular ring to indicate that a notification is waiting. A user must still ask Alexa to read it. In the example I cite above, that might be OK.
A lot of us would love for Alexa to be able to spontaneously start talking, but Amazon has not enabled that. Can you just imagine the opportunity for advertising abuse that functionality might enable? Nothing like sitting down watching TV and having Alexa start talking, "Hey, wouldn't some Wonder Popcorn taste great about now? We're having a sale..."

Clear previous Alexa Cards

Is there a way to tell Alexa to remove previous cards when I send a new one?
I have a skill that sends a kind of status card each time you run a command, and the typical use case is to do multiple actions in a session, each of which I'd like to send a card for. It gets really cluttered, since they all just add on to each other. I'd like to either update the first card, or remove it and make a new one each time.
That is not possible with the current API. I think anyone using skills is already used to the flood of cards that come after each interaction.

Watson Dialog Auto Learn

I have seen some references to the Auto-learn function of Watson Dialog but I can't find coverage in any of the documentation. Can you point me to a source of information on how best to use Auto-Learn?
Thank you for your feedback; we are always working to improve our documentation.
For your immediate benefit: auto-learn is a bit of a misleading name for the feature, but the name has stuck.
Auto-learn has become the "Did you mean..." prompt with four bullet points that shows when a user sends an input that has no direct matches.
A little history: at one time, we thought that if a user typed something, saw the "Did you mean..." options, and clicked a link, their initial input must have meant the same thing as the option they clicked, so the system should automatically remember that.
Imagine this:
"What are your credit card fees?"
Did you mean... 1. apply for a credit card 2. cancel a credit card 3. pay your bill
The user clicks "apply for a credit card".
The user was simply interested in that option, but obviously the two inputs DO NOT have the same meaning. So we realized that was a bad idea: the system would learn incorrectly. However, we still call it auto-learn.

Mirror API latency when sending something to a timeline

It seems that sometimes timeline items (just text) arrive instantly and other times they take forever... Is there a way to send one at precisely the right time?
You can send the notification at a precise time.
timelineItem.getNotification()
.setDeliveryTime(new DateTime(oneMinuteInFuture.getTime()));
That's a Java example, where oneMinuteInFuture is a Calendar object set to one minute from now.
What happens when you do this is that the card is inserted into the timeline immediately, but the notification is delayed until the specified time. So the card goes in right away, and one minute later I get a chime.
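Putting it together, a fuller hedged sketch of that snippet; "service" is assumed to be an authorized Mirror client, and the card text is hypothetical:

import java.util.Calendar;
import com.google.api.client.util.DateTime;
import com.google.api.services.mirror.model.NotificationConfig;
import com.google.api.services.mirror.model.TimelineItem;

// Deliver the notification one minute from now; the card itself
// still appears in the timeline immediately.
Calendar oneMinuteInFuture = Calendar.getInstance();
oneMinuteInFuture.add(Calendar.MINUTE, 1);

TimelineItem timelineItem = new TimelineItem()
    .setText("Hello, delayed chime")  // hypothetical card text
    .setNotification(new NotificationConfig()
        .setLevel("DEFAULT")
        .setDeliveryTime(new DateTime(oneMinuteInFuture.getTime())));

service.timeline().insert(timelineItem).execute();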
There is an unaccepted issue related to this in the issue tracker that you might want to star and follow; it appears that this functionality might change in the future.
