What gets shared? - google-mirror-api

Something is unclear to me regarding the share action.
Does the sharing endpoint (Google+ contacts, or other subscribed Glassware) receive the timeline item with whatever it contains (HTML, attachment, ...)?
Example:
If I have this item in my timeline...
{
  "menuItems": [{"action": "SHARE"}],
  "html": "<div>a beautiful HTML card</div>",
  "location": {
    "latitude": 40.702587,
    "timestamp": "2013-05-20T19:22:56.164600",
    "displayName": "KLOMPCHING GALLERY",
    "longitude": -73.98926,
    "address": "111 Front Street"
  },
  "id": 42004,
  "isDeleted": false,
  "kind": "mirror#timelineItem"
}
...and I click on share: would the sharing endpoint receive this same JSON object?

By inserting a timeline card with the SHARE menu item, you let the user share this card with other Glassware that supports this type of card (e.g. image, video, link, etc.).
When a user decides to share this card with another Glassware, this is what happens:
1. Glass creates a copy of the timeline card (including attachments, HTML, text, recipients, etc.).
2. Glass sets the ownership of the copy to the Glassware it is shared with.
3. If the Glassware is subscribed to notifications, the Mirror API sends a notification to the Glassware: it is now up to the Glassware to process the SHARE action.
You can check out the Google I/O session Building Glass Services with the Mirror API that we did last week to learn more about the sharing model.
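To make step 3 concrete, here is a minimal sketch of what the subscribed Glassware's callback might look like. This is an illustration only: the Flask app, the /notify route, and the get_mirror_service() credential helper are assumptions, not part of the question, while the notification fields (collection, itemId, userActions, userToken) follow the Mirror API notification format.

# Hypothetical webhook sketch: a Glassware endpoint receiving a Mirror API
# share notification. Flask and get_mirror_service() are assumed helpers.
from flask import Flask, request

app = Flask(__name__)

@app.route("/notify", methods=["POST"])
def notify():
    notification = request.get_json()
    shared = any(a.get("type") == "SHARE" for a in notification.get("userActions", []))
    if notification.get("collection") == "timeline" and shared:
        # The copied card is now owned by this Glassware, so we can fetch it.
        service = get_mirror_service(notification["userToken"])  # assumed auth helper
        item = service.timeline().get(id=notification["itemId"]).execute()
        print(item.get("html"), item.get("attachments"))
    return "", 200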

Related

Azure Logic Apps and Microsoft Forms - Get field descriptors

I have a Logic App that retrieves the responses submitted by users through Microsoft Forms.
When I look at the Logic App run, I can see the descriptor for each field (MuleSoft, IoT & Integration, Encuesta de tecnologías, ...), for example:
But in the "Show raw outputs" I can't see those fields; I only get an identifier (rcb6ccf0fc9e44f74b44fa2715fec4f27, ...):
How can I retrieve those descriptors?
The solution is to add a 'Send an HTTP request to SharePoint' action to get the details of the form.
The Site Address is: https://forms.office.com
The Method is: GET
The Uri is: /formapi/api/forms('')?select=id,title,questions&$expand=questions($expand=choices)
This returns a JSON with all the questions and, for each question, the ID, the Title, and more info about the question.
We can implement a loop through these questions and, with each ID, extract the response from Microsoft Forms:
"foreach": "@body('Send_an_HTTP_request_to_SharePoint')['questions']"
And Compose the result:
"Compose": {
"inputs": {
"Id": "#{items('For_each')['id']}",
"Name": "#items('For_each')['title']",
"Value": "#{body('Get_response_details')[item()['id']]}"
},
"runAfter": {},
"type": "Compose"
}
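For reference, the same ID-to-title mapping can be sketched outside Logic Apps in a few lines of Python. This is a rough illustration only; the form ID and bearer token placeholders are assumptions about how you authenticate against forms.office.com.

# Rough sketch: fetch the form's questions and map question IDs to titles.
# FORM_ID and TOKEN are hypothetical placeholders; authentication varies.
import requests

FORM_ID = "<your-form-id>"
TOKEN = "<bearer-token>"

resp = requests.get(
    f"https://forms.office.com/formapi/api/forms('{FORM_ID}')",
    params={"$select": "id,title,questions", "$expand": "questions($expand=choices)"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
titles = {q["id"]: q["title"] for q in resp.json()["questions"]}

# response_details stands in for the output of 'Get response details';
# its keys are question ids like rcb6ccf0fc9e44f74b44fa2715fec4f27.
response_details = {}
for question_id, answer in response_details.items():
    print(titles.get(question_id, question_id), "=", answer)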
These are field identifiers. You can retrieve them directly from the Dynamic content of Get response details.
Alternatively, you can build your own JSON body (in your case, from Get response details) with the Compose connector.

Alexa model evaluation works great but intent is never called in simulator or on Alexa device

I'm struggling to build my Alexa interaction model. My application is used for requesting live data from a smart home device. All I do is basically call my server API with a username & password, and I get a value in return. My interaction model works perfectly for requesting the parameters; for example, I can say "Temperature" and it works perfectly fine across all testing devices. For that intent I have a custom RequestType.
However, for setting up the username & password I need to use a built-in slot type: AMAZON.NUMBER. As I only need numbers for my credentials, this should work perfectly fine in theory.
I have an interaction model set up that works perfectly fine when I press "Evaluate Model" in the Alexa developer console. However, once I go to Test in the simulator, or to my real Alexa device, it's absolutely impossible to call the intent. It always calls one of my other intents (I can see this in the request JSON).
Here's how the intent looks:
{
  "name": "SetupUsername",
  "slots": [
    {
      "name": "username",
      "type": "AMAZON.NUMBER"
    }
  ],
  "samples": [
    "my user id is {username}",
    "username to {username}",
    "set my username to {username}",
    "set username to {username}",
    "user {username}",
    "my username is {username}",
    "username equals {username}"
  ]
}
Whatever I say or type in the simulator, I cannot call this intent. I have no overlaps with other intents. Does anyone see an issue here?
Thank you in advance.
EDIT: I just realized that if you want to do account linking on Alexa you need to implement OAuth2. Maybe my intents are never called because Amazon wants to keep me from implementing my own authentication?
UPDATE:
This is the intent that is usually called instead; it's my init intent. So, for example, if I say "my username is 12345", the following intent is called:
UPDATE 2:
Here is my full interaction model.
(HelpIntent and SetPassword are only for testing purposes; they don't make sense right now.)
It's impossible to call SetupUsername with any of the samples in my model.
You need to build the interaction model. Saving is not enough.
When you develop your interaction model, you have to save it AND build it. Otherwise only the model evaluation will work (see the documentation about it).
Then when you test in the test console, you should see in the JSON Input, at the bottom, which intent was called:
"request": {
"type": "IntentRequest",
"requestId": "xxxx",
"locale": "en-US",
"timestamp": "2021-10-20T14:38:59Z",
"intent": {
"name": "HelloWorldIntent", <----------- HERE
"confirmationStatus": "NONE"
}
}
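Once the model is built and the JSON input shows SetupUsername, reading the slot in the skill code is straightforward. Here is a minimal, hypothetical handler sketch using the ASK SDK for Python; the SDK choice and the handler name are assumptions, not from the question.

# Hypothetical handler for the SetupUsername intent (ask-sdk-core assumed).
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class SetupUsernameHandler(AbstractRequestHandler):
    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("SetupUsername")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        slots = handler_input.request_envelope.request.intent.slots
        username = slots["username"].value  # AMAZON.NUMBER arrives as a string
        return handler_input.response_builder.speak(
            f"I set your username to {username}"
        ).response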

Can I determine whether an Alexa Request was triggered by a routine or a user?

I have a need to differentiate between an explicit request and a request from a routine.
Here is an example. Let's say I am controlling a smart light. The light is able to detect occupancy.
If a user comes into the room and says "turn on the light", it will check occupancy and turn off.
However, if the user creates a scheduled routine to turn the light on, we should disable the occupancy check.
I don't see anything in the documentation for the TurnOn Directive that would indicate the source of the request.
Is there an indicator that I missed? Can I add some indicator? Or has anyone used a different approach to accomplish similar functionality?
The official response from Amazon is that you can't tell the difference. Here is a recent response from Amazon's Alexa developer forum: https://forums.developer.amazon.com/questions/218340/skills-invoking-routines.html
That said, you will generally see additional fields in the launch request if it is launched from a Routine:
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.abunchofnumbers",
"timestamp": "2020-01-18T22:27:01Z",
"locale": "en-US",
"target": {
"path": "AMAZON.Launch",
"address": "amzn1.ask.skill.abunchofnumbers"
},
"metadata": {
"referrer": "amzn1.alexa-speechlet-client.SequencedSimpleIntentHandler"
},
"body": {},
"payload": {},
"targetURI": "alexa://amzn1.ask.skill.abunchofnumbers/AMAZON.Launch",
"launchRequestType": "FOLLOW_LINK_WITH_RESULT",
"shouldLinkResultBeReturned": true
}
The target, metadata, body, payload, targetURI, and launchRequestType fields are generally not found when a user launches a skill with their voice. HOWEVER, I do not believe the presence of these fields is unique to being launched by an Alexa Routine. I suspect you'll also find them if the skill was launched when, for example, Alexa asks, "Hey, since you like the Blind Monkey skill, would you like to try Blind Pig?" and you say "yes."
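If you choose to rely on that fragile signal anyway, the check itself is small. This is a heuristic sketch only, and per the caveat above these fields are not guaranteed to mean "launched by a routine":

# Heuristic sketch only: these marker fields also appear for other
# link-style launches, so treat the result as a guess, not a fact.
def looks_like_routine_launch(request: dict) -> bool:
    marker_fields = {"target", "metadata", "targetURI", "launchRequestType"}
    return request.get("type") == "LaunchRequest" and marker_fields.issubset(request)

# With the launch request JSON above:
# looks_like_routine_launch(event["request"])  -> True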

Passing selected button value from FB Messenger to Dialogflow

I am failing to understand the simple passing of parameters to and fro via a webhook. I am trying to build a simple bot using Dialogflow and FB Messenger. I have a requirement to show two buttons to the end user to pick a cake type. I am able to show the options using the below custom response:
{
  "facebook": {
    "attachment": {
      "type": "template",
      "payload": {
        "template_type": "button",
        "text": "What kind of cake would you like?",
        "buttons": [
          {
            "type": "postback",
            "payload": "witheggs",
            "title": "Contain Eggs"
          },
          {
            "type": "postback",
            "payload": "noeggs",
            "title": "Eggless"
          }
        ]
      }
    }
  }
}
Once the user taps one of the two buttons, how do I set it to some variable in Dialogflow and then ask the next set of questions?
I guess you're missing a few steps here. Before I explain what to do, be sure you know what a postback is: when a postback button is tapped, its payload text is sent as a user query to dialogflow.com.
Step-1: I created an intent with a custom payload as follows:
Step-2: Now, I created a new intent where I entered the user says phrase noeggs, matching the payload noeggs of the postback button in the previous image.
Step-3: Save & test it in FB Messenger.
So basically, what has happened here is: when you click on the Eggless button, the postback noeggs is sent as a user query to dialogflow.com, where an intent matches the user says phrase noeggs & sends the response back.
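If you handle the choice in a fulfillment webhook instead of (or in addition to) a dedicated intent, the postback payload shows up as the query text. Below is a rough Flask sketch of that idea; the route name and reply texts are made up, while the request/response shape follows the Dialogflow v2 webhook format.

# Rough sketch of a Dialogflow v2 fulfillment webhook: the tapped button's
# postback payload ("noeggs" / "witheggs") arrives as queryResult.queryText.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    query_text = body["queryResult"]["queryText"]  # e.g. "noeggs"
    if query_text == "noeggs":
        reply = "Great, an eggless cake. What flavor would you like?"
    elif query_text == "witheggs":
        reply = "Got it, a cake with eggs. What flavor would you like?"
    else:
        reply = "What kind of cake would you like?"
    return jsonify({"fulfillmentText": reply})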

Can't get all busy times of meeting rooms using the Office365 Calendar API

Trying to fetch calendar events in order to allow a user to pick a time for a meeting where the attendees and the meeting room are available.
We're using the Outlook Calendar REST API v2 - findMeetingTimes:
https://msdn.microsoft.com/en-us/office/office365/api/calendar-rest-operations#FindMeetingTimes
The request returns almost all of the events.
For some reason, events that were created by the user who executes the request are not included in the response. This means the meeting room or attendee appears as FREE even though they have an event in their calendar.
Here's a sample request with only the meeting room as an attendee. We see the same problematic behavior when requesting events for both meeting rooms and users.
https://outlook.office.com/api/v2.0/users('user@companyname.onmicrosoft.com')/findmeetingtimes
{
  "Attendees": [{
    "Type": "Required",
    "EmailAddress": {
      "Name": "Palo Alto meeting room",
      "Address": "paloalto@companyname.onmicrosoft.com"
    }
  }],
  "TimeConstraint": {
    "Timeslots": [{
      "Start": {
        "DateTime": "2017-02-11T22:00:00",
        "TimeZone": "GMT Standard Time"
      },
      "End": {
        "DateTime": "2017-04-29T20:59:59",
        "TimeZone": "GMT Standard Time"
      }
    }]
  },
  "LocationConstraint": {
    "IsRequired": "false",
    "SuggestLocation": "false",
    "Locations": [{
      "ResolveAvailability": "false",
      "DisplayName": "Palo Alto meeting room",
      "LocationEmailAddress": "paloalto@companyname.onmicrosoft.com"
    }]
  },
  "MinimumAttendeePercentage": "0",
  "MaxCandidates": "1000",
  "ReturnSuggestionReasons": "true"
}
Any help will be much appreciated.
OK, so to clarify (and this is the key point I missed at first): the problem you're having is that the appointment booked by the authenticated user, with the conference room as its location, does NOT cause an entry to show up in the FindMeetingTimes response. (At first I thought you were saying it was showing as Free!)
This is correct behavior. FindMeetingTimes is not meant to return an exhaustive list of free/busy results. Rather, it's meant to find a potential meeting time! The list is based on the availability of the organizer (the authenticated user) and the specified attendees. Because both the organizer AND the room are busy (the organizer already has an appointment booked in the room), the time slot isn't even presented. When you make the request as another user, that user is the organizer, and since they are free at that time, the slot is presented as a possible time.
So I may be misunderstanding what you're trying to do, but this should work for you. As long as you're only presenting the returned times as possibilities, there is no potential for conflict.
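For anyone reproducing this, here is a rough Python sketch of the same findMeetingTimes call. The bearer token is a placeholder (acquiring it via OAuth2 is out of scope here), and the fields read from the response follow the documented MeetingTimeSuggestions shape.

# Rough sketch: POST a findMeetingTimes body like the one above and
# list the suggested slots. TOKEN is a hypothetical placeholder.
import requests

TOKEN = "<bearer-token>"
url = "https://outlook.office.com/api/v2.0/me/findmeetingtimes"
body = {
    "Attendees": [{
        "Type": "Required",
        "EmailAddress": {
            "Name": "Palo Alto meeting room",
            "Address": "paloalto@companyname.onmicrosoft.com"
        }
    }],
    "MaxCandidates": 20
}
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
for suggestion in resp.json().get("MeetingTimeSuggestions", []):
    slot = suggestion["MeetingTimeSlot"]
    print(slot["Start"]["DateTime"], "-", slot["End"]["DateTime"])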
